diff --git "a/abs_29K_G/test_abstract_long_2405.04356v1.json" "b/abs_29K_G/test_abstract_long_2405.04356v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.04356v1.json" @@ -0,0 +1,1385 @@ +{ + "url": "http://arxiv.org/abs/2405.04356v1", + "title": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation", + "abstract": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", + "authors": "Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", + "main_content": "Introduction In recent years, multi-modal image generation has achieved remarkable success, driven by the advancements in Generative Adversarial Networks (GANs) [15] and diffusion models (DMs) [11, 18, 48]. Facial image processing has become a popular application for a variety of tasks, including face image generation [21, 39], face editing [6, 12, 30, 36, 37, 46], and style transfer [7, 64]. Many tasks typically utilize the pre-trained StyleGAN [21, 22], which can generate realistic facial images and edit facial attributes by manipulating the latent space using GAN inversion [39, 42, 58]. 
In these tasks, using multiple modalities as conditions is becoming a popular approach, which improves the user's controllability in generating realistic face images.
*Corresponding author. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF2021R1A2C2006703).
Figure 1. We present a method to map the diffusion features to the latent space of a pre-trained GAN, which enables diverse tasks in multi-modal face image generation and style transfer. Our method can be applied to 2D and 3D-aware face image generation.
However, existing GAN inversion methods [51, 58] align poorly with the inputs because they neglect the correlation between the multi-modal inputs. They struggle to map the different modalities into the latent space of the pre-trained GAN, for example by mixing latent codes or by optimizing the latent code converted from a given image according to the input text. Recently, DMs have attracted increasing attention in multi-modal image generation thanks to their training stability and the flexibility of using multiple modalities as conditions. DMs [23, 53, 54] can control multiple modalities and render diverse images by manipulating the latent or attention features across the time steps. However, existing text-to-image DMs rely on an autoencoder and a text encoder, such as CLIP [41], trained on unstructured datasets collected from the web [40, 45], which may lead to unrealistic image generation. Moreover, some approaches address multi-modal face image generation in the 3D domain. In GAN inversion [14, 51], multi-view images can be easily acquired by manipulating the latent code of pre-trained 3D GANs. DMs, in contrast, are inefficient at learning 3D representations and struggle to generate multi-view images directly due to the lack of 3D ground-truth (GT) data for training [32, 47]; they can instead be used as a tool to acquire training datasets for 3D-aware image generation [24, 33]. In this paper, we present a versatile face generative model that uses text and visual inputs. We propose an approach that takes the strengths of DMs and GANs and generates photo-realistic images with flexible control over facial attributes, and it can be adapted to 2D and 3D domains, as illustrated in Figure 1. Our method employs a latent mapping strategy that maps the diffusion features into the latent space of a pre-trained GAN using multi-denoising-step learning, producing a latent code that encodes the details of the text prompts and visual inputs. In summary, our main contributions are: (i) We present a novel method to link a pre-trained GAN (StyleGAN [22], EG3D [4]) and DM (ControlNet [62]) for multi-modal face image generation. (ii) We propose a simple mapping network that links the pre-trained GAN's and DM's latent spaces and an attention-based style modulation network that enables the use of meaningful features related to multi-modal inputs.
(iii) We present a multi-denoising-step training strategy that enhances the model's ability to capture the textual and structural details of multi-modal inputs. (iv) Our model can be applied to both 2D- and 3D-aware face image generation without additional data or loss terms and outperforms existing DM- and GAN-based methods.
2. Related Work
2.1. GAN Inversion
GAN inversion approaches have gained significant popularity in the face image generation task [7, 31, 51, 59] using pre-trained 2D GANs such as StyleGAN [21, 22]. This approach has been extended to 3D-aware image generation [27, 60, 61] by integrating 3D GANs such as EG3D [4]. GAN inversion can be categorized into learning-based, optimization-based, and hybrid methods. Optimization-based methods [44, 67] estimate the latent code by minimizing the difference between an output and an input image. Learning-based methods [1, 52] train an encoder that maps an input image into the latent space of the pre-trained GAN. Hybrid methods [58, 66] combine these two, producing an initial latent code and then refining it with additional optimization. Our work employs learning-based GAN inversion, where a DM serves as the encoder. We produce latent codes by leveraging semantic features in the denoising U-Net, which can generate images with controlled facial attributes.
2.2. Diffusion Model for Image Generation
Many studies have introduced text-to-image diffusion models [36, 43, 45] that generate images by encoding multi-modal inputs, such as text and image, into latent features via foundation models [41] and mapping them to the features of a denoising U-Net via an attention mechanism. ControlNet [62] performs image generation by incorporating various visual conditions (e.g., semantic masks, scribbles, edges) and text prompts. Image editing models using DMs [16, 20, 26, 28, 34] have exhibited excellent performance by controlling the latent features or the attention maps of a denoising U-Net. Moreover, DMs can generate and edit images by adjusting latent features over multiple denoising steps [2]. We focus on using the latent features of a DM, including intermediate features and cross-attention maps, across denoising steps to link them with the latent space of a GAN and develop a multi-modal face image generation task.
2.3. Multi-Modal Face Image Generation
Face generative models have progressed by incorporating various modalities, such as text [25], semantic masks [38, 55], sketches [5, 9], and audio [65]. Several methods adopt StyleGAN, which can generate high-quality face images and edit facial attributes by controlling the style vectors. Transformer-based models [3, 13] have also been utilized, improving the performance of face image generation by handling the correlation between multi-modal conditions using image quantization. A primary challenge in face generative models is to modify the facial attributes based on the given conditions while minimizing changes to other attributes. Some methods [39, 57] edit facial attributes by manipulating the latent codes of GAN models. TediGAN [58] controls multiple conditions by leveraging an encoder to convert an input image into latent codes and optimizing them with a pre-trained CLIP model. Recent works [19, 35] use DMs to exploit the flexibility of taking multiple modalities as conditions and generate facial images directly from the DMs. Unlike existing methods, we use the pre-trained DM [62] as an encoder to further produce the latent codes for the pre-trained GAN models.
3. Method
3.1. Overview
Figure 2 illustrates the overall pipeline of our approach. During the reverse diffusion process, we use the middle and decoder blocks of a denoising U-Net in ControlNet [62] as an encoder E. A text prompt c, along with a visual condition x, are taken as input to the denoising U-Net. Subsequently, E produces the feature maps h from the middle block, and the intermediate features f and the cross-attention maps a from the decoder blocks.
Figure 2. Overview of our method. We use a diffusion-based encoder E, the middle and decoder blocks of a denoising U-Net, that extracts the semantic features h_t, intermediate features f_t, and cross-attention maps a_t at denoising step t. We present the mapping network M (Sec. 3.2) and the attention-based style modulation network (AbSMNet) T (Sec. 3.3) that are trained across t (Sec. 3.4). M converts h_t into the mapped latent code w^m_t, and T uses f_t and a_t to control the facial attributes from the text prompt c and visual input x. The modulation codes w^γ_t and w^β_t are then used to scale and shift w^m_t to produce the final latent code, w_t, that is fed to the pre-trained GAN G. We obtain the generation output I′_t from our model Y, and we use the image I^d_0 from the U-Net after the entire denoising process for training T (Sec. 3.4). Note that only the networks drawn with dashed lines are trainable, while the others are frozen.
h is then fed into the mapping network M, which transforms the rich semantic features into a latent code w^m. The Attention-based Style Modulation Network (AbSMNet), T, takes f and a as input to generate the modulation latent codes, w^γ and w^β, that determine facial attributes related to the inputs. The latent code w is then forwarded to the pre-trained GAN G that generates the output image I′. Our model is trained across multiple denoising steps, and we use the denoising step t to indicate the features and images obtained at each denoising step. With this pipeline, we aim to estimate the latent code, w*_t, that is used as input to G to render a GT image, I_gt:
w*_t = arg min_{w_t} L(I_gt, G(w_t)),  (1)
where L(·, ·) measures the distance between I_gt and the rendered image, I′ = G(w_t). We employ learning-based GAN inversion that estimates the latent code from an encoder to reconstruct an image according to the given inputs.
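To make the pipeline concrete, the following PyTorch-style sketch shows one training step of the objective in Eq. (1); the encoder outputs, the mapping network, the modulation network, and the generator are passed in as placeholders, and their interfaces are assumptions for illustration rather than the exact implementation.

```python
import torch

def inversion_step(h_t: torch.Tensor, f_t, a_t, I_gt: torch.Tensor,
                   mapping_net, absm_net, generator, loss_fn, optimizer):
    """One optimization step of Eq. (1); mapping_net, absm_net, and generator
    stand in for M, T, and the frozen pre-trained GAN G (assumed interfaces)."""
    w_m = mapping_net(h_t)                # mapped latent code w^m_t
    w_gamma, w_beta = absm_net(f_t, a_t)  # modulation codes w^gamma_t, w^beta_t
    w_t = w_m * w_gamma + w_beta          # scale-and-shift modulation (Sec. 3.3)
    I_pred = generator(w_t)               # rendered image G(w_t)
    loss = loss_fn(I_pred, I_gt)          # distance L(I_gt, G(w_t))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```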
3.2. Mapping Network
Our mapping network M aims to build a bridge between the latent space of the diffusion-based encoder E and that of the pre-trained GAN G. E uses a text prompt and a visual input, and these textual and image embeddings are aligned by the cross-attention layers [62]. The feature maps h from the middle block of the denoising U-Net particularly contain rich semantics that resemble the latent space of the generator [28]. Here we establish the link between the latent spaces of E and G by using h_t across the denoising steps t. Given h_t, we design M to produce a 512-dimensional latent code w^m_t ∈ R^{L×512} that can be mapped to the latent space of G:
w^m_t = M(h_t).  (2)
M is designed based on the structure of the map2style block in pSp [42], as seen in Figure 2. This network consists of convolutional layers that downsample the feature maps and a fully connected layer that produces the latent code w^m_t.
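A minimal sketch of such a mapping module is given below, loosely following the map2style design described above; the input channel count, the number of style vectors L, and the exact layer configuration are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps mid-block features h_t to L style vectors of dimension 512 (Eq. 2)."""
    def __init__(self, in_channels=1280, num_latents=14, latent_dim=512):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 512, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),       # collapse the remaining spatial dims
        )
        self.fc = nn.Linear(512, num_latents * latent_dim)
        self.num_latents, self.latent_dim = num_latents, latent_dim

    def forward(self, h_t):                # h_t: (B, C, H, W) mid-block features
        x = self.convs(h_t).flatten(1)     # (B, 512)
        w_m = self.fc(x)                   # (B, L*512)
        return w_m.view(-1, self.num_latents, self.latent_dim)  # w^m_t in R^{L x 512}
```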
3.3. Attention-based Style Modulation Network
By training M with learning-based GAN inversion, we can obtain w^m_t and use it as input to the pre-trained GAN for image generation. However, we observe that h_t is limited in capturing fine details of the facial attributes due to its limited spatial resolution and the data loss during encoding. Conversely, the feature maps of the DM's decoder blocks show rich semantic representations [53], benefiting from aggregating features from the DM's encoder blocks via skip connections. We hence propose a novel Attention-based Style Modulation Network (AbSMNet), T, that produces style modulation latent codes, w^γ_t, w^β_t ∈ R^{L×512}, from f_t and a_t of E. To better reflect the multi-modal representations in the final latent code w_t, we modulate w^m_t from M using w^γ_t and w^β_t, as shown in Figure 2. We extract intermediate features, f_t = {f^n_t}_{n=1}^N, from N different blocks, and cross-attention maps, a_t = {a^k_t}_{k=1}^K, from K different cross-attention layers of the n-th block, in E, the decoder stage of the denoising U-Net.
Figure 3. Visualization of cross-attention maps and intermediate feature maps. (a) represents the semantic relation between an input text and an input semantic mask in the spatial domain; the meaningful representations of the inputs appear across all denoising steps and N different blocks. (b) shows the N cross-attention maps, A_t, at denoising steps t = T and t = 0. (c) shows an example of the refined intermediate feature map F̂^1_T at the 1st block and t = T, which is emphasized according to the input multi-modal conditions. The red and yellow regions of the maps indicate higher attention scores. As the denoising step approaches T, text-relevant features appear more clearly, and as it approaches 0, the features of the visual input are better preserved.
The discriminative representations are captured more faithfully because f_t consists of N multi-scale feature maps that cover facial attributes of different sizes, which allows for finer control over face attributes. For simplicity, we upsample each intermediate feature map of f_t to same-sized feature maps F_t = {F^n_t}_{n=1}^N, where F^n_t ∈ R^{H×W×C_n} has H, W, and C_n as height, width, and depth. Moreover, a_t is used to amplify the controlled facial attributes, as it incorporates semantically related information from the text and visual input. To match the dimensions with F_t, we convert a_t to A_t = {A^n_t}_{n=1}^N, where A^n_t ∈ R^{H×W×C_n}, by max-pooling the outputs of the cross-attention layers in each decoder block and upsampling the max-pooled outputs. To capture global representations, we additionally compute Ā_t ∈ R^{H×W×1} by depth-wise averaging the max-pooled output of a_t over each word in the text prompt and upsampling it. As illustrated in Figures 3 (a) and (b), A_t and Ā_t highlight the specific regions aligned with the input text prompt and visual input, such as a semantic mask, across the denoising steps t. By a pixel-wise multiplication between F_t and A_t, we obtain the refined intermediate feature maps F̂_t that emphasize the representations related to the multi-modal inputs, as shown in Figure 3 (c). The refined average feature map F̂̄_t ∈ R^{H×W×1} is also obtained by multiplying Ā_t with F̄_t, where F̄_t ∈ R^{H×W×1} is obtained by first averaging the feature maps in F_t = {F^n_t}_{n=1}^N and then depth-wise averaging the outputs. F̂_t and F̂̄_t distinguish text- and structure-relevant semantic features, which improves the alignment with the inputs.
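The feature refinement described above can be sketched as follows; it assumes the per-block feature maps and the max-pooled cross-attention maps have already been extracted and reshaped to spatial tensors, and the global branch is a simplified approximation for illustration.

```python
import torch
import torch.nn.functional as F

def refine_features(feats, attns, size=(64, 64)):
    """feats / attns: lists of N tensors shaped (B, C_n, h_n, w_n) per decoder block
    (assumed inputs). Returns the refined maps F̂_t and the refined global map."""
    F_t, A_t = [], []
    for f, a in zip(feats, attns):
        F_t.append(F.interpolate(f, size=size, mode="bilinear", align_corners=False))
        A_t.append(F.interpolate(a, size=size, mode="bilinear", align_corners=False))
    F_hat = [f * a for f, a in zip(F_t, A_t)]            # pixel-wise refinement
    # global branch (approximation): depth-wise averages of features and attentions
    F_bar = torch.stack([f.mean(dim=1, keepdim=True) for f in F_t]).mean(dim=0)
    A_bar = torch.stack([a.mean(dim=1, keepdim=True) for a in A_t]).mean(dim=0)
    F_bar_hat = F_bar * A_bar                            # refined global map
    return F_hat, F_bar_hat
```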
We use F̂_t and F̂̄_t as input to the style modulation network that produces the modulation codes w^γ_t and w^β_t, as shown in Figure 4. We capture both local and global features by using F̂_t, which consists of feature maps representing different local regions of the face, and F̂̄_t, which carries a representation of the entire face.
Figure 4. Style modulation network in T. The refined intermediate feature maps F̂_t and F̂̄_t are used to capture local and global semantic representations, respectively. They are fed into the scale and shift networks, and the weighted summations of their outputs are used as input to the map2style network, which finally generates the scale and shift modulation latent codes, w^γ_t and w^β_t.
We concatenate the N intermediate feature maps of F̂_t, concat(F̂^1_t, ..., F̂^N_t), and forward the result to the scale and shift networks, which consist of convolutional layers and Leaky ReLU, forming the local modulation feature maps F̂^γl_t and F̂^βl_t. We also estimate global modulation feature maps, F̂^γg_t and F̂^βg_t, by feeding F̂̄_t to the scale and shift networks. The final scale, F̂^γ_t, and shift, F̂^β_t, feature maps are estimated by the weighted summations:
F̂^γ_t = α^γ_t F̂^γl_t + (1 − α^γ_t) F̂^γg_t,
F̂^β_t = α^β_t F̂^βl_t + (1 − α^β_t) F̂^βg_t,  (3)
where α^γ_t and α^β_t are learnable weight parameters. Through the map2style module, we then convert F̂^γ_t and F̂^β_t into the final scale, w^γ_t ∈ R^{L×512}, and shift, w^β_t ∈ R^{L×512}, latent codes. With these modulation latent codes, we achieve more precise control over facial details while staying consistent with the multi-modal inputs at the pixel level. Finally, the mapped latent code w^m_t from M is modulated by w^γ_t and w^β_t from T to obtain the final latent code w_t, which is used to generate the image I′_t as follows:
w_t = w^m_t ⊙ w^γ_t ⊕ w^β_t,  (4)
I′_t = G(w_t).  (5)
Figure 5. Visual examples of 2D face image generation using a text prompt and a semantic mask. For each semantic mask, we use three different text prompts (a)-(c), resulting in different output images (a)-(c).
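The weighted local/global modulation in Eqs. (3)-(5) could be wired up as in the sketch below; the convolutional branch widths and the reuse of a map2style-style block are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """Produces w^gamma_t and w^beta_t and applies them to w^m_t (Eqs. 3-5)."""
    def __init__(self, in_channels, map2style_scale, map2style_shift):
        super().__init__()
        def branch(c_in):
            return nn.Sequential(nn.Conv2d(c_in, 512, 3, padding=1), nn.LeakyReLU(0.2))
        self.scale_local, self.shift_local = branch(in_channels), branch(in_channels)
        self.scale_global, self.shift_global = branch(1), branch(1)
        self.alpha_gamma = nn.Parameter(torch.tensor(0.5))   # learnable weights
        self.alpha_beta = nn.Parameter(torch.tensor(0.5))
        self.map2style_scale, self.map2style_shift = map2style_scale, map2style_shift

    def forward(self, local_feats, global_feat, w_m):
        # local_feats: channel-wise concatenation of the refined maps; global_feat: F̂̄_t
        F_gamma = (self.alpha_gamma * self.scale_local(local_feats)
                   + (1 - self.alpha_gamma) * self.scale_global(global_feat))  # Eq. (3)
        F_beta = (self.alpha_beta * self.shift_local(local_feats)
                  + (1 - self.alpha_beta) * self.shift_global(global_feat))
        w_gamma = self.map2style_scale(F_gamma)   # w^gamma_t in R^{L x 512}
        w_beta = self.map2style_shift(F_beta)     # w^beta_t  in R^{L x 512}
        return w_m * w_gamma + w_beta             # Eq. (4): final latent code w_t
```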
3.4. Loss Functions
To optimize M and T, we use a reconstruction loss, a perceptual loss, and an identity loss for image generation, together with a regularization loss [42] that encourages the latent codes to be closer to the average latent code w̄. For training M, we use the GT image I_gt as a reference to encourage the latent code w^m_t to generate a photo-realistic image:
L_M = λ^m_0 ∥I_gt − G(w^m_t)∥_2 + λ^m_1 ∥F(I_gt) − F(G(w^m_t))∥_2 + λ^m_2 (1 − cos(R(I_gt), R(G(w^m_t)))) + λ^m_3 ∥E(z_t, t, x, c) − w̄∥_2,  (6)
where R(·) is the pre-trained ArcFace network [8], F(·) is the feature extraction network [63], z_t is the noisy image, and the hyper-parameters λ^m_(·) guide the effect of the losses. Note that we freeze T while training M. For training T, we use I^d_0, produced by the encoder E, in the reconstruction and perceptual losses. With these terms, the loss L_T encourages the network to control facial attributes while preserving the identity of I_gt:
L_T = λ^s_0 ∥I^d_0 − G(w_t)∥_2 + λ^s_1 ∥F(I^d_0) − F(G(w_t))∥_2 + λ^s_2 (1 − cos(R(I_gt), R(G(w_t)))) + λ^s_3 ∥E(z_t, t, x, c) − w̄∥_2,  (7)
where the hyper-parameters λ^s_(·) guide the effect of the losses. As with Equation (6), we freeze M while training T. We further introduce a multi-step training strategy that considers the evolution of the feature representation in E over the denoising steps. We observe that E tends to focus more on text-relevant features at an early step, t = T, and on structure-relevant features at a later step, t = 0. Figure 3 (b) shows the attention maps Ā varying across the denoising steps. As the attention maps indicate, we can capture textual and structural features by varying the denoising step. To effectively capture the semantic details of the multi-modal conditions, our model is trained across multiple denoising steps.
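A hedged sketch of how the objectives in Eqs. (6) and (7) could be assembled is shown below; feat_net and id_net stand in for the perceptual feature extractor F(·) and the ArcFace identity network R(·), and the argument conventions are assumptions for illustration.

```python
import torch.nn.functional as F

def inversion_loss(I_ref, I_id, I_pred, w, w_bar, feat_net, id_net, lambdas):
    """I_ref: reconstruction target (I_gt for L_M, I^d_0 for L_T);
    I_id: identity reference (I_gt in both losses); lambdas: four weights."""
    l_rec = F.mse_loss(I_pred, I_ref)                                  # reconstruction
    l_per = F.mse_loss(feat_net(I_pred), feat_net(I_ref))              # perceptual
    l_id = 1.0 - F.cosine_similarity(id_net(I_pred), id_net(I_id), dim=-1).mean()
    l_reg = ((w - w_bar) ** 2).mean()                                  # latent regularizer
    return (lambdas[0] * l_rec + lambdas[1] * l_per
            + lambdas[2] * l_id + lambdas[3] * l_reg)
```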
4. Experiments
4.1. Experimental Setup
We use ControlNet [62] as the diffusion-based encoder that receives multi-modal conditions, including text and visual conditions such as a semantic mask and a scribble map. StyleGAN [22] and EG3D [4] are used as the pre-trained 2D and 3D GANs, respectively. See the Supplementary Material for the training details, the network architecture, and additional results.
Datasets. We employ the CelebAMask-HQ [29] dataset, comprising 30,000 face RGB images and annotated semantic masks with 19 facial-component categories such as skin, eyes, and mouth. We also use the textual descriptions provided by [58], which describe facial attributes such as black hair and sideburns, corresponding to the CelebAMask-HQ dataset. For the face image generation task using a scribble map, we obtain scribble maps by applying PiDiNet [49, 50] to the RGB images in CelebAMask-HQ. We additionally compute camera parameters based on [4, 10] for 3D-aware image generation.
Figure 6. Visual examples of 3D-aware face image generation using a text prompt and a semantic mask. We show the images generated from the inputs at arbitrary viewpoints.
Input conditions | Method | Model | Domain | FID↓ | LPIPS↓ | SSIM↑ | ID↑ | ACC↑ | mIoU↑
Text + semantic mask | TediGAN [58] | GAN | 2D | 54.83 | 0.31 | 0.62 | 0.63 | 81.68 | 40.01
Text + semantic mask | IDE-3D [51] | GAN | 3D | 39.05 | 0.40 | 0.41 | 0.54 | 47.07 | 10.98
Text + semantic mask | UaC [35] | Diffusion | 2D | 45.87 | 0.38 | 0.59 | 0.32 | 81.49 | 42.68
Text + semantic mask | ControlNet [62] | Diffusion | 2D | 46.41 | 0.41 | 0.53 | 0.30 | 82.42 | 42.77
Text + semantic mask | Collaborative [19] | Diffusion | 2D | 48.23 | 0.39 | 0.62 | 0.31 | 74.06 | 30.69
Text + semantic mask | Ours | GAN | 2D | 46.68 | 0.30 | 0.63 | 0.76 | 83.41 | 43.82
Text + semantic mask | Ours | GAN | 3D | 44.91 | 0.28 | 0.64 | 0.78 | 83.05 | 43.74
Text + scribble map | ControlNet [62] | Diffusion | 2D | 93.26 | 0.52 | 0.25 | 0.21 | - | -
Text + scribble map | Ours | GAN | 2D | 55.60 | 0.32 | 0.56 | 0.72 | - | -
Text + scribble map | Ours | GAN | 3D | 48.76 | 0.34 | 0.49 | 0.62 | - | -
Table 1. Quantitative results of multi-modal face image generation on CelebAMask-HQ [29] with annotated text prompts [58].
Comparisons. We compare our method with GAN-based models, such as TediGAN [58] and IDE-3D [51], and DM-based models, such as Unite and Conquer (UaC) [35], ControlNet [62], and Collaborative diffusion (Collaborative) [19], for the face generation task using a semantic mask and a text prompt. IDE-3D is trained with a CLIP loss term, like TediGAN, to incorporate a text prompt for 3D-aware face image generation. ControlNet is used for face image generation using a text prompt and a scribble map. We use the official code provided by the authors and downsample the results to 256 × 256 for comparison.
Evaluation Metrics. For quantitative comparisons, we evaluate image quality and semantic consistency using 2k sampled semantic mask-text and scribble map-text prompt pairs. Fréchet Inception Distance (FID) [17], LPIPS [63], and the Multi-scale Structural Similarity (MS-SSIM) [56] are employed to evaluate visual quality and diversity. We also compute the mean ID similarity score (ID) [8, 57] before and after applying a text prompt. Additionally, we assess the alignment accuracy between the input semantic masks and the results using mean Intersection-over-Union (mIoU) and pixel accuracy (ACC) for the face generation task using a semantic mask.
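For completeness, the mask-alignment metrics could be computed as in the sketch below, assuming a face parser has produced an integer label map for the generated image (the parser itself is not part of this sketch).

```python
import torch

def mask_alignment(pred, target, num_classes=19):
    """pred/target: integer label maps of the same shape (assumed inputs)."""
    acc = (pred == target).float().mean()                 # pixel accuracy (ACC)
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().float()
        union = ((pred == c) | (target == c)).sum().float()
        if union > 0:
            ious.append(inter / union)
    miou = torch.stack(ious).mean() if ious else torch.tensor(0.0)
    return acc.item(), miou.item()                        # (ACC, mIoU)
```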
4.2. Results
Qualitative Evaluations. Figure 5 shows the visual comparisons between ours and two existing methods for 2D face image generation using a text prompt and a semantic mask as input. We use the same semantic mask with different text prompts (a)-(c). TediGAN produces results consistent with the text prompt, as the latent codes are optimized using the input text prompt. However, the results are inconsistent with the input semantic mask, as highlighted in the red boxes. UaC shows good facial alignment with the input semantic mask, but the results contain unexpected attributes, such as glasses, that are not indicated in the inputs. Collaborative and ControlNet produce inconsistent, blurry, and unrealistic images. Our model is capable of preserving semantic consistency with the inputs and generating realistic facial images. As shown in Figure 5, our method preserves the structure of the semantic mask, such as the hairline, face position, and mouth shape, while changing the attributes through a text prompt.
Figure 6 compares our method with IDE-3D [51] to validate the performance of 3D-aware face image generation using a semantic mask and a text prompt. We use the same semantic mask with different text prompts in Figures 6 (a) and (b), and the same text prompt with different semantic masks in Figures 6 (c) and (d). The results of IDE-3D are well aligned with the semantic mask when the mask shows a frontal face. However, IDE-3D fails to produce accurate results when a non-frontal face mask is used as input. Moreover, its results cannot reflect the text prompt. Our method can capture the details provided by the input text prompts and semantic masks, even in the 3D domain.
Figure 7. Visual examples of 3D-aware face image generation using text prompts and scribble maps. Using the text prompts (1-4) and their corresponding (a) scribble maps, we compare the results of (b) ControlNet with (c) the multi-view images generated by ours.
Figure 7 shows visual comparisons with ControlNet on 2D face generation from a text prompt and a scribble map. The results from ControlNet and our method are consistent with both the text prompt and the scribble map. ControlNet, however, tends to over-emphasize the characteristic details related to the input conditions. Our method can easily adapt to the pre-trained 3D GAN and produce photo-realistic multi-view images from various viewpoints.
Quantitative Evaluations. Table 1 reports the quantitative results on CelebAMask-HQ with text prompts [58]. Our method using text prompts and semantic masks shows performance increases in all metrics in the 2D and 3D domains, compared with TediGAN and UaC. Our model using the 2D GAN significantly improves the LPIPS, ID, ACC, and mIoU scores, surpassing TediGAN, UaC, ControlNet, and Collaborative. This demonstrates our method's strong ability to generate photo-realistic images while better reflecting the input multi-modal conditions. For 3D-aware face image generation using a text prompt and a semantic mask, it is reasonable that IDE-3D shows the highest FID score, as the method additionally uses an RGB image as input to estimate the latent code for face generation. Our LPIPS, SSIM, and ID scores are significantly better than those of IDE-3D, by 0.116, 0.23, and 0.24, respectively. Our method using the 3D GAN also exhibits superior ACC and mIoU scores for the 3D face generation task compared to IDE-3D, with score differences of 35.98% and 32.76%, likely due to its ability to reflect textual representations in spatial information. In face image generation using a text prompt and a scribble map, our method outperforms ControlNet in FID, LPIPS, SSIM, and ID scores in both the 2D and 3D domains. Note that the ACC and mIoU scores are only applicable to semantic mask-based methods.
4.3. Ablation Study
We conduct ablation studies to validate the effectiveness of our contributions, including the mapping network M, the AbSM network T, and the loss functions L_M and L_T.
Figure 8. Effect of M and T. (b) shows the results using only M, and (c) shows the effect of the cross-attention maps (A and Ā) in T. The major changes are highlighted with the white boxes.
Method: M T A_t I_gt I^d_0 | FID↓ LPIPS↓ ID↑ ACC↑
(a) ✓ ✓ ✓ | 62.08 0.29 0.62 81.09
(b) ✓ ✓ ✓ ✓ | 48.68 0.28 0.66 82.86
(c) ✓ ✓ ✓ ✓ | 54.27 0.31 0.58 80.58
(d) ✓ ✓ ✓ ✓ | 61.60 0.29 0.62 80.04
(e) ✓ ✓ ✓ ✓ ✓ | 44.91 0.28 0.78 83.05
Table 2. Ablation analysis on 3D-aware face image generation using a text prompt and a semantic mask. We compare (a) and (b) with (e) to show the effect of our style modulation network, and (c) and (d) with (e) to analyze the effect of I_gt and I^d_0 in model training.
Effectiveness of M and T. We conduct experiments with different settings to assess the effectiveness of M and T.
Figure 9. Effect of using I^d_0 from the denoising U-Net and the GT image I_gt in model training. Using text prompts (1, 2) with (a) the semantic mask, we show face images from our model trained with (b) I^d_0, (c) I_gt, and (d) both.
We also show the advantages of using cross-attention maps in our model. The quantitative and qualitative results are presented in Table 2 and Figure 8, respectively. When using only M, we can generate face images that roughly preserve the structure of a given semantic mask in Figure 8 (a), including the outline of the facial components (e.g., face, eyes) in Figure 8 (b). On the other hand, T enables the model to express facial attribute details effectively, such as hair color and an open mouth, based on the multi-modal inputs in Figure 8 (c). The FID and ACC scores in Table 2 (b) also improve over the model using only M. We further examine the impact of adding the cross-attention maps to T for style modulation. Figure 8 (d) shows how the attention-based modulation approach enhances the quality of the results, particularly in terms of the sharpness of the desired face attributes and the overall consistency between the generated image and the multi-modal conditions. Table 2 (e) demonstrates the effectiveness of our method by showing improvements in FID, LPIPS, ID, and ACC. Our method, including both M and T with cross-attention maps, significantly improves the FID, showing our model's ability to generate high-fidelity images. The improvement of the ID score indicates that the cross-attention maps help apply the details of the input conditions to the relevant facial components.
Model Training. We analyze the effect of the loss terms L_M and L_T by comparing the performance of the model trained using either I^d_0 from the denoising U-Net or the GT image I_gt. The model trained using I^d_0 produces the images in Figure 9 (b), which more closely reflect the multi-modal conditions (a), such as "goatee" and "hair contour". In Table 2 (c), the ACC score of this model is higher than that of the model trained only with I_gt in Table 2 (d). The images generated by the model trained with I_gt in Figure 9 (c) are more perceptually realistic, as evidenced by the lower LPIPS score compared to the model trained with I^d_0 in Table 2 (c) and (d). Using I_gt also preserves more condition-irrelevant features, as inferred from the ID scores in Table 2 (c) and (d). In particular, our method combines the strengths of the two models, as shown in Figure 9 (d) and Table 2 (e).
Figure 10. Visual examples of 3D face style transfer. Our method generates stylized multi-view images by mapping the latent features of the DM and GAN.
4.4. Limitations and Future Works
Our method can be extended to multi-modal face style transfer (e.g.
face \u2192Greek statue) by mapping the latent spaces of DM and GAN without CLIP losses and additional dataset, as shown in Figure 10. For the 3D-aware face style transfer task, we train our model using Id 0 that replaces GT image Igt in our loss terms. This method, however, is limited as it cannot transfer extremely distinct style attributes from the artistic domain to the photo-realistic domain of GAN. To better transfer the facial style in the 3D domain, we will investigate methods to map the diffusion features related to the input pose into the latent space of GAN in future works. 5.", + "additional_graph_info": { + "graph": [ + [ + "Jihyun Kim", + "Dongsu Ryu" + ], + [ + "Jihyun Kim", + "Hyesung Kang" + ], + [ + "Jihyun Kim", + "Soohyun Kim" + ], + [ + "Dongsu Ryu", + "Hyesung Kang" + ], + [ + "Dongsu Ryu", + "Santabrata Das" + ], + [ + "Dongsu Ryu", + "Indranil Chattopadhyay" + ], + [ + "Hyesung Kang", + "Renyue Cen" + ], + [ + "Hyesung Kang", + "Vahe Petrosian" + ], + [ + "Soohyun Kim", + "Seungryong Kim" + ], + [ + "Soohyun Kim", + "Junho Kim" + ], + [ + "Soohyun Kim", + "Taekyung Kim" + ], + [ + "Soohyun Kim", + "Hwan Heo" + ], + [ + "Soohyun Kim", + "Jiyoung Lee" + ] + ], + "node_feat": { + "Jihyun Kim": [ + { + "url": "http://arxiv.org/abs/2405.04356v1", + "title": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation", + "abstract": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", + "authors": "Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction In recent years, multi-modal image generation has achieved remarkable success, driven by the advancements in Generative Adversarial Networks (GANs) [15] and diffusion models (DMs) [11, 18, 48]. Facial image processing has become a popular application for a variety of tasks, including face image generation [21, 39], face editing [6, 12, 30, 36, 37, 46], and style transfer [7, 64]. Many tasks typically utilize the pre-trained StyleGAN [21, 22], which can generate realistic facial images and edit facial attributes by manipulating the latent space using GAN inversion [39, 42, 58]. In these tasks, using multiple modalities as conditions is becoming a popular approach, which improves the user\u2019s controllability in generating realistic face images. 
However, existing GAN *Corresponding author This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF2021R1A2C2006703). rebuttal (a) Oil painting (b) Watercolor Visual input 2D face image generation 3D-aware face image generation Face style transfer \u201cThe woman has bangs, brown hair. She is smiling.\u201d \u201cGreek statue\u201d \u201csilver hair Elf\u201d \u201cCartoon style\u201d Overview of our method \u201cThe chubby man has receding hairline, eyeglasses, gray hair, and double chin.\u201d \u201cWatercolor painting\u201d GAN Ours Diffusion \u201cShe has blond hair, straight hair, and wears heavy makeup.\u201d Visual condition Text condition Figure 1. We present a method to map the diffusion features to the latent space of a pre-trained GAN, which enables diverse tasks in multi-modal face image generation and style transfer. Our method can be applied to 2D and 3D-aware face image generation. inversion methods [51, 58] have poor alignment with inputs as they neglect the correlation between multi-modal inputs. They struggle to map the different modalities into the latent space of the pre-trained GAN, such as by mixing the latent codes or optimizing the latent code converted from a given image according to the input text. Recently, DMs have increased attention in multi-modal image generation thanks to the stability of training and the flexibility of using multiple modalities as conditions. DMs [23, 53, 54] can control the multiple modalities and render diverse images by manipulating the latent or attention features across the time steps. However, existing textto-image DMs rely on an autoencoder and text encoder, such as CLIP [41], trained on unstructured datasets collected from the web [40, 45] that may lead to unrealistic arXiv:2405.04356v1 [cs.CV] 7 May 2024 \fimage generation. Moreover, some approaches address multi-modal face image generation in a 3D domain. In GAN inversion [14, 51], multi-view images can be easily acquired by manipulating the latent code with pre-trained 3D GANs. While DMs are inefficient in learning 3D representation, which has the challenge to generate multi-view images directly due to the lack of 3D ground-truth (GT) data for training [32, 47]. They can be used as a tool to acquire training datasets for 3D-aware image generation [24, 33]. In this paper, we present a versatile face generative model that uses text and visual inputs. We propose an approach that takes the strengths of DMs and GAN and generates photo-realistic images with flexible control over facial attributes, which can be adapted to 2D and 3D domains, as illustrated in Figure 1. Our method employs a latent mapping strategy that maps the diffusion features into the latent space of a pre-trained GAN using multi-denoising step learning, producing the latent code that encodes the details of text prompts and visual inputs. In summary, our main contributions are: (i) We present a novel method to link a pre-trained GAN (StyleGAN [22], EG3D [4]) and DM (ControlNet [62]) for multi-modal face image generation. (ii) We propose a simple mapping network that links pretrained GAN and DM\u2019s latent spaces and an attentionbased style modulation network that enables the use of meaningful features related to multi-modal inputs. (iii) We present a multi-denoising step training strategy that enhances the model\u2019s ability to capture the textual and structural details of multi-modal inputs. 
(iv) Our model can be applied for both 2Dand 3D-aware face image generation without additional data or loss terms and outperforms existing DMand GAN-based methods. 2. Related Work 2.1. GAN Inversion GAN inversion approaches have gained significant popularity in the face image generation task [7, 31, 51, 59] using the pre-trained 2D GAN, such as StyleGAN [21, 22]. This method has been extended to 3D-aware image generation [27, 60, 61] by integrating 3D GANs, such as EG3D [4]. GAN inversion can be categorized into learning-based, optimization-based, and hybrid methods. Optimization-based methods [44, 67] estimate the latent code by minimizing the difference between an output and an input image. Learning-based methods [1, 52] train an encoder that maps an input image into the latent space of the pre-trained GAN. Hybrid methods [58, 66] combine these two methods, producing an initial latent code and then refining it with additional optimizations. Our work employs a learning-based GAN inversion, where a DM serves as the encoder. We produce latent codes by leveraging semantic features in the denoising U-Net, which can generate images with controlled facial attributes. 2.2. Diffusion Model for Image Generation Many studies have introduced text-to-image diffusion models [36, 43, 45] that generate images by encoding multimodal inputs, such as text and image, into latent features via foundation models [41] and mapping them to the features of denoising U-Net via an attention mechanism. ControlNet [62] performs image generation by incorporating various visual conditions (e.g., semantic mask, scribbles, edges) and text prompts. Image editing models using DMs [16, 20, 26, 28, 34] have exhibited excellent performance by controlling the latent features or the attention maps of a denoising U-Net. Moreover, DMs can generate and edit images by adjusting latent features over multiple denoising steps [2]. We focus on using latent features of DM, including intermediate features and cross-attention maps, across denoising steps to link them with the latent space of GAN and develop a multi-modal face image generation task. 2.3. Multi-Modal Face Image Generation Face generative models have progressed by incorporating various modalities, such as text [25], semantic mask [38, 55], sketch [5, 9], and audio [65]. Several methods adopt StyleGAN, which can generate high-quality face images and edit facial attributes to control the style vectors. The transformer-based models [3, 13] are also utilized, which improves the performance of face image generation by handling the correlation between multi-modal conditions using image quantization. A primary challenge faced in face generative models is to modify the facial attributes based on given conditions while minimizing changes to other attributes. Some methods [39, 57] edit facial attributes by manipulating the latent codes in GAN models. TediGAN [58] controls multiple conditions by leveraging an encoder to convert an input image into latent codes and optimizing them with a pre-trained CLIP model. Recent works [19, 35] use DMs to exploit the flexibility of taking multiple modalities as conditions and generate facial images directly from DMs. Unlike existing methods, we use the pre-trained DM [62] as an encoder to further produce the latent codes for the pre-trained GAN models. 3. Method 3.1. Overview Figure 2 illustrates the overall pipeline of our approach. 
During the reverse diffusion process, we use the middle and decoder blocks of a denoising U-Net in ControlNet [62] as an encoder E. A text prompt c, along with a visual condition x, are taken as input to the denoising U-Net. Subsequently, E produces the feature maps h from the middle block, and \f\ud835\udc300 \ud835\udefe \u2219\u2219\u2219 \ud835\udc61= 0 \ud835\udc61= \ud835\udc47 \ud835\udc3c0 \u2032 \ud835\udc3c0 \ud835\udc51 \ud835\udc210 \ud835\udc3c\ud835\udc47 \u2032 \u2219\u2219\u2219 Conv ReLU \ud835\udc21\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udc5a \ud835\udc300 \ud835\udc300 \ud835\udefd Conv ReLU FC \u0de0 \ud835\udc05\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd \ud835\udc1f0 \ud835\udc300 \ud835\udc5a \ud835\udc50 Reverse Process of Diffusion \ud835\udc1a\ud835\udc61 \ud835\udc1f\ud835\udc61 Max-pool Average Average Upsample \ud835\udc05\ud835\udc61 \ud835\udc00\ud835\udc61 \u0d25 \ud835\udc00\ud835\udc61 \u0d24 \ud835\udc05\ud835\udc61 Style Modulation Network \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \ud835\udc1a0 \ud835\udc50 \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Pixel-wise multiplication Pixel-wise addition Our Model Mapping Network AbSMNet Frozen Figure 2. Overview of our method. We use a diffusion-based encoder E, the middle and decoder blocks of a denoising U-Net, that extracts the semantic features ht, intermediate features ft, and cross-attention maps at at denoising step t. We present the mapping network M (Sec. 3.2) and the attention-based style modulation network (AbSMNet) T (Sec. 3.3) that are trained across t (Sec. 3.4). M converts ht into the mapped latent code wm t , and T uses ft and at to control the facial attributes from the text prompt c and visual input x. The modulation codes w\u03b3 t and w\u03b2 t are then used to scale and shift wm t to produce the final latent code, wt, that is fed to the pre-trained GAN G. We obtain the generation output I\u2032 t from our model Y and we use the image Id 0 from the U-Net after the entire denoising process for training T (Sec. 3.4). Note that only the networks with the dashed line ( ) are trainable, while others are frozen. the intermediate features f and the cross-attention maps a from the decoder blocks. h is then fed into the mapping network M, which transforms the rich semantic feature into a latent code wm. The Attention-based Style Modulation Network (AbSMNet), T , takes f and a as input to generate the modulation latent codes, w\u03b3 and w\u03b2, that determine facial attributes related to the inputs. The latent code w is then forwarded to the pre-trained GAN G that generates the output image I\u2032. Our model is trained across multiple denoising steps, and we use the denoising step t to indicate the features and images obtained at each denoising step. With this pipeline, we aim to estimate the latent code, w\u2217 t , that is used as input to G to render a GT image, Igt: w\u2217 t = arg min wt L(Igt, G(wt)), (1) where L(\u00b7, \u00b7) measures the distance between Igt and the rendered image, I\u2032 = G(wt). We employ learning-based GAN inversion that estimates the latent code from an encoder to reconstruct an image according to given inputs. 3.2. Mapping Network Our mapping network M aims to build a bridge between the latent space of the diffusion-based encoder E and that of the pre-trained GAN G. 
E uses a text prompt and a visual input, and these textual and image embeddings are aligned by the cross-attention layers [62]. The feature maps h from the middle block of the denoising U-Net particularly contain rich semantics that resemble the latent space of the generator [28]. Here we establish the link between the latent spaces of E and G by using ht across the denoising steps t. Given ht, we design M that produces a 512-dimensional latent code wm t \u2208RL\u00d7512 that can be mapped to the latent space of G: wm t = M(ht). (2) M is designed based on the structure of the map2style block in pSp [42], as seen in Figure 2. This network consists of convolutional layers downsampling feature maps and a fully connected layer producing the latent code wm t . 3.3. Attention-based Style Modulation Network By training M with learning-based GAN inversion, we can obtain wm t and use it as input to the pre-trained GAN for image generation. However, we observe that ht shows limitations in capturing fine details of the facial attributes due to its limited spatial resolution and data loss during the encoding. Conversely, the feature maps of the DM\u2019s decoder blocks show rich semantic representations [53], benefiting from aggregating features from DM\u2019s encoder blocks via skip connections. We hence propose a novel Attentionbased Style Modulation Network (AbSMNet), T , that produces style modulation latent codes, w\u03b3 t , w\u03b2 t \u2208RL\u00d7512, by using ft and at from E. To improve reflecting the multimodal representations to the final latent code wt, we modulate wm t from M using w\u03b3 t and w\u03b2 t , as shown in Figure 2. We extract intermediate features, ft = {f n t }N n=1, from N different blocks, and cross-attention maps, at = {ak t }K k=1, from K different cross-attention layers of the n-th block, in E that is a decoder stage of denoising U-Net. The discrim\f(a) Cross-attention maps averaging for all denoising steps t= 0 \ud835\udc61= \ud835\udc47 (b) Cross-attention maps for individual denoising steps \ud835\udc00\ud835\udc61 0 \ud835\udc00\ud835\udc61 1 \ud835\udc00\ud835\udc61 2 \u0d25 \ud835\udc00\ud835\udc61 \ud835\udc00\ud835\udc47 1 \ud835\udc05\ud835\udc47 1 \u0de0 \ud835\udc05\ud835\udc47 1 (c) Example of an intermediate feature map Multi-modal inputs Output \u201cThe person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Figure 3. Visualization of cross-attention maps and intermediate feature maps. (a) represents the semantic relation information between an input text and an input semantic mask in the spatial domain. The meaningful representations of inputs are shown across all denoising steps and N different blocks. (b) represents N different cross-attention maps, At, at denoising steps t = T and t = 0. (c) shows the example of refined intermediate feature map \u02c6 F1 T at 1st block and t = T that is emphasized corresponding to input multi-modal conditions. The red and yellow regions of the map indicate higher attention scores. As the denoising step approaches T, the text-relevant features appear more clearly, and as the denoising step t approaches 0, the features of the visual input are more preserved. inative representations are represented more faithfully because ft consists of N multi-scale feature maps that can capture different sizes of facial attributes, which allows for finer control over face attributes. 
For simplicity, we upsample each intermediate feature map of ft to same size intermediate feature maps Ft = {Fn t }N n=1, where Fn t \u2208RH\u00d7W \u00d7Cn has H, W, and Cn as height, width and depth. Moreover, at is used to amplify controlled facial attributes as it incorporates semantically related information in text and visual input. To match the dimension with Ft, we convert at to At = {An t }N n=1, where An t \u2208RH\u00d7W \u00d7Cn, by max-pooling the output of the cross-attention layers in each decoder block and upsampling the max-pooling outputs. To capture the global representations, we additionally compute \u00af At \u2208RH\u00d7W \u00d71 by depth-wise averaging the max-pooling output of at over each word in the text prompt and upsampling it. As illustrated in Figures 3 (a) and (b), At and \u00af At represent the specific regions aligned with input text prompt and visual input, such as semantic mask, across denoising steps t. By a pixel-wise multiplication between Ft and At, we can obtain the refined intermediate feature maps \u02c6 Ft that emphasize the representations related to multiShift Net \u0de1 \ud835\udc6d\ud835\udc61 \ud835\udefd\ud835\udc54 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefd \ud835\udc6d\ud835\udc61 Weighted sum map2style \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd Scale Net \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe\ud835\udc59 Shift Net Concat Scale Net Shift Net \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe\ud835\udc54 \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd\ud835\udc54 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefe \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd\ud835\udc59 \ud835\udefc\ud835\udc61 \ud835\udefd 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefd \ud835\udefc\ud835\udc61 \ud835\udefe map2style \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \u0de0 \ud835\udc05\ud835\udc61 Weighted sum \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe Figure 4. Style modulation network in T . The refined intermediate feature maps \u02c6 Ft and \u02c6 \u00af Ft are used to capture local and global semantic representations, respectively. They are fed into the scale and shift network, respectively. The weighted summations of these outputs are used as input to the map2style network, which finally generates the scale and shift modulation latent codes, w\u03b3 t , and w\u03b2 t . modal inputs as shown in Figure 3 (c). The improved average feature map \u02c6 \u00af Ft \u2208RH\u00d7W \u00d71 is also obtained by multiplying \u00af At with \u00af Ft, where \u00af Ft \u2208RH\u00d7W \u00d71 is obtained by first averaging the feature maps in Ft = {Fn t }N n=1 and then depth-wise averaging the outputs. \u02c6 Ft and \u02c6 \u00af Ft distinguish textand structural-relevant semantic features, which improves the alignment with the inputs. We use \u02c6 Ft and \u02c6 \u00af Ft as input to the style modulation network that produces the modulation codes w\u03b3 t , and w\u03b2 t as shown in Figure 4. We capture both local and global features by using \u02c6 Ft, which consists of feature maps representing different local regions on the face, and \u02c6 \u00af Ft, which implies representations of the entire face. We concatenate N intermediate feature maps of \u02c6 Ft, concat(\u02c6 F1 t \u00b7 \u00b7 \u00b7 \u02c6 FN t ), and it is forward to the scale and shift networks that consist of convolutional layers and Leaky ReLU, forming the local modulation feature maps, \u02c6 F\u03b3l t and \u02c6 F\u03b2l t . 
We also estimate global modulation feature maps, $\hat{F}^{\gamma g}_t$ and $\hat{F}^{\beta g}_t$, by feeding $\hat{\bar{F}}_t$ to the scale and shift networks. The final scale, $\hat{F}^{\gamma}_t$, and shift, $\hat{F}^{\beta}_t$, feature maps are estimated by the weighted summations: $\hat{F}^{\gamma}_t = \alpha^{\gamma}_t \hat{F}^{\gamma l}_t + (1 - \alpha^{\gamma}_t)\hat{F}^{\gamma g}_t$, $\hat{F}^{\beta}_t = \alpha^{\beta}_t \hat{F}^{\beta l}_t + (1 - \alpha^{\beta}_t)\hat{F}^{\beta g}_t$, (3) where $\alpha^{\gamma}_t$ and $\alpha^{\beta}_t$ are learnable weight parameters. Through the map2style module, we then convert $\hat{F}^{\gamma}_t$ and $\hat{F}^{\beta}_t$ into the final scale, $w^{\gamma}_t \in \mathbb{R}^{L \times 512}$, and shift, $w^{\beta}_t \in \mathbb{R}^{L \times 512}$, latent codes. With these modulation latent codes, we achieve more precise control over facial details while remaining consistent with the multi-modal inputs at the pixel level. Finally, the mapped latent code $w^m_t$ from M is modulated by $w^{\gamma}_t$ and $w^{\beta}_t$ from T to obtain the final latent code $w_t$, which is used to generate the image $I'_t$: $w_t = w^m_t \odot w^{\gamma}_t \oplus w^{\beta}_t$, (4) $I'_t = G(w_t)$. (5) Figure 5. Visual examples of 2D face image generation using a text prompt and a semantic mask. For each semantic mask, we use three different text prompts (a)-(c), resulting in different output images (a)-(c). 3.4. Loss Functions To optimize M and T, we use a reconstruction loss, a perceptual loss, and an identity loss for image generation, together with a regularization loss [42] that encourages the latent codes to stay close to the average latent code $\bar{w}$. For training M, we use the GT image $I_{gt}$ as a reference to encourage the latent code $w^m_t$ to generate a photo-realistic image: $\mathcal{L}_M = \lambda^m_0 \| I_{gt} - G(w^m_t) \|_2 + \lambda^m_1 \| F(I_{gt}) - F(G(w^m_t)) \|_2 + \lambda^m_2 (1 - \cos(R(I_{gt}), R(G(w^m_t)))) + \lambda^m_3 \| E(z_t, t, x, c) - \bar{w} \|_2$, (6) where $R(\cdot)$ is the pre-trained ArcFace network [8], $F(\cdot)$ is the feature extraction network [63], $z_t$ is the noisy image, and the hyper-parameters $\lambda^m_{(\cdot)}$ weight the losses. Note that we freeze T while training M. For training T, we use $I^d_0$ produced by the encoder E in the reconstruction and perceptual losses. With these losses, $\mathcal{L}_T$ encourages the network to control facial attributes while preserving the identity of $I_{gt}$: $\mathcal{L}_T = \lambda^s_0 \| I^d_0 - G(w_t) \|_2 + \lambda^s_1 \| F(I^d_0) - F(G(w_t)) \|_2 + \lambda^s_2 (1 - \cos(R(I_{gt}), R(G(w_t)))) + \lambda^s_3 \| E(z_t, t, x, c) - \bar{w} \|_2$, (7) where the hyper-parameters $\lambda^s_{(\cdot)}$ weight the losses. Similar to Equation 6, we freeze M while training T. We further introduce a multi-step training strategy that considers the evolution of the feature representation in E over the denoising steps.
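A minimal sketch of how the modulation in Eqs. (3)-(5) could be wired together; the scalar learnable blend weights, the pooling-based stand-in for the map2style heads, and the channel widths are assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StyleModulation(nn.Module):
    """Sketch of Eqs. (3)-(5): blend local/global scale-shift feature maps,
    convert them to latent-space scale/shift codes, and modulate w_m."""
    def __init__(self, ch_local, ch_global=1, num_codes=14, code_dim=512):
        super().__init__()
        def conv(cin):  # scale/shift networks: convolution + LeakyReLU
            return nn.Sequential(nn.Conv2d(cin, 512, 3, padding=1), nn.LeakyReLU(0.2))
        self.scale_l, self.shift_l = conv(ch_local), conv(ch_local)
        self.scale_g, self.shift_g = conv(ch_global), conv(ch_global)
        self.alpha_gamma = nn.Parameter(torch.tensor(0.5))  # learnable blend weights
        self.alpha_beta = nn.Parameter(torch.tensor(0.5))
        def head():  # stand-in for a map2style head producing (B, L, 512)
            return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(512, num_codes * code_dim),
                                 nn.Unflatten(1, (num_codes, code_dim)))
        self.to_w_gamma, self.to_w_beta = head(), head()

    def forward(self, F_hat, Fbar_hat, w_m):
        Fg = self.alpha_gamma * self.scale_l(F_hat) + (1 - self.alpha_gamma) * self.scale_g(Fbar_hat)
        Fb = self.alpha_beta * self.shift_l(F_hat) + (1 - self.alpha_beta) * self.shift_g(Fbar_hat)
        w_gamma, w_beta = self.to_w_gamma(Fg), self.to_w_beta(Fb)
        return w_m * w_gamma + w_beta  # w_t = w_m (scale) + shift, fed to G
```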
We observe that E tends to focus more on text-relevant features in an early step, t = T, and structure-relevant features in a later step, t = 0. Figure 3 (b) shows the attention maps \u00af A showing variations across the denoising step. As the attention map, we can capture the textual and structural features by varying the denoising steps. To effectively capture the semantic details of multi-modal conditions, our model is trained across multiple denoising steps. 4. Experiments 4.1. Experimental Setup We use ControlNet [62] as the diffusion-based encoder that receives multi-modal conditions, including text and visual conditions such as a semantic mask and scribble map. The StyleGAN [22] and EG3D [4] are exploited as pre-trained 2D and 3D GAN, respectively. See the Supplementary Material for the training details, the network architecture, and additional results. Datasets. We employ the CelebAMask-HQ [29] dataset comprising 30,000 face RGB images and annotated semantic masks, including 19 facial-component categories such as skin, eyes, mouth, and etc. We also use textual de\fOurs I (a) (b) (c) (d) Ours IDE-3D \u201cThe person has brown hair, and sideburn.\u201d \u201cThe person has gray hair, and straight hair.\u201d \u201cThe person has gray hair, and straight hair.\u201d \u201cThe person has black hair, and wavy hair.\u201d (a) (b) (c) (d) Inputs Figure 6. Visual examples of the 3D-aware face image generation using a text and a semantic mask. We show the images generated with inputs and arbitrary viewpoints. Input conditions Method Model Domain FID\u2193 LPIPS\u2193 SSIM\u2191 ID\u2191 ACC\u2191 mIoU\u2191 Text + semantic mask TediGAN [58] GAN 2D 54.83 0.31 0.62 0.63 81.68 40.01 IDE-3D [51] GAN 3D 39.05 0.40 0.41 0.54 47.07 10.98 UaC [35] Diffusion 2D 45.87 0.38 0.59 0.32 81.49 42.68 ControlNet [62] Diffusion 2D 46.41 0.41 0.53 0.30 82.42 42.77 Collaborative [19] Diffusion 2D 48.23 0.39 0.62 0.31 74.06 30.69 Ours GAN 2D 46.68 0.30 0.63 0.76 83.41 43.82 Ours GAN 3D 44.91 0.28 0.64 0.78 83.05 43.74 Text + scribble map ControlNet [62] Diffusion 2D 93.26 0.52 0.25 0.21 Ours GAN 2D 55.60 0.32 0.56 0.72 Ours GAN 3D 48.76 0.34 0.49 0.62 Table 1. Quantitative results of multi-modal face image generation on CelebAMask-HQ [29] with annotated text prompts [58]. scriptions provided by [58] describing the facial attributes, such as black hair, sideburns, and etc, corresponding to the CelebAMask-HQ dataset. For the face image generation task using a scribble map, we obtain the scribble maps by applying PiDiNet [49, 50] to the RGB images in CelebAMask-HQ. We additionally compute camera parameters based on [4, 10] for 3D-aware image generation. Comparisons. We compare our method with GAN-based models, such as TediGAN [58] and IDE-3D [51], and DMbased models, such as Unite and Conquer (UaC) [35], ControlNet [62], and Collaborative diffusion (Collaborative) [19], for face generation task using a semantic mask and a text prompt. IDE-3D is trained by a CLIP loss term like TediGAN to apply a text prompt for 3D-aware face image generation. ControlNet is used for face image generation using a text prompt and a scribble map. We use the official codes provided by the authors, and we downsample the results into 256 \u00d7 256 for comparison. Evaluation Metrics. For quantitative comparisons, we evaluate the image quality and semantic consistency using sampled 2k semantic maskand scribble map-text prompt pairs. 
Frechet Inception Distance (FID) [17], LPIPS [63], and the Multiscale Structural Similarity (MS-SSIM) [56] are employed for the evaluation of visual quality and diversity, respectively. We also compute the ID similarity mean score (ID) [8, 57] before and after applying a text prompt. Additionally, we assess the alignment accuracy between the input semantic masks and results using mean Intersectionover-Union (mIoU) and pixel accuracy (ACC) for the face generation task using a semantic mask. 4.2. Results Qualitative Evaluations. Figure 5 shows the visual comparisons between ours and two existing methods for 2D face image generation using a text prompt and a semantic mask as input. We use the same semantic mask with different text prompts (a)-(c). TediGAN produces results consistent with the text prompt as the latent codes are optimized using the input text prompt. However, the results are inconsistent with the input semantic mask, as highlighted in the red boxes. UaC shows good facial alignment with the input semantic mask, but the results are generated with unexpected attributes, such as glasses, that are not indicated in the inputs. Collaborative and ControlNet produce inconsistent, blurry, and unrealistic images. Our model is capable of preserving semantic consistency with inputs and generating realistic facial images. As shown in Figure 5, our method preserves the structure of the semantic mask, such as the hairline, face position, and mouth shape, while changing the attributes through a text prompt. Figure 6 compares our method with IDE-3D [51] to validate the performance of 3D-aware face image generation \fInput View 1. 2. 3. 4. Novel Views (a) Inputs (b) ControlNet (c) Ours Input text: 1. \u201cThis young woman has straight hair, and eyeglasses and wears lipstick.\u201d 2. \u201cThe man has mustache, receding hairline, big nose, goatee, sideburns, bushy eyebrows, and high cheekbones.\u201d 3. \u201cShe has big lips, pointy nose, receding hairline, and arched eyebrows.\u201d 4. \u201cThis man has mouth slightly open, and arched eyebrows. He is smiling.\u201d Figure 7. Visual examples of 3D-aware face image generation using text prompts and scribble maps. Using (1-4) the text prompts and their corresponding (a) scribble maps, we compare the results of (b) ControlNet with (c) multi-view images generated by ours. using a semantic mask and a text prompt. We use the same semantic mask with different text prompts in Figures 6 (a) and (b), and use the same text prompt with different semantic masks in Figures 6 (c) and (d). The results of IDE-3D are well aligned with the semantic mask with the frontal face. However, IDE-3D fails to produce accurate results when the non-frontal face mask is used as input. Moreover, the results cannot reflect the text prompt. Our method can capture the details provided by input text prompts and semantic masks, even in a 3D domain. Figure 7 shows visual comparisons with ControlNet on 2D face generation from a text prompt and a scribble map. The results from ControlNet and our method are consistent with both the text prompt and the scribble map. ControlNet, however, tends to over-emphasize the characteristic details related to input conditions. Our method can easily adapt to the pre-trained 3D GAN and produce photo-realistic multiview images from various viewpoints. Quantitative Evaluations. Table 1 reports the quantitative results on CelebAMask-HQ with text prompts [58]. 
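As a concrete reference for the mask-alignment numbers reported in Table 1, here is a minimal numpy sketch of pixel accuracy and mIoU between an input semantic mask and a mask re-estimated from the generated image; the 19-class label convention follows CelebAMask-HQ, while the face parser invoked in the usage comment is a hypothetical stand-in.

```python
import numpy as np

def pixel_acc_and_miou(pred, gt, num_classes=19):
    """pred, gt: integer label maps of equal shape (CelebAMask-HQ uses 19 classes)."""
    pred, gt = pred.astype(np.int64), gt.astype(np.int64)
    acc = (pred == gt).mean()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return acc, float(np.mean(ious))

# acc, miou = pixel_acc_and_miou(parser(generated_image), input_mask)  # parser: any face parser
```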
Our method using text prompts and semantic masks shows performance increases in all metrics in 2D and 3D domains, compared with TediGAN and UaC. Our model using 2D GAN significantly improves LPIPS, ID, ACC, and mIoU scores, surpassing TediGAN, UaC, ControlNet, and Collaborative, respectively. It demonstrates our method\u2019s strong ability to generate photo-realistic images while reflecting input multi-modal conditions better. For 3D-aware face image generation using a text prompt and a semantic mask, it \ud835\udcaf (c) w/o \ud835\udc34, \u04a7 \ud835\udc34 (d) Full model urns, and bags under eyes.\u201d and has arched eyebrows, black hair.\u201d 2. 3. 1. Input text: 1. \u201cThis man has gray hair.\u201d 2. \u201cHe has double chin, sideburns, and bags under eyes.\u201d 3. \u201cShe wears heavy makeup and has arched eyebrows, black hair.\u201d (a) Inputs (b) w/o T (c) w/o A, \u00af A (d) Ours Figure 8. Effect of M and T . (b) shows the results using only M, and (c) shows the effect of the cross-attention maps (A and \u00af A) in T . The major changes are highlighted with the white boxes. Method M T At Igt Id 0 FID\u2193 LPIPS\u2193ID\u2191 ACC\u2191 (a) \u2713 \u2713 \u2713 62.08 0.29 0.62 81.09 (b) \u2713 \u2713 \u2713 \u2713 48.68 0.28 0.66 82.86 (c) \u2713 \u2713 \u2713 \u2713 54.27 0.31 0.58 80.58 (d) \u2713 \u2713 \u2713 \u2713 61.60 0.29 0.62 80.04 (e) \u2713 \u2713 \u2713 \u2713 \u2713 44.91 0.28 0.78 83.05 Table 2. Ablation analysis on 3D-aware face image generation using a text prompt and a semantic mask. We compare (a) and (b) with (e) to show the effect of our style modulation network and (c) and (d) with (e) to analyze the effect of Igt and Id in model training. is reasonable that IDE-3D shows the highest FID score as the method additionally uses an RGB image as input to estimate the latent code for face generation. The LPIPS, SSIM, and ID scores are significantly higher than IDE-3D, with scores higher by 0.116, 0.23, and 0.24, respectively. Our method using 3D GAN exhibits superior ACC and mIoU scores for the 3D face generation task compared to IDE3D, with the score difference of 35.98% and 32.76%, likely due to its ability to reflect textual representations into spatial information. In face image generation tasks using a text prompt and a scribble map, our method outperforms ControlNet in FID, LPIPS, SSIM, and ID scores in both 2D and 3D domains. Note that the ACC and mIoU scores are applicable for semantic mask-based methods. 4.3. Ablation Study We conduct ablation studies to validate the effectiveness of our contributions, including the mapping network M, the AbSM network T , and the loss functions LM and LT . Effectiveness of M and T . We conduct experiments with different settings to assess the effectiveness of M and T . \fw/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours (a) Inputs (b) w/ \ud835\udc3c\ud835\udc61=0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours 2. (a) Inputs (b) w/ \ud835\udc3c\ud835\udc61=0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours \u201cShe wears lipstick and has arched eyebrows, and slightly \u201cThis young person has goatee, mustache, big lips, and strai d) Ours urs and big lips, ws, and (a) Inputs (b) w/ \ud835\udc3c0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours 2. 1. Input text: 1. \u201cThis young person has goatee, mustache, big lips, and straight hair.\u201d 2. \u201cShe wears lipstick and has arched eyebrows, and mouth slightly open.\u201d Figure 9. 
Effect of using Id from the denoising U-Net and the GT image Igt in model training. Using text prompts (1, 2) with (a) the semantic mask, we show face images using our model trained with (b) Id 0 , (c) Igt, and (d) both. We also show the advantages of using cross-attention maps in our model. The quantitative and qualitative results are presented in Table 2 and Figure 8, respectively. When using only M, we can generate face images that roughly preserve the structures of a given semantic mask in Figure 8 (a), including the outline of the facial components (e.g. face, eye) in Figure 8 (b). On the other hand, T enables the model to express face attribute details effectively, such as hair colors and mouth open, based on the multi-modal inputs in Figure 8 (c). The FID and ACC scores are higher than the model using only M in Table 2 (b). We further present the impact of adopting cross-attention maps to T for style modulation. Figure 8 (d) shows how the attention-based modulation approach enhances the quality of results, particularly in terms of the sharpness of desired face attributes and the overall consistency between the generated image and multi-modal conditions. Table 2 (e) demonstrates the effectiveness of our method by showing improvements in FID, LPIPS, ID, and ACC. Our method, including both M and T with cross-attention maps, significantly improves the FID showing our model\u2019s ability to generate high-fidelity images. From the improvement of the ID score, the crossattention maps enable relevantly applying the details of input conditions to facial components. Model Training. We analyze the effect of loss terms LM and LT by comparing the performance with the model trained using either Id 0 from the denoising U-Net or GT image Igt. The model trained using Id 0 produces the images in Figure 9 (b), which more closely reflected the multi-modal conditions (a), such as \u201cgoatee\u201d and \u201chair contour\u201d. In Table 2 (c), the ACC score of this model is higher than the model trained only using Igt in Table 2 (d). The images generated by the model trained with Igt in Figure 9 (c) are more perceptually realistic, as evidenced by the lower LPIPS score compared to the model trained with Id 0 in TaInput text: 1. 2. 3. 1. \u201cA photo of a face of a beautiful elf with silver hair in live action movie.\u201d 2. \u201cA photo of a white Greek statue.\u201d 3. \u201cA photo of a face of a zombie.\u201d Figure 10. Visual examples of 3D face style transfer. Our method generates stylized multi-view images by mapping the latent features of DM and GAN. ble 2 (c) and (d). Using Igt also preserves more conditionirrelevant features inferred by the ID scores in Table 2 (c) and (d). In particular, our method combines the strengths of two models as shown in Figure 9 (d) and Table 2 (e). 4.4. Limitations and Future Works Our method can be extended to multi-modal face style transfer (e.g. face \u2192Greek statue) by mapping the latent spaces of DM and GAN without CLIP losses and additional dataset, as shown in Figure 10. For the 3D-aware face style transfer task, we train our model using Id 0 that replaces GT image Igt in our loss terms. This method, however, is limited as it cannot transfer extremely distinct style attributes from the artistic domain to the photo-realistic domain of GAN. To better transfer the facial style in the 3D domain, we will investigate methods to map the diffusion features related to the input pose into the latent space of GAN in future works. 5." 
+ }, + { + "url": "http://arxiv.org/abs/1209.0353v1", + "title": "Correlation between Ultra-high Energy Cosmic Rays and Active Galactic Nuclei from Fermi Large Area Telescope", + "abstract": "We study the possibility that the $\\gamma$-ray loud active galactic nuclei\n(AGN) are the sources of ultra-high energy cosmic rays (UHECR), through the\ncorrelation analysis of their locations and the arrival directions of UHECR. We\nuse the $\\gamma$-ray loud AGN with $d\\le 100 {\\rm Mpc}$ from the second Fermi\nLarge Area Telescope AGN catalog and the UHECR data with $E\\ge 55 {\\rm EeV}$\nobserved by Pierre Auger Observatory. The distribution of arrival directions\nexpected from the $\\gamma$-ray loud AGN is compared with that of the observed\nUHECR using the correlational angular distance distribution and the\nKolmogorov-Smirnov test. We conclude that the hypothesis that the $\\gamma$-ray\nloud AGN are the dominant sources of UHECR is disfavored unless there is a\nlarge smearing effect due to the intergalactic magnetic fields.", + "authors": "Jihyun Kim, Hang Bae Kim", + "published": "2012-09-03", + "updated": "2012-09-03", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION The origin of ultra-high energy cosmic rays (UHECR), whose energies are above 1 EeV(= 1018 eV), has been searched for many years; however, it is still in vague. In the search for the origin of UHECR, the Greisen-Zatsepin-Kuzmin (GZK) suppression [1, 2] plays an important role. This suppression tells us that the sources of UHECR with energies above the GZK cuto\ufb00, EGZK \u223c40 EeV, should be located within the GZK radius, rGZK \u223c100 Mpc, because the UHECR coming from beyond the GZK radius cannot reach us by loosing energies as consequences of the interactions between cosmic microwave background photons. The recent observations [3\u20135] support the GZK suppression, thus we can focus on the source candidates lying within \u223c100 Mpc. For the possible sources of UHECR, several kinds of astrophysical objects have been proposed, such as \u03b3-ray bursts, radio galaxies, and active galactic nuclei (AGN) [6\u201313] which are known to be able to accelerate UHECR enough. To verify that these astrophysical objects could be the sources of UHECR, it is worthwhile to compare the arrival directions of UHECR and the positions of source candidates using statistical tests. Although many statistical studies for correlation have been done [14\u201322], the origin of UHECR is not con\ufb01rmed yet. In our previous work [21, 22], we examined the AGN model, where UHECR with energies above a certain energy cuto\ufb00are generated from the AGN lying within a certain distance cut. We used AGN listed in V\u00b4 eron-Cetty and V\u00b4 eron (VCV) catalog [23, 24] and the UHECR data observed by Pierre Auger Observatory (PAO) [19, 20]. By statistical test methods, we concluded that the whole AGN listed in VCV catalog cannot be the true sources of UHECR and pointed out that a certain subset of listed AGN could be the true sources of UHECR. Some subsets of AGN, which have the counterpart in X-ray or \u03b3-ray bands have been tested for the possibility that they are responsible for UHECR [20, 25\u201328]. In the case of the correlation with AGN detected in hard X-ray band, it is found that the fractional excess of pairs relative to the isotropic expectation [20]. 
In the case of AGN detected in \u03b3-ray band, a marginal correlation is found when we consider the small angular separations only, but the strong correlation is found when we consider the angular separations up to 20\u25e6[27]. This paper focuses on the statistical tests for the correlation of UHECR with certain subsets of AGN, especially \u03b3-ray loud AGN because \u03b3-ray loud AGN have su\ufb03cient power to accelerate UHECR. In Section II, the source models of UHECR which assume that UHECR 1 \fwith E \u2265Ec come from AGN with d \u2264dc which are emitting strong \u03b3-rays is presented. Based on the \u03b3-ray loud AGN model we construct, we create the mock arrival directions of UHECR by Monte-Carlo simulation. In Section III, we describe the UHECR data and the \u03b3ray loud AGN data we use in our analysis. We use 2010 PAO data [20] for observed UHECR data and the second catalog of AGN detected by the Fermi Large Area Telescope (LAT) [29] for the \u03b3-ray loud AGN data. To test the correlation between the observed UHECR and the \u03b3-ray loud AGN, the methods which can compare the distribution of observed UHECR arrival direction and that of the mock UHECR arrival direction expected from the source model are needed to be established. The brief descriptions of our test methods are provided in Section IV. The results of statistical tests are given in Section V and the discussion and the conclusion follow in Section VI. II. SOURCE MODEL OF UHECR Several astrophysical objects are know to be able to accelerate CR up to ultra-high energy. Among them, AGN are the most popular objects since the strong correlation was claimed by PAO [19]. However, its updated analysis results and other studies exclude the hypothesis that the whole set of AGN is responsible for the UHECR [18, 20\u201322]. In our previous work [22], using PAO UHECR having energies above 55 EeV [20] and AGN within 100 Mpc listed in the 13th edition of VCV catalog [24], we concluded that we can reject the hypothesis that the whole AGN within 100 Mpc are the real sources of UHECR. Also, we tested the possibility that the subset of AGN is responsible for UHECR; we took AGN within arbitrary distance band as source candidates and found a good correlation for AGN within 60\u221280 Mpc. However, we do not have a reasonable physical explanation for distance grouping. This motivates us to try the subclass of AGN with proper physical properties appropriate for UHECR acceleration. In this paper, we study the hypotheses that AGN emitting strong \u03b3-ray are the sources of UHECR, based on the theoretical study in Ref. [30]. Dermer et al. calculated the emissivity of non-thermal radiation from AGN using the \ufb01rst Fermi LAT AGN catalog (1LAC) [31] to con\ufb01rm that they have su\ufb03cient power to accelerate UHECR. In the Fermi acceleration mechanism using colliding shell model, they found that some of AGN listed in 1LAC have enough power to accelerate UHECR. (See the Figure 3. in [30].) Therefore, we set up the \u03b32 \fray loud AGN model for the UHECR source in the same way as our AGN model introduced in our previous work [22]. When UHECR propagate through the universe, UHECR undergo de\ufb02ection of its trajectory by intergalactic magnetic \ufb01elds. 
These phenomena are embedded in the simulation of the mock UHECR expected from the AGN model, which we compare with the observed UHECR, by introducing the smearing angle parameter ($\theta_s$) and by restricting the distance of the source ($d_c$) and the energy of the observed UHECR ($E_c$) following the GZK suppression. We study two versions of γ-ray loud AGN models in this paper. The first assumes that UHECR with energies $E \ge E_c$ come from AGN emitting strong γ-rays within distance $d \le d_c$; the second assumes that, among the γ-ray loud AGN, those with TeV or very-high-energy γ-ray emission are responsible for the UHECR. The same constraints on UHECR energy and AGN distance are applied to both models. We call the first the γ-ray loud AGN model, and the second the TeV γ-ray AGN model. The UHECR flux in the simulation for these two models is described below. The expected UHECR flux at a given arrival direction $\hat{r}$ is composed of the γ-ray loud AGN contribution and the isotropic background contribution, $F(\hat{r}) = F_{\rm AGN}(\hat{r}) + F_{\rm ISO}$, (1) where $F_{\rm AGN}(\hat{r})$ is the contribution of all AGN within the distance cut and $F_{\rm ISO}$ the contribution of the isotropic background from outside the distance cut $d_c$. That is, a certain fraction of UHECR comes from AGN and the remaining fraction originates from the isotropic background. We introduce the AGN fraction parameter $f_A$ as $f_A = \bar{F}_{\rm AGN}/(\bar{F}_{\rm AGN} + F_{\rm ISO})$, (2) where $\bar{F}_{\rm AGN} = (4\pi)^{-1}\int F_{\rm AGN}(\hat{r})\,d\Omega$ is the average AGN-contributed flux. In the next step, we consider two approaches for $F_{\rm AGN}(\hat{r})$, because the relation between the UHECR flux and AGN properties is not established yet. The UHECR flux from all AGN can be written as $F_{\rm AGN}(\hat{r}) \propto \sum_{j\in{\rm AGN}} \frac{L_j}{4\pi d_j^2}\,\exp\!\left[-(\theta_j(\hat{r})/\theta_{sj})^2\right]$, (3) where $L_j$ is the UHECR luminosity, $d_j$ is the distance, $\theta_j(\hat{r}) = \cos^{-1}(\hat{r}\cdot\hat{r}'_j)$ is the angle between the direction $\hat{r}$ and the j-th AGN, and $\theta_{sj}$ is the smearing angle of the j-th AGN. The first approach assumes that all AGN have the same UHECR luminosity, $L_j = L$, and the same smearing angle, $\theta_{sj} = \theta_s$. The second approach assumes that the UHECR flux contributed by an AGN is proportional to its γ-ray flux, $F_{\rm AGN}(\hat{r}) \propto \sum_{j\in{\rm AGN}} F_{\gamma,j}\,\exp\!\left[-(\theta_j(\hat{r})/\theta_{sj})^2\right]$, (4) where $F_{\gamma,j}$ is the photon flux of the AGN detected by Fermi LAT in the 1-100 GeV energy band. Although a normalization is needed for the accurate expression of Eqs. (3) and (4), we neglect it because we are not concerned with the total flux of UHECR in the test. We have two free parameters in our model for the simulation, the smearing angle $\theta_s$ and the AGN fraction $f_A$. For the fiducial values of $\theta_s$ and $f_A$, we take $\theta_s = 6°$ [32] and $f_A = 0.7$ [33]. In the last step for realizing the mock UHECR in the simulation, we need to consider the exposure function reflecting the efficiency of the detector. The geometric efficiency of the detector depends on the location of the experimental site and the zenith angle cut.
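As an illustration of Eqs. (1)-(4), a small numpy sketch of the expected flux map over a set of sky directions; the unit-vector inputs, the per-AGN weight choice, and the normalization convention are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def expected_flux(dirs, agn_dirs, weights, theta_s_deg=6.0, f_A=0.7):
    """Sketch of Eqs. (1)-(4): expected UHECR flux for unit direction vectors.

    dirs:     (M, 3) unit vectors at which the flux is evaluated.
    agn_dirs: (N, 3) unit vectors of the candidate AGN.
    weights:  (N,) per-AGN weights, e.g. L/(4*pi*d**2) or the Fermi photon flux.
    """
    theta_s = np.radians(theta_s_deg)
    cosang = np.clip(dirs @ agn_dirs.T, -1.0, 1.0)        # (M, N)
    theta = np.arccos(cosang)
    f_agn = (weights * np.exp(-(theta / theta_s) ** 2)).sum(axis=1)
    f_agn /= f_agn.mean()                                  # set <F_AGN> = 1
    f_iso = (1.0 - f_A) / f_A                              # isotropic level implied by Eq. (2)
    return f_agn + f_iso                                   # up to an overall constant
```

With the weights set to $L_j/(4\pi d_j^2)$ this corresponds to Eq. (3), and with the Fermi photon fluxes to Eq. (4).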
Then the exposure function $h(\delta)$ is given by [34] $h(\delta) = \frac{1}{\pi}\left[\sin\alpha_m \cos\lambda \cos\delta + \alpha_m \sin\lambda \sin\delta\right]$, (5) where $\lambda$ is the latitude of the detector array, $\theta_m$ is the zenith angle cut, and $\alpha_m = 0$ for $\xi > 1$, $\alpha_m = \pi$ for $\xi < -1$, and $\alpha_m = \cos^{-1}\xi$ otherwise, with $\xi = (\cos\theta_m - \sin\lambda\sin\delta)/(\cos\lambda\cos\delta)$. The latitude of the PAO site is $\lambda = -35.20°$ and the zenith angle cut of the released data is $\theta_m = 60°$. III. DESCRIPTION OF THE DATA We get the information on the γ-ray loud AGN from the second catalog of Fermi LAT AGN (2LAC) published in 2011 [29]. The 2LAC includes the AGN information collected by the Fermi LAT over two years. It contains 1017 γ-ray sources located at high galactic latitude (|b| > 10°), as the low galactic latitude region is masked by the galactic plane. That region is excluded because it is too noisy to be investigated due to diffuse radio emission, interloping galactic point sources, and heavy optical extinction. TABLE I: The 8 clean γ-ray loud AGN within 100 Mpc. Columns: l, galactic longitude (degrees); b, galactic latitude (degrees); z, redshift; Fγ, photon flux (photon/cm2/s); TeV flag; optical class. Centaurus A: l = 309.52, b = 19.42, z = 0.0008, Fγ = 3.03 × 10^-9, TeV: Y, Radio Galaxy. NGC 0253: l = 97.37, b = -87.96, z = 0.0010, Fγ = 6.2 × 10^-10, TeV: N, Starburst Galaxy. M82: l = 141.41, b = 40.57, z = 0.0012, Fγ = 1.02 × 10^-9, TeV: N, Starburst Galaxy. M87: l = 283.78, b = 74.49, z = 0.0036, Fγ = 1.73 × 10^-9, TeV: Y, Radio Galaxy. NGC 1068: l = 172.10, b = -51.93, z = 0.0042, Fγ = 5.1 × 10^-10, TeV: N, Starburst Galaxy. Fornax A: l = 240.16, b = -56.69, z = 0.0050, Fγ = 5.3 × 10^-10, TeV: N, Radio Galaxy. NGC 6814: l = 29.35, b = -16.01, z = 0.0052, Fγ = 6.8 × 10^-10, TeV: N, Unidentified. NGC 1275: l = 150.58, b = -13.26, z = 0.018, Fγ = 1.88 × 10^-8, TeV: Y, Radio Galaxy. There are 886 AGN samples, called clean AGN, selected by the condition that a sole AGN is associated with the γ-ray source and has an association probability P larger than 0.8. In the γ-ray loud AGN model, we take the distance cut $d_c$ = 100 Mpc, corresponding to the redshift z ∼ 0.024. (We use h = 0.70, Ωm = 0.27, and ΩΛ = 0.73 to convert redshift to distance.) Then only 8 AGN among the clean AGN are selected as UHECR source candidates. For the TeV AGN model, we use the list of TeV AGN detected by the Fermi LAT. There are 34 TeV AGN among the clean AGN, and only 3 TeV AGN remain as source candidates after the distance cut. (See Table 9 in [29].) In Table I, we list the 8 source candidate AGN within 100 Mpc that are detected in the γ-ray range; the TeV AGN are marked with TeV flag Y. We use the PAO 2010 data [20] for the observed UHECR, which were collected by the surface detector from 2004-01-01 to 2009-12-31. The data set includes 69 events in the declination band δ = −90° to 24.8° in equatorial coordinates, with energies above 55 EeV. Among them, we use only 57 PAO UHECR to avoid the galactic plane region |b| < 10°, as mentioned above. Fig. 1 shows the distributions of the arrival directions of the PAO data and the γ-ray loud AGN detected by Fermi LAT in galactic coordinates using the Hammer projection.
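Equation (5) translates directly into a short numpy function that can be used to weight mock events by the relative geometric exposure; the vectorized clipping reproduces the piecewise definition of $\alpha_m$, and the default parameters are the PAO values quoted above. This is a sketch, not the observatory's official exposure code.

```python
import numpy as np

def pao_exposure(dec_deg, lat_deg=-35.20, zenith_cut_deg=60.0):
    """Relative geometric exposure h(delta) of Eq. (5), for declination(s) in degrees."""
    dec = np.radians(np.asarray(dec_deg, dtype=float))
    lam = np.radians(lat_deg)
    theta_m = np.radians(zenith_cut_deg)
    xi = (np.cos(theta_m) - np.sin(lam) * np.sin(dec)) / (np.cos(lam) * np.cos(dec))
    alpha_m = np.arccos(np.clip(xi, -1.0, 1.0))  # 0 when xi > 1, pi when xi < -1
    return (np.sin(alpha_m) * np.cos(lam) * np.cos(dec)
            + alpha_m * np.sin(lam) * np.sin(dec)) / np.pi

# the exposure vanishes north of dec ~ +24.8 deg for the PAO surface detector
```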
FIG. 1: Distributions of 8 Fermi LAT AGN within 100 Mpc (blue and red squares) and 57 PAO UHECR (black bullets) in galactic coordinates using the Hammer projection. The red squares represent γ-ray emitting AGN in the TeV band. The magenta line marks the boundary of the PAO field of view, and the cyan lines mark the border of the low-latitude region (|b| < 10°). IV. STATISTICAL TEST METHOD To compare the arrival direction distribution expected from the model with that of the observed data, we need a method that represents the characteristics of the arrival direction distribution and to which a statistical test can be applied easily. We proposed several comparison methods in the previous papers [21, 22]. In this analysis, we take the correlational angular distance distribution (CADD) method, which is the most appropriate for testing the correlation between point sources and UHECR. CADD is the distribution of the angular distances between all pairs of point source and UHECR arrival directions: ${\rm CADD}: \{\theta_{ij'} \equiv \arccos(\hat{r}_i \cdot \hat{r}'_j)\ |\ i = 1, \ldots, N;\ j = 1, \ldots, M\}$, (6) where $\hat{r}_i$ are the UHECR arrival directions, $\hat{r}'_j$ are the point source directions, and N and M are their total numbers, respectively. Now we obtain two CADDs to compare: CADD_O from the observed UHECR and the γ-ray loud AGN, and CADD_M from the mock UHECR of the model under consideration and the same γ-ray loud AGN. The total number of entries in a CADD is $N_{\rm CADD} = NM$, which is larger than the sampling number N. By comparing CADD_O and CADD_M, we can test whether our models are suitable to describe the observation. There are several statistical test methods that can determine whether two distributions differ. One of the most widely used is the Kolmogorov-Smirnov (KS) test.
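A compact numpy/scipy sketch of the CADD construction in Eq. (6) and a two-sample comparison of an observed and a mock CADD; using scipy.stats.ks_2samp here is a simplification of this sketch, since the paper calibrates the distribution of $D_{KS}$ with its own Monte-Carlo mocks (the CADD entries are not mutually independent).

```python
import numpy as np
from scipy.stats import ks_2samp

def cadd(cr_dirs, src_dirs):
    """Correlational angular distance distribution, Eq. (6).

    cr_dirs:  (N, 3) unit vectors of UHECR arrival directions.
    src_dirs: (M, 3) unit vectors of the candidate point sources.
    Returns the N*M pairwise angular separations in radians."""
    cosang = np.clip(cr_dirs @ src_dirs.T, -1.0, 1.0)
    return np.arccos(cosang).ravel()

def compare(observed_dirs, mock_dirs, src_dirs):
    """KS comparison of the observed CADD against a model (mock) CADD."""
    d_ks, p = ks_2samp(cadd(observed_dirs, src_dirs), cadd(mock_dirs, src_dirs))
    return d_ks, p  # the paper instead calibrates D_KS with 1e5 Monte-Carlo mocks
```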
The KS test uses the KS statistic, the maximum absolute difference D_KS between the two cumulative probability distributions (CPDs): the CPD of the observed CADD_O, S_O(x), and that of the theoretically expected CADD_M, S_M(x),

\[
D_{\mathrm{KS}} = \max_x \left| S_O(x) - S_M(x) \right| .
\tag{7}
\]

Once we calculate the KS statistic, we can obtain the probability that the two distributions come from the same population through Monte-Carlo simulation. To determine CADD_M accurately, we generate 10^5 mock UHECR data. To obtain the probability distribution of the KS statistic D_KS, we generate 10^5 values of D_KS for a given model. Our probability estimate is therefore reliable down to roughly 10^-4.

V. RESULTS

In this work, we test four models for the UHECR sources: 1) the γ-ray loud AGN model with UHECR flux proportional to the inverse square of the distance, 2) the γ-ray loud AGN model with UHECR flux proportional to the γ-ray flux of the AGN, 3) the TeV AGN model with UHECR flux proportional to the inverse square of the distance, and 4) the TeV AGN model with UHECR flux proportional to the γ-ray flux of the AGN. From now on, we call them the γ-d model, γ-f model, T-d model, and T-f model, respectively.

Fig. 2 shows the distributions of the mock UHECR for the four models with smearing angle θ_s = 6° and AGN fraction f_A = 0.7. The two left panels are for the γ-ray loud AGN model, i.e., the γ-d model (upper panel) and the γ-f model (lower panel), and the two right panels are for the TeV AGN model, i.e., the T-d model (upper panel) and the T-f model (lower panel). The blue squares mark the locations of the γ-ray loud AGN, and the red dots represent mock UHECR generated from the source models. The mock UHECR concentrated near the AGN are generated from the source AGN, while the uniformly distributed mock UHECR come from the isotropic background.

In Fig. 2, we can see distinguishing features that depend on the assumed source model. There are 6 γ-ray loud AGN in the field of view of PAO (γ-d model and γ-f model).
[FIG. 2: sky maps of the mock UHECR (red dots) and the γ-ray loud AGN (blue squares) for the four source models (γ-d, γ-f, T-d, T-f) in galactic coordinates, Hammer projection; axis ticks at galactic longitudes 0°-300° and latitudes ±30°, ±60°, ±90°.]
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0\u25e6 60\u25e6 120\u25e6 180\u25e6 240\u25e6 300\u25e6 90\u25e6 \u221290\u25e6 60\u25e6 \u221260\u25e6 30\u25e6 \u221230\u25e6 0\u25e6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
\u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . .. . . . . . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . . .. .. . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . 
. .. . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... .. . . . . . . . . . . . . . .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . .. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . .. . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .. . . . . . . . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . .. . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . .. . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . ... . . . . . .. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . . . . .. . . . . . . . .. . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . .. . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . .. . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . .. .. . . .. . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. . . . . . . . . . . . .. . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . .. . . . . . . . .. . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . ... . . . . . . .. . . . . . . . . . . .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . .. . . . . . .. . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . .. . . . . . . . . . . . . . .. . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . .. . . . . . . . . .. . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . .. . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . 
. . . . . . . . . . . . ... . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .. .. . . . . . . . . . . . .. . . . . . . . . . . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . .. . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . \u25a0 \u25a0 \u25a0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0\u25e6 60\u25e6 120\u25e6 180\u25e6 240\u25e6 300\u25e6 90\u25e6 \u221290\u25e6 60\u25e6 \u221260\u25e6 30\u25e6 \u221230\u25e6 0\u25e6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
\u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f \u000f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
model), and there are 2 TeV AGN among them (T-d model and T-f model). Also, we can see the difference in the mock UHECR distributions due to the difference in UHECR flux modeling. FIG. 2: Distributions of the mock UHECR with smearing angle \u03b8s = 6\u25e6 and AGN fraction fA = 1 (upper panels) and those of the mock UHECR with AGN fraction fA = 0.7 (lower panels). Left panels are for the \u03b3-ray loud AGN model and right panels are for the TeV AGN model. The red dots are the mock UHECR generated by each AGN model, and the other marks are the same as in Fig. 1.
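To make the mock-event construction described above concrete, the minimal sketch below draws one mock UHECR set for a given AGN fraction f_a and smearing angle theta_s, assuming (as the text suggests) that a fraction f_a of events originates from the source AGN, smeared about the source direction, while the remaining 1 - f_a is isotropic. The Gaussian form of the smearing, the helper-function names, and the weighting interface are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Minimal sketch (not the paper's code): draw one mock UHECR set for a given
# AGN fraction f_a and smearing angle theta_s (degrees). src_vecs are unit
# vectors toward the candidate source AGN; src_weights set how likely each AGN
# is to emit an event (e.g., proportional to 1/d^2 or to its gamma-ray flux).
# The Gaussian smearing and all helper names are assumptions for illustration.
def random_unit_vector(rng):
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def tilt_about(v, dtheta, phi):
    # tilt the unit vector v by an angle dtheta, in the azimuthal direction phi
    e1 = np.cross(v, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-8:          # v nearly parallel to the z-axis
        e1 = np.cross(v, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v, e1)
    return np.cos(dtheta) * v + np.sin(dtheta) * (np.cos(phi) * e1 + np.sin(phi) * e2)

def mock_uhecr(n_events, src_vecs, src_weights, f_a, theta_s_deg, seed=0):
    rng = np.random.default_rng(seed)
    theta_s = np.radians(theta_s_deg)
    w = np.asarray(src_weights, dtype=float)
    w /= w.sum()
    events = []
    for _ in range(n_events):
        if rng.random() < f_a:
            # event originating from an AGN, smeared about its direction
            v = np.asarray(src_vecs)[rng.choice(len(src_vecs), p=w)]
            events.append(tilt_about(v, abs(rng.normal(0.0, theta_s)),
                                     rng.uniform(0.0, 2.0 * np.pi)))
        else:
            # isotropic component (fraction 1 - f_a)
            events.append(random_unit_vector(rng))
    return np.array(events)
```

With src_weights set to the inverse square of the source distances or to the gamma-ray fluxes of the AGN, the same routine would correspond to the "-d" and "-f" flavors of the models discussed below.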
For the \u03b3-d model, Cen A and NGC 0253 are the dominant sources because they are close to us. In contrast, for the \u03b3-f model, 6 AGN contribute rather equally to the generation of mock UHECR. For the T-d model, Cen A is a dominant source, but Cen A and M87 share the contribution to generating the mock UHECR in the T-f model. In Fig. 3, the probability of each model as a function of the AGN fraction from 0 to 1 and the smearing angle from 0\u25e6 to 180\u25e6 is given. The black gradient color represents the probability that the arrival direction distribution of the PAO UHECR comes from the given AGN source model. The red, green, and blue contours represent the 1\u03c3, 2\u03c3, and 3\u03c3 probability lines. FIG. 3: Probability dependencies on the AGN fraction (fA) and the smearing angle \u03b8s. The left panel is for the \u03b3-ray loud AGN model and the right panel is for the TeV AGN model. The black gradient color means the probability and the solid lines represent the contour plot for 1\u03c3 (red), 2\u03c3 (green), and 3\u03c3 (blue). First, let us compare the results of the \u03b3-ray loud AGN models and the TeV AGN models with UHECR flux proportional to the inverse square of the distance. Because the set of TeV AGN is a subset of the \u03b3-ray loud AGN, the principal difference between the two models is the number of source AGN. It brings a significant difference in the results, which is shown in the upper panels of Fig. 3 (\u03b3-d model and T-d model). As a rule of thumb, we choose the 3\u03c3 probability as a criterion for ruling out a model. The critical value of the AGN fraction is fA,c \u223c0.4 and the critical value of the smearing angle is \u03b8s,c \u223c25\u25e6 for the \u03b3-ray loud AGN model. Decreasing the AGN fraction from the critical value or increasing the smearing angle from the critical value increases the probability. In comparison, for the TeV model, the critical value of the AGN fraction is fA,c \u223c0.2 and the critical value of the smearing angle is \u03b8s,c \u223c110\u25e6. We can deduce that it is hard to describe the observed UHECR distribution with the T-d model. Next, let us compare the results of two models with the same source AGN but with different UHECR flux models: the model which assumes that the UHECR flux is proportional to the inverse square of the distance of the source AGN, and the model which assumes that the UHECR flux is proportional to the \u03b3-ray flux of the source AGN. When we compare the \u03b3-d model to the \u03b3-f model, the critical values of the AGN fraction and the smearing angle are fA,c \u223c0.4 and \u03b8s,c \u223c25\u25e6 for the \u03b3-d model, and fA,c \u223c0.7 and \u03b8s,c \u223c20\u25e6 for the \u03b3-f model. We can exclude the AGN models with parameters within these critical values (the region inside the blue line).
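Schematically, the probability map of Fig. 3 corresponds to a scan over the two parameters, where each grid point (f_a, theta_s) is assigned the probability that the observed PAO arrival directions are drawn from the corresponding mock distribution. The sketch below outlines such a scan, reusing the mock_uhecr sketch above; the statistic behind compare_to_data is left as a placeholder because the paper's statistical procedure is not reproduced here, and the grid spacing and the two-sided 3-sigma threshold of about 2.7e-3 are assumptions of the example.

```python
import numpy as np

# Schematic parameter scan in the spirit of Fig. 3. compare_to_data(observed,
# mock) stands in for whatever statistic returns the probability that the
# observed UHECR set is consistent with the mock distribution; it is a
# placeholder, not the paper's method.
def scan_probability(observed, src_vecs, src_weights, compare_to_data,
                     f_grid=np.linspace(0.0, 1.0, 21),
                     theta_grid=np.linspace(0.0, 180.0, 37),
                     n_mock=1000):
    prob = np.zeros((len(f_grid), len(theta_grid)))
    for i, f_a in enumerate(f_grid):
        for j, theta_s in enumerate(theta_grid):
            mock = mock_uhecr(n_mock, src_vecs, src_weights, f_a, theta_s)
            prob[i, j] = compare_to_data(observed, mock)
    return prob

def excluded_at_3sigma(prob, p_threshold=2.7e-3):
    # grid points where the model is ruled out at the (two-sided) 3-sigma level
    return prob < p_threshold
```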
There is no crucial distinction between the different UHECR flux models for the \u03b3-ray loud AGN models. However, when we compare the T-d model to the T-f model, the different UHECR flux models result in a meaningful distinction. In the T-d model, Cen A is a dominant source generating the mock UHECR; thus, they are clustered around Cen A. Nevertheless, there is no significant difference among Cen A, M87, and NGC 1275 in producing the mock UHECR in the T-f model; thus, the mock UHECR are clustered not only around Cen A but also around M87 and NGC 1275. This is the reason why the probability increases dramatically as the smearing angle increases in the T-f model. The proportion of the mock UHECR produced by NGC 1275, which is located outside of the field of view, increases as the smearing angle increases. Therefore, the clustering feature of the T-d model is quite different from that of the T-f model, and the small number of sources causes the clear discrepancy. In short, we find that the source models assuming that \u03b3-ray loud AGN are responsible for UHECR (\u03b3-d model and \u03b3-f model) are more plausible than the source models assuming that AGN emitting at higher (TeV) energies are responsible for UHECR (T-d model and T-f model). Also, it is not yet conclusive which flux model is appropriate for describing the UHECR flux. This seems worthwhile to study further. At this stage, we can state the critical values at which the null hypotheses are rejected. The critical regions are inside the 3\u03c3 contours. For the \u03b3-d model, the critical values of the AGN fraction and the smearing angle are fA,c \u223c0.4 and \u03b8s,c \u223c25\u25e6, and for the \u03b3-f model, the critical values of the AGN fraction and the smearing angle are fA,c \u223c0.7 and \u03b8s,c \u223c20\u25e6. The critical values are fA,c \u223c0.2 and \u03b8s,c \u223c110\u25e6 in the case of the T-d model and fA,c \u223c0.6 and \u03b8s,c \u223c30\u25e6 in the case of the T-f model. That is, the \u03b3-ray loud AGN dominance models with small smearing angle are excluded. VI. DISCUSSION AND" + } + ], + "Dongsu Ryu": [ + { + "url": "http://arxiv.org/abs/1905.04476v2", + "title": "A Diffusive Shock Acceleration Model for Protons in Weak Quasi-parallel Intracluster Shocks", + "abstract": "Low sonic Mach number shocks form in the intracluster medium (ICM) during the\nformation of the large-scale structure of the universe. Nonthermal cosmic-ray\n(CR) protons are expected to be accelerated via diffusive shock acceleration\n(DSA) in those ICM shocks, although observational evidence for the $\\gamma$-ray\nemission of hadronic origin from galaxy clusters has yet to be established.\nConsidering the results obtained from recent plasma simulations, we improve the\nanalytic test-particle DSA model for weak quasi-parallel ($Q_\\parallel$)\nshocks, previously suggested by \\citet{kang2010}. In the model CR spectrum, the\ntransition from the postshock thermal to CR populations occurs at the injection\nmomentum, $p_{\\rm inj}$, above which protons can undergo the full DSA process.\nAs the shock energy is transferred to CR protons, the postshock gas temperature\nshould decrease accordingly and the subshock strength weakens due to the\ndynamical feed of the CR pressure to the shock structure. This results in the\nreduction of the injection fraction, although the postshock CR pressure\napproaches an asymptotic value when the CR spectrum extends to the relativistic\nregime. 
Our new DSA model self-consistently accounts for such behaviors and\nadopts better estimations for $p_{\\rm inj}$. With our model DSA spectrum, the\nCR acceleration efficiency ranges $\\eta\\sim10^{-3}-0.01$ for supercritical,\n$Q_\\parallel$-shocks with sonic Mach number $2.25\\lesssim M_{\\rm s}\\lesssim5$\nin the ICM. Based on \\citet{ha2018b}, on the other hand, we argue that proton\nacceleration would be negligible in subcritical shocks with $M_{\\rm s}<2.25$.", + "authors": "Dongsu Ryu, Hyesung Kang, Ji-Hoon Ha", + "published": "2019-05-11", + "updated": "2019-08-10", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Hierarchical clustering of the large-scale structure of the universe induces supersonic \ufb02ow motions of baryonic matter, which result in the formation of weak shocks with sonic Mach numbers Ms \u22724 in the hot intracluster medium (ICM) (e.g., Ryu et al. 2003; Vazza et al. 2009; Ha et al. 2018a). In particular, shocks associated with mergers of subcluster clumps have been observed in X-ray and radio (e.g., Brunetti & Jones 2014; van Weeren et al. 2019). These ICM shocks are thought to accelerate cosmic ray (CR) protons and electrons via di\ufb00usive shock acceleration (DSA) (Bell 1978; Drury 1983). Although the acceleration of relativistic electrons can be inferred from the so-called giant radio relics Corresponding author: Hyesung Kang hskang@pusan.ac.kr (e.g., van Weeren et al. 2019), the presence of the CR protons produced by ICM shocks has yet to be established (e.g., Pfrommer & En\u00dflin 2004; Pinzke & Pfrommer 2010; Zandanel & Ando 2014; Vazza et al. 2016; Kang & Ryu 2018). The inelastic collisions of CR protons with thermal protons followed by the decay of \u03c00 produce di\ufb00use \u03b3-ray emission, which has not been detected so far (Ackermann et al. 2016). Previous studies using cosmological hydrodynamic simulations with some prescriptions for CR proton acceleration suggested that the non-detection of \u03b3-ray emission from galaxy clusters would constrain the acceleration e\ufb03ciency \u03b7 \u227210\u22123 for ICM shocks with 2 \u2272Ms \u22725 (e.g., Vazza et al. 2016); the acceleration e\ufb03ciency is de\ufb01ned in terms of the shock kinetic energy \ufb02ux, as \u03b7 \u2261ECR,2u2/(0.5\u03c11u3 sh) (Ryu et al. 2003). Hereafter, the subscripts 1 and 2 denote the preshock and postshock states, respectively. And \u03c1 is the density, u is the \ufb02ow speed in the shock-rest frame, arXiv:1905.04476v2 [astro-ph.HE] 10 Aug 2019 \f2 Ryu et al. ush is the shock speed, and ECR,2 is the postshock CR proton energy density. Proton injection is one of the key processes that govern the DSA acceleration e\ufb03ciency. In the so-called thermal leakage model, suprathermal particles in the tail of the postshock thermal distribution were thought to re-cross the shock from downstream to upstream and participate in the Fermi I process (e.g. Malkov 1997; Kang et al. 2002). Through hybrid simulations, however, Caprioli & Spitkovsky (2014a, CS14a, hereafter) showed that in quasi-parallel (Q\u2225, hereafter, with \u03b8Bn \u227245\u25e6) shocks, protons are injected through specular re\ufb02ection o\ufb00the shock potential barrier, gaining energy via shock drift acceleration (SDA), and that the self-excitation of upstream turbulent waves is essential for multiple cycles of re\ufb02ection and SDA. Here, \u03b8Bn is the obliquity angle between the shock normal and the background magnetic \ufb01eld direction. 
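To make the efficiency definition quoted above concrete, the following is a minimal numerical sketch, not taken from the paper: the preshock density n1, temperature T1, mean molecular weight mu, and the assumed postshock CR energy density E_CR,2 are illustrative placeholders, and the standard Rankine-Hugoniot compression ratio is used for the postshock flow speed.

```python
import numpy as np

# Illustrative evaluation of eta = E_CR,2 * u_2 / (0.5 * rho_1 * u_sh^3), CGS units.
kB, mp = 1.3807e-16, 1.6726e-24          # Boltzmann constant [erg/K], proton mass [g]
gamma_g, mu = 5.0 / 3.0, 0.6             # gas adiabatic index; mean molecular weight (assumed)

n1, T1, Ms = 1.0e-4, 1.0e8, 3.0          # assumed preshock density [cm^-3], temperature [K], sonic Mach number
rho1 = mu * mp * n1
cs1 = np.sqrt(gamma_g * kB * T1 / (mu * mp))           # preshock sound speed
ush = Ms * cs1                                         # shock speed
r = (gamma_g + 1.0) / (gamma_g - 1.0 + 2.0 / Ms**2)    # Rankine-Hugoniot compression ratio
u2 = ush / r                                           # postshock flow speed in the shock-rest frame

E_CR2 = 1.0e-14                                        # assumed postshock CR energy density [erg/cm^3]
eta = E_CR2 * u2 / (0.5 * rho1 * ush**3)
print(f'u_sh = {ush:.2e} cm/s, r = {r:.2f}, eta = {eta:.2e}')
```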
They considered relatively strong (Ms \u22736.5) Q\u2225-shocks in plasmas with \u03b2 \u223c1, where \u03b2 = Pgas/PB is the ratio of the gas to magnetic pressures. As CRs are accelerated to higher energies, the CR energy density increases in time before the acceleration saturates at ECR,2/(ECR,2 + Eth,2) \u22480.06 \u22120.13 for Ms \u22486.3\u221263 (see Figure 3 of CS14a) with the injection fraction, \u03be \u223c10\u22124 \u221210\u22123 (see Equation [4] below). Here, Eth,2 is the energy density of postshock thermal protons. As a result, the postshock thermal distribution gradually shifts to lower temperatures as the CR powerlaw tail increases its extent (see Figure 1 of CS14a). Moreover, CS14a found that in the immediate postshock region, the proton momentum distribution can be represented by three components: the Maxwellian distribution of thermal particles, fth(p), the CR powerlaw spectrum, fCR(p), and the suprathermal \u2018bridge\u2019 connecting smoothly fth and fCR (see Figure 2 of CS14a). This suprathermal bridge gradually disappears as the plasma moves further downstream away from the shock, because the electromagnetic turbulence and ensuing kinetic processes responsible for the generation of suprathermal particles decrease in the downstream region. Far downstream from the shock, the transition from the Maxwellian to CR distributions occurs rather sharply at the so-called injection momentum, which can be parameterized as pinj \u2248Qi pth,p, where pth,p = p 2mpkBT2 is the postshock thermal proton momentum and Qi \u223c3 \u22123.5 is the injection parameter. Here, T2 is the temperature of postshock thermal ions, mp is the proton mass, and kB is the Boltzmann constant. They suggested that the CR energy spectrum can be modeled by the DSA power-law attached to the postshock Maxwellian at pinj, although their hybrid simulations revealed a picture that is quite di\ufb00erent from the thermal leakage injection model. Later, Caprioli et al. (2015, CPS15, hereafter) presented a minimal model for proton injection that accounts for quasi-periodic shock reformation and multicycles of re\ufb02ection/SDA energization, and predicted the CR spectrum consistent with the hybrid simulations of CS14a. Recently, Ha et al. (2018b, HRKM18, hereafter) studied, through Particle-in-Cell (PIC) simulations, the early acceleration of CR protons in weak (Ms \u22482 \u22124) Q\u2225-shocks in hot ICM plasmas where \u03b2 \u223c100 (e.g., Ryu et al. 2008). In the paper, they argued that only supercritical Q\u2225-shocks with Ms \u22732.25 develop overshoot/undershoot oscillations in their structures, resulting in a signi\ufb01cant amount of incoming protons being re\ufb02ected at the shock and injected into the DSA process. Subcritical Q\u2225-shocks with Ms \u22722.25, on the other hand, have relatively smooth structures, and hence the preacceleration and injection of protons into DSA are negligible. Thus, it was suggested that ICM Q\u2225shocks may accelerate CR protons only if Ms \u22732.25. Although the simulations followed only to the very early stage of DSA where the maximum ion momentum reaches up to pmax/mic \u223c0.5 (mi is the reduced ion mass1), HRKM18 attempted to quantify proton acceleration at ICM Q\u2225-shocks. The simulated CR spectrum indicated the injection parameter of Qi \u22482.7, which led to a rather high injection fraction, \u03be \u22482 \u00d7 10\u22123 \u221210\u22122, for shocks with Ms = 2.25 \u22124. 
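A small sketch of the parameterization p_inj = Q_i p_th,p with p_th,p = sqrt(2 m_p k_B T_2) described above; the postshock temperatures and Q_i values below are only illustrative of hot ICM conditions.

```python
import numpy as np

kB, mp, c = 1.3807e-16, 1.6726e-24, 2.9979e10   # CGS
for T2 in (1.0e8, 4.0e8):                       # assumed postshock proton temperatures [K]
    p_th = np.sqrt(2.0 * mp * kB * T2)          # postshock thermal proton momentum
    for Qi in (2.7, 3.0, 3.5):
        # injection momentum expressed in units of m_p c
        print(f'T2 = {T2:.0e} K, Qi = {Qi}: p_inj/(m_p c) = {Qi * p_th / (mp * c):.4f}')
```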
If we simply extrapolate this injection fraction to the relativistic regime of pmax/mic \u226b1, the ensuing DSA e\ufb03ciency would be rather high, \u03b7 > 0.01, which is in strong disagreement with the existing observations of \u03b3-rays from galaxies clusters. In a \u2018\ufb02uid-version\u2019 of numerical studies of DSA, on the other hand, the time-dependent di\ufb00usion-convection equation for the isotropic part of the momentum distribution function, fCR(p), is solved, adopting a Bohmtype spatial di\ufb00usion coe\ufb03cient (\u03ba \u221dp) and a \u2018macroscopic\u2019 prescription for thermal leakage injection (\u03c4esp) (e.g., Kang et al. 2002). Previous studies using this approach managed to follow the evolution of CR proton spectrum into the relativistic energies of up to pmax/mpc \u223c50 for shocks with a wide range of sonic Mach numbers (e.g., Kang & Jones 2005). They showed that, as the CR pressure increases in time, the subshock weakens and T2 decreases accordingly, resulting in the 1 Throughout the paper, we di\ufb00erentiate mi from mp, because the e\ufb00ects of the reduced mass ratio, mi/me (me is the electron mass), in PIC simulations remain to be fully understood. In the simulations of HRKM18, for example, mi = 100 \u2212800 me was used. \fProton Acceleration in ICM Shocks 3 gradual reduction of the injection rate and fCR(pinj) [see Figure 5 of Kang & Jones (2005)]. This leads to the decrease of the injection fraction \u03be(t) with time, although the postshock CR pressure reaches an approximate timeasymptotic value [see Figure 6 of Kang et al. (2002)]. These results are consistent with those of the hybrid simulations described above. Previously, Kang & Ryu (2010) considered an analytic model for fCR(p) in the test-particle regime of DSA for weak ICM shocks. They suggested that the test-particle solution of fCR(p) could be valid only if Qi \u22733.8, which results in the injection fraction \u03be \u227210\u22123 and the CR pressure PCR,2/\u03c11u2 sh < 0.1. In that study, however, the changes of T2(t) and \u03be(t) with the increase of pmax were not included self-consistently, because Qi, although a free parameter, has a \ufb01xed value, and T2 was estimated simply from the Rankine-Hugoniot relation, relying on the test-particle assumption. Hence, the model failed to incorporate the full aspect of DSA observed in the previous simulations. Based on the earlier studies of DSA using hybrid, PIC, and \ufb02uid simulations, we here propose an improved analytic model that is designed to approximately emulate the CR proton spectrum of DSA for given shock parameters. The basic formulation is still based on the testparticle solution with a thermal leakage injection recipe with a free parameter, Qi, as in Kang & Ryu (2010). The main improvement is, however, the inclusion of the reduction of the postshock thermal energy density due to the transfer of the shock energy to the CR population in a self-consistent manner; also the model considers a more realistic range of Qi \u22483.0 \u22123.5 that re\ufb02ects the results of the hybrid simulations of CS14a and CPS15. In the next section, we \ufb01rst review what has been learned about proton injection and acceleration at Q\u2225shocks from recent plasma simulations. In Section 3, we describe our analytic DSA model for the CR proton spectrum produced at weak Q\u2225-shocks, along with the injection fraction and acceleration e\ufb03ciency that characterize the DSA of CR protons. A brief summary follows in Section 4. 2. 
IMPLICATIONS FROM PLASMA SIMULATIONS Although the structure and time variation of collisionless shocks are primarily governed by the dynamics of re\ufb02ected protons and the waves excited by them in the foreshock region, the roles of electron kinetic processes in proton injection to DSA has not yet been fully explored (e.g., Balogh & Truemann 2013). Only PIC simulations can follow from \ufb01rst principles various microinstabilities and wave-particle interactions due to ion and electron kinetic processes. Owing to greatly disparate time and length scales of ion and electron processes, however, the runs of PIC simulations are limited to only several \u00d7102 \u2126\u22121 ci , depending on mi/me, \u03b2, and the dimension of simulations. Here, \u2126\u22121 ci = mic/eB0, is the ion cyclotron period where c is the speed of light, e is the electron charge, and B0 is the background magnetic \ufb01eld strength. Typically, the injection and early acceleration of protons can be followed up to the maximum momentum of pmax/pth,i \u223c30 (pth,i = \u221a2mikBT2) in PIC simulations (e.g., Park et al. 2015, HRKM18). Hybrid simulations, in which electrons are modeled as a charge-neutralizing \ufb02uid, can be run to several \u00d7102 \u2212 103 \u2126\u22121 cp (where \u2126\u22121 cp = mpc/eB0), neglecting details of electron kinetic processes. Yet they can follow proton acceleration only up to pmax/mpush \u223c30 or so (e.g., CP14a). With currently available computational resources, both PIC and hybrid simulations can only study the early development of suprathermal and nonthermal protons. Thus, it would be a rather challenging task to extrapolate what we have learned about DSA from existing plasma simulations to the relativistic regime of pmax/mpc \u226b1. 2.1. Hybrid Simulations As discussed in the introduction, the injection and acceleration of protons at \u03b2 \u22481 Q\u2225-shocks with Ms \u22736.3 were studied extensively through 2D hybrid simulations (CS14a and CPS15). A small fraction of incoming protons can be injected to DSA after undergoing two to three cycles of SDA, followed by re\ufb02ection o\ufb00the shock potential drop. In addition, at low-\u03b2 (\u03b2 \u22721) shocks, the proton re\ufb02ection can be facilitated by the magnetic mirror force due to the compression of locally perpendicular magnetic \ufb01elds in upstream MHD turbulence, which are self-excited by back-streaming protons (e.g., Sundberg et al. 2016). The e\ufb03ciency of proton injection could be quantitatively di\ufb00erent at weak ICM shocks with \u03b2 \u223c100, because the shock potential drop is smaller at lower Ms shocks and the magnetic mirror force is weaker in higher \u03b2 plasmas. Caprioli & Spitkovsky (2014b, CS14b, hereafter), on the other hand, showed that the magnetic \ufb01eld ampli\ufb01cation due to resonant and non-resonant streaming instabilities increases with the Alfv\u00b4 en Mach number, MA \u2248\u03b21/2Ms. Hence, the level of upstream turbulence is expected to be higher for higher \u03b2 shocks at a given Ms. Therefore, higher \u03b2 could have two opposite e\ufb00ects on the e\ufb03ciency of proton injection, i.e., weaker magnetic mirror but stronger turbulence in the foreshock. Unfortunately, so far hybrid simulations for \f4 Ryu et al. 10 -4 10 -3 10 -2 10 -1 10 -4 10 -3 10 -2 10 -1 50 100 150 200 250 0.005 0.01 0.015 0.02 ( \u03b31 ) d N / d \u03b3 \u03b3 1 m i / m e = 1 0 0 \u2126cit = 94 (HRKM18) \u2126cit = 240 (a) M s = 3 . 2 (b) \u2126c i t \u03be Figure 1. 
(a) Postshock energy spectrum, dN/d\u03b3, of ions with mi = 100me, taken from PIC simulations for the ICM shock of Ms = 3.2 with \u03b8Bn = 13\u25e6, \u03b2 = 100, and T1 = 8.6 keV (108 K). For \u2126cit \u224894 (red), the simulation data reported in HRKM18 are adopted, while for \u2126cit \u2248240 (blue), those from the new extended simulation described in Section 2.2 are used. The red and blue dashed lines show the \ufb01ts for the respective spectra (solid lines) to Maxwellian and test-particle power-law forms. The vertical dotted magenta line marks the injection energy, \u03b3inj, where the two \ufb01tting forms cross each other. (b) Time evolution of the injection fraction \u03be(t), calculated with the postshock energy spectra for the shock model shown in panel (a). The red and blue arrows denote the points for \u2126cit \u224894, and 240, respectively. high-\u03b2 (\u03b2 \u226b1) shocks have not been published in the literature yet. CPS15 suggested that the proton injection at weak shocks may be di\ufb00erent from their \ufb01ndings for strong shocks in the following senses: (1) the overshoot in the shock potential is smaller at weaker shocks, leading to a smaller re\ufb02ection fraction at each confrontation with the shock, (2) the fractional energy gain at each SDA cycle is smaller, so more SDA cycles are required for injection, (3) the levels of turbulence and magnetic \ufb01eld ampli\ufb01cation are weaker. As a result, the proton injection and acceleration e\ufb03ciencies should be smaller at weaker shocks. According to Figure 3 of CS14a, for the Ms \u22486.3 shock (M = 5 in their de\ufb01nition), the DSA e\ufb03ciency is \u03b7 \u22480.036, so a smaller \u03b7 is expected for ICM shocks with Ms \u22724. Moreover, CS14b showed in their Figure 9 that the normalization (amplitude) of postshock fCR decreases as pmax(t) increases with time. We interpret that this trend is caused by the increase in the number of SDA cycles required for injection to DSA, because the subshock weakens gradually due to the CR feedback, and so the energy gain per SDA cycle is reduced. Considering that the ratio of pmax/pth,p reaches only to \u223c30 in these hybrid simulations, the normalization of fCR may continue to decrease as the CR spectrum extends to the relativistic region with pmax/mpc \u226b1. 2.2. Particle-in-cell Simulations HRKM18 explored for the \ufb01rst time the criticality of high-\u03b2 Q\u2225shocks and showed that protons can be injected to DSA and accelerated to become CRs only at supercritical shocks with Ms \u22732.25. Figure 7 of HRKM18 showed that the shock criticality does not sensitively depend on mi/me and numerical resolution, but the acceleration rate depends slightly on \u03b2. As mentioned before, turbulence is excited more strongly for higher \u03b2 cases due to higher MA. But the re\ufb02ection fraction is smaller for higher \u03b2 due to weaker magnetic mirror forces, leading to lower re\ufb02ection fraction and lower amplitude of fCR near pinj. In order to get a glimpse of the long-term evolution of the CR proton spectrum, we extend the 1D PIC simulation reported in HRKM18 from \u2126citend = 90 to 270 for the model of Ms = 3.2, \u03b8Bn = 13\u25e6, mi/me = 100, \u03b2 = 100, and T1 = 8.6 keV (108 K). Details of numerical and model setups can be found in HRKM18 (see their Table 1). The main change is that a di\ufb00erent computation domain, [Lx, Ly] = [3 \u00d7 104, 1] (c/wpe)2, is adopted here in order to accommodate the longer simulation time. 
Because of severe computational requirements, in practice, it is di\ufb03cult to extend this kind of PIC simulations to a much larger box for a much longer duration. In this simulation the average velocity of ions is \u221a 18.36 times higher than that of real protons for the given temperature. Figure 1 shows the time evolution of the postshock energy spectra of ions, dN/d\u03b3 (where \u03b3 is the Lorentz factor), and the injection fraction, \u03be(t) [see Eq. (11) of HRKM18]. We adopt the simulation data of HRKM18 for \u2126cit \u224894 (red), while the data from the new extended simulation is used for \u2126cit \u2248240 (blue). The region of (1.5 \u22122.5)rL,i behind the shock is included, \fProton Acceleration in ICM Shocks 5 where rL,i is the ion Larmor radius de\ufb01ned with the incoming \ufb02ow speed. Note that the spectrum near the energy cuto\ufb00might not be correctly reproduced due to the limited size of the simulation domain. We notice the following features in Figure 1(a) : (1) the postshock temperature decreases slightly with time, (2) the injection parameter, Qi = pinj/pth,i, increases from \u223c2.7 to \u223c3.0 as the time increases from \u2126cit \u224890 to 240, and (3) the amplitude of dN/d\u03b3(\u03b3inj) decreases gradually. Figure 1(b) shows the resulting gradual decrease of \u03be(t), which may continue further in time. As in HRKM18, a somewhat arbitrary value of pmin = \u221a 2pinj is adopted (see the next section for a further discussion). We interpret the bump in the evolution of \u03be(t) near \u2126cit \u2248210 as a consequence of shock reformation. 3. ANALYTIC MODEL FOR CR PROTON SPECTRUM The analytic model presented here inherits the testparticle DSA model with a thermal leakage injection recipe, which was suggested by Kang & Ryu (2010). It describes the downstream CR proton spectrum, fCR(p), for weak shocks. For the preshock gas with the density, n1, and the temperature, T1, the postshock vales, n2, and T2,02, can be calculated from the RankineHugoniot jump condition. For example, the shock compression ratio is given as r = n2/n1 = (\u03b3g + 1)/(\u03b3g \u22121 + 2/M 2 s ), where \u03b3g = 5/3 is the gas adiabatic index. Following Kang & Ryu (2010), we parameterize the model as follows: (1) The CR proton spectrum follows the test-particle DSA power-law, as fCR(p) \u221dp\u2212q, where q = 3r/(r \u22121). (2) The transition from the postshock thermal to CR spectra occurs at the injection momentum pinj = Qi \u00b7 pth,p, (1) where Qi is the injection parameter. The main improvement here is that the postshock temperature, T2, decreases slightly from T2,0, and hence pth,p does too, as the fraction of the shock energy transferred to CRs increases. Our model leads to the following form of the CR proton spectrum, fCR(p) \u2248\u03c8 \u00b7 fN \u0012 p pinj \u0013\u2212q exp \" \u2212 \u0012 p pmax \u00132# . (2) Here, the maximum momentum of CR protons, pmax, increases with the shock age (e.g., Kang & Ryu 2010). 2 Here, T2,0 denotes the temperature of the thermal gas when the postshock CR energy density, ECR,2, is negligible, reserving T2 for the cases of non-negligible ECR,2. The normalization factor can be approximated as fN = n2 \u03c01.5 p\u22123 th,p exp(\u2212Q2 i ), (3) assuming the CR power-law spectrum is hinged to the postshock Maxwell distribution at pinj. Therefore, in our model, Qi is the key parameter that controls fN. 
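The pieces introduced in Equations (1)-(3) can be assembled as in the sketch below; the preshock density and temperature are assumed values, and the standard Rankine-Hugoniot temperature jump, which is not written out explicitly in the text, is used to obtain T2,0.

```python
import numpy as np

kB, mp = 1.3807e-16, 1.6726e-24     # CGS
gg = 5.0 / 3.0                      # gas adiabatic index

def dsa_pieces(Ms, n1=1.0e-4, T1=1.0e8, Qi=3.5):
    r = (gg + 1.0) / (gg - 1.0 + 2.0 / Ms**2)         # shock compression ratio
    q = 3.0 * r / (r - 1.0)                           # test-particle DSA slope
    n2 = r * n1                                       # postshock number density
    # standard Rankine-Hugoniot temperature jump for T2,0 (assumed here)
    T20 = T1 * (2*gg*Ms**2 - (gg - 1)) * ((gg - 1)*Ms**2 + 2) / ((gg + 1)**2 * Ms**2)
    p_th = np.sqrt(2.0 * mp * kB * T20)               # postshock thermal proton momentum
    p_inj = Qi * p_th                                 # Eq. (1)
    fN = n2 / np.pi**1.5 * p_th**-3 * np.exp(-Qi**2)  # Eq. (3)
    return r, q, n2, T20, p_inj, fN

for Ms in (2.25, 3.2, 4.0, 5.0):
    r, q, n2, T20, p_inj, fN = dsa_pieces(Ms)
    print(f'Ms={Ms}: r={r:.2f}, q={q:.2f}, T2,0={T20:.2e} K, p_inj={p_inj:.2e} g cm/s')
```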
In addition, we introduce an additional parameter, \u03c8 \u223c1, to accommodate any uncertainties in determining the value of Qi and the resulting amplitude, fN. Throughout this paper, however, \u03c8 = 1 is used. Figure 2(a) shows the model spectrum, fCR(p), calculated with Equations (2)-(3), which illustrates the transition from the thermal to nonthermal CR spectra at pinj. The PIC simulation described in section 2.2 indicates Qi \u22483 when pmax/mic \u22480.5, but Qi may further increase for pmax/mic \u226b1, as noted above. On the other hand, hybrid simulations for strong shocks of \u03b2 \u22481 (CS14a, CPS15) showed that it is expected to range as Qi \u22483.0 \u22123.5. As discussed in section 2.1, higher \u03b2 could have two opposite e\ufb00ects on proton re\ufb02ection, weaker magnetic mirror but stronger upstream turbulence. So it is di\ufb03cult to make quantitative predictions on the long-term evolution of Qi in high-\u03b2 shocks without performing plasma simulations of very long duration. Here, we will consider the range of Qi = 3.3 \u22123.5 as an educated guess from the previous plasma simulation. Moreover, in our analytic model, Qi < 3.3 would give the DSA e\ufb03ciency of \u03b7 \u22730.01 for 3 \u2272Ms \u22725 (see Figure 4 below), which would be incompatible with the non-detection of \u03b3-ray emission from galaxy clusters (Vazza et al. 2016). From the model fCR(p) in Equation (2), we calculate the injection fraction of CR protons by \u03be \u22614\u03c0 n2 Z pmax pmin fCR(p)p2dp, (4) as in HRKM18. The postshock CR energy density is estimated by ECR,2 = 4\u03c0c Z pmax pmin ( q p2 + (mpc)2 \u2212mpc)fCR(p)p2dp. (5) In the case of very weak shocks, where the CR spectrum is dominated by low energy particles, both \u03be and ECR,2 depend sensitively on the lower bound of the integrals, pmin (e.g., Pfrommer & En\u00dflin 2004). We here adopt pmin \u2248pinj for \ufb01ducial models, while pmin = 780 MeV/c, the threshold energy of \u03c0-production reaction, will be considered as well for comparison. As mentioned above, ECR,2 may increase, as fCR(p) extends to higher pmax, resulting in the decrease of the postshock gas temperature from T2,0 to T2 (see Figure 5 \f6 Ryu et al. Figure 2. Proton distribution function, f(p)p4, calculated with Equations (2)-(3). Panel (a): f(p)p4 in a Ms = 3.2 shock with Qi,0 = 3.0 (blue line), 3.3 (black line), and 3.5 (red line), when the maximum momentum is pmax \u226bpinj. The vertical dashed line shows the injection momentum, pinj, with Qi,0 = 3.3. Panels (b)-(d): Change of f(p)p4 in Ms = 2.5, 3.2, and 4.0 shocks with Qi,0 = 3.5, as pmax increases. Here, T1 = 108 K. The DSA test-particle slope, qtp, is given in each panel. Due to the energy transfer to the CR component, the temperature reduction factor, RT, decreases. Hence, while pinj is \ufb01xed, the injection parameter, Qi = Qi,0/\u221aRT, increases, leading to the reduction of the normalization factor, fN. of Kang & Jones (2005) and Figure 1 of CS14a). Thus, we introduce the temperature reduction factor, RT = Eth(T2, 0) \u2212ECR,2 Eth(T2,0) . (6) Then, T2 = RTT2,0 is the reduced postshock temperature.3 CPS15 suggested that when the postshock CR energy density approaches to ECR,2 \u22480.1Esh = 0.1(\u03c11u2 sh/2), the subshock weakens substantially, which suppresses the proton re\ufb02ection and injection. Hence, the normalization of fCR is expected to decrease as pmax increases. 
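A sketch of evaluating Equations (4) and (5) for the model spectrum of Equations (2)-(3) with psi = 1, comparing the two choices of p_min discussed above (p_min = p_inj versus 780 MeV/c). The shock parameters and p_max are illustrative, and the temperature-reduction feedback introduced next is not yet included, so the printed numbers are only indicative.

```python
import numpy as np

kB, mp, c = 1.3807e-16, 1.6726e-24, 2.9979e10   # CGS
gg = 5.0 / 3.0

def xi_and_ecr(Ms, Qi=3.5, n1=1.0e-4, T1=1.0e8, pmax_mpc=1.0e5, pmin=None):
    r = (gg + 1.0) / (gg - 1.0 + 2.0 / Ms**2)
    q = 3.0 * r / (r - 1.0)
    n2 = r * n1
    T2 = T1 * (2*gg*Ms**2 - (gg - 1)) * ((gg - 1)*Ms**2 + 2) / ((gg + 1)**2 * Ms**2)
    p_th = np.sqrt(2.0 * mp * kB * T2)
    p_inj = Qi * p_th
    fN = n2 / np.pi**1.5 * p_th**-3 * np.exp(-Qi**2)
    pmax = pmax_mpc * mp * c
    pmin = p_inj if pmin is None else pmin
    p = np.logspace(np.log10(pmin), np.log10(10.0 * pmax), 4000)
    fCR = fN * (p / p_inj)**-q * np.exp(-(p / pmax)**2)                  # Eq. (2), psi = 1
    w = 0.5 * (p[1:] - p[:-1])                                           # trapezoidal weights
    xi = 4.0 * np.pi / n2 * np.sum(w * (fCR[1:]*p[1:]**2 + fCR[:-1]*p[:-1]**2))        # Eq. (4)
    Ek = np.sqrt((p * c)**2 + (mp * c**2)**2) - mp * c**2                # proton kinetic energy
    ECR = 4.0 * np.pi * np.sum(w * (Ek[1:]*fCR[1:]*p[1:]**2 + Ek[:-1]*fCR[:-1]*p[:-1]**2))  # Eq. (5)
    return xi, ECR

p780 = 780.0e6 * 1.6022e-12 / c              # 780 MeV/c converted to g cm/s
for Ms in (2.25, 3.2, 4.0):
    xi1, E1 = xi_and_ecr(Ms)                 # pmin = p_inj
    xi2, E2 = xi_and_ecr(Ms, pmin=p780)      # pmin = 780 MeV/c
    print(f'Ms={Ms}: xi(pmin=p_inj)={xi1:.2e}, E_CR ratio (p_inj / 780 MeV/c) = {E1/E2:.2f}')
```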
Our model is designed to mimic such a behavior by \ufb01nding the self-consistent postshock thermal distribution with a lower temperature, while pinj is assumed to be \ufb01xed. Then, the injection parameter increases as Qi = Qi,0/\u221aRT, where Qi,0 is the initial value, leading to smaller values of fN. Note that pinj at shocks with 3 The fraction of thermal particles that becomes CR protons is assumed to be small, i.e., \u03be \u226a1. di\ufb00erent parameters (Ms, \u03b8Bn, and \u03b2) is controlled by a number of complex kinetic process, and hence should be studied through long-term plasma simulations, beyond the current computational capacity. Considering that the proton injection into DSA is yet to be fully understood, \ufb01xing pinj while slightly increasing Qi in our model should be regarded as a reasonable assumption. Figure 2(a) shows the model spectrum, including that of the self-consistent thermal distribution, in a Ms = 3.2 shock for Qi,0 = 3.0 \u22123.5; the spectrum depends on the adopted value of Qi,0. Panels (b)-(d) illustrate the change of the model spectrum as pmax increases in shocks with Ms = 2.5, 3.2, and 4.0, respectively. As pmax and also ECR,2 increase, the Maxwellian part shifts to slightly lower T2, and RT decreases accordingly. Because pinj is assumed to be \ufb01xed, Qi increases and thus the normalization factor fN decreases in our model. Figure 3 shows the change of \u03be, RT, ECR,2/Esh, and \u03b7, calculated with Equations (2)-(6), as pmax increases for Qi,0 = 3.3 (dashed lines) and 3.5 (solid lines) in shocks with Ms = 2.25 \u22124.0. The CR acceleration e\ufb03ciency \fProton Acceleration in ICM Shocks 7 Figure 3. Change of the injection fraction, \u03be, the temperature reduction factor, RT, the postshock CR energy fraction, ECR,2/Esh, and the CR acceleration e\ufb03ciency, \u03b7, as pmax increases. Here, T1 = 108 K, pmin = pinj, Qi,0 = 3.3 (dashed lines) and 3.5 (solid lines) are adopted. As RT decreases, the injection parameter increases as Qi = Qi,0/\u221aRT, which results in the reduction of fN as in Equation (3). is related to the postshock CR energy density, as \u03b7 = ECR,2/rEsh. Figure 3(b) plots how RT decreases, as pmax increases. The injection fraction, \u03be, increases with increasing pmax during the early acceleration phase, but decreases for pmax/mpc \u226b1. The latter behavior results from the gradual reduction of fCR(pinj), which is caused by the self-adjustment of the shock structure, that is, the cooling of the postshock thermal protons, the growing of the precursor, and the weakening of the subshock due to the dynamical feedback of the CR pressure. ECR,2/Esh and \u03b7, on the other hand, monotonically increase and approach to asymptotic values for pmax/mpc \u2273102. Figure 4 shows the asymptotic values of those quantities as a function of Ms (\ufb01lled circles and lines) for Qi,0 = 3.3 (black) and 3.5 (red), which would cover the most realistic range for ICM shocks (CS14a). As mentioned in the introduction, HRKM18 showed that ICM Q\u2225-shocks with Ms < 2.25 may not inject protons into the DSA process, resulting in ine\ufb03cient CR proton acceleration. We here include the Ms = 1.5 and 2.0 cases (connected with dotted lines) for illustrative purposes, showing the values estimated with our model. Note that the asymptotic value of \u03be(Ms) decreases with increasing Ms for supercritical shocks with Ms \u2265 2.25. 
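The fixed-p_inj, increasing-Q_i adjustment described above can be emulated with a simple fixed-point iteration over Equations (2)-(6), as sketched below under simplifying assumptions: the postshock thermal energy density is taken as 1.5 n_2 k_B T_2,0, psi = 1, p_min = p_inj, and the shock parameters are illustrative. This is only a sketch of the scheme, not the authors' code.

```python
import numpy as np

kB, mp, c = 1.3807e-16, 1.6726e-24, 2.9979e10   # CGS
gg = 5.0 / 3.0

def trapz(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1]))

def self_consistent(Ms, Qi0=3.5, n1=1.0e-4, T1=1.0e8, pmax_mpc=1.0e5, niter=40):
    r = (gg + 1.0) / (gg - 1.0 + 2.0 / Ms**2)
    q = 3.0 * r / (r - 1.0)
    n2 = r * n1
    T20 = T1 * (2*gg*Ms**2 - (gg - 1)) * ((gg - 1)*Ms**2 + 2) / ((gg + 1)**2 * Ms**2)
    Eth0 = 1.5 * n2 * kB * T20                     # assumed postshock thermal energy density
    p_inj = Qi0 * np.sqrt(2.0 * mp * kB * T20)     # kept fixed during the iteration
    pmax = pmax_mpc * mp * c
    p = np.logspace(np.log10(p_inj), np.log10(10.0 * pmax), 4000)
    RT = 1.0
    for _ in range(niter):
        Qi = Qi0 / np.sqrt(RT)                     # Q_i = Q_i,0 / sqrt(R_T)
        p_th = np.sqrt(2.0 * mp * kB * RT * T20)   # thermal momentum at the reduced T_2
        fN = n2 / np.pi**1.5 * p_th**-3 * np.exp(-Qi**2)
        fCR = fN * (p / p_inj)**-q * np.exp(-(p / pmax)**2)          # Eq. (2)
        xi = 4.0 * np.pi / n2 * trapz(fCR * p**2, p)                 # Eq. (4)
        Ek = np.sqrt((p * c)**2 + (mp * c**2)**2) - mp * c**2
        ECR = 4.0 * np.pi * trapz(Ek * fCR * p**2, p)                # Eq. (5)
        RT_new = max(1.0 - ECR / Eth0, 1.0e-3)                       # Eq. (6)
        RT = 0.5 * RT + 0.5 * RT_new               # under-relaxed update for numerical stability
    return xi, RT

for Ms in (2.25, 3.2, 4.0, 5.0):
    xi, RT = self_consistent(Ms)
    print(f'Ms={Ms}: xi = {xi:.2e}, R_T = {RT:.3f}')
```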
This behavior is opposite to the relation, \u03be \u221dM 1.5 s , during the very early acceleration stage of the PIC simulations reported in HRKM18. In those PIC simulations, pmax/mpc \u22720.5, so the CR feedback e\ufb00ect is not very signi\ufb01cant. However, our analytic model is designed to take account for the dynamic feedback of the CR pressure to the shock structure when pmax/mpc \u226b1, so \u03be could be smaller at higher Ms. With the adopted value of Qi,0 = 3.3 \u22123.5, ECR,2/Esh < 0.1, so the test-particle assumption should be valid. The acceleration e\ufb03ciency increases with Ms and is close to \u03b7 \u22480.01 \u22120.02 in the range of Ms = 3 \u22125. Obviously, if the injection parameter is larger than what the hybrid simulations of CS14a indicated, that is, Qi,0 > 3.5, then DSA would be even less e\ufb03cient. \f8 Ryu et al. Figure 4. The injection fraction, \u03be, the temperature reduction factor, RT, the postshock CR energy fraction, ECR,2/Esh, and the CR acceleration e\ufb03ciency, \u03b7, as a function of Ms, for pmin = pinj and pmax = 105mpc. Here, T1 = 108 K. The black and red \ufb01lled circles connected with solid lines are the results for Qi,0 = 3.3 and 3.5, respectively. The two points for Ms = 1.5 and 2.0 are connected with the dotted lines, because subcritical shocks with Ms < 2.25 may not preaccelerate and inject thermal protons to the full DSA process according HRKM18. The open triangles represent the values calculated with pmin = 780 MeV/c. In the studies of \u03b3-ray emission from simulated galaxy clusters, the lower bound of fCR is often taken as pmin = 780 MeV/c, as noted above. The open triangles in Figure 4 show ECR,2/Esh and \u03b7 calculated with this pmin, otherwise adopting the same analytic spectrum given in Equations (2)-(3). For Ms = 2.25, the acceleration e\ufb03ciency with pmin = 780 MeV/c is smaller by a factor of 3.3 than that with pmin = pinj. But the two estimations are similar for Ms \u22734. The e\ufb03ciency with pmin = 780 MeV/c is \u03b7 \u223c0.01 in the range of Ms = 3 \u22125, while \u03b7 \u223c10\u22123 for Ms = 2.25. If this result is extended to the case of Ms \u223c6, \u03b7 would be still close to 0.01, which is about three times smaller than the e\ufb03ciency reported by CS14a (i.e., \u03b7 \u22480.036 at Ms \u223c6.3). Note that this estimate is somewhat larger than the upper limit of \u03b7 \u227210\u22123, quoted to be consistent with the non-detection of \u03b3-ray emission from galaxy clusters by Vazza et al. (2016). On the other hand, as HRKM18 shown, \u03b7 may be very small and negligible for shocks with Ms < 2.25, for which the fraction of the total shock dissipation in the ICM was shown to be substantial (e.g., Ryu et al. 2003). Hence, the consistency of our model for proton acceleration with the non-detection of cluster \u03b3-rays should be further examined by considering the details of the characteristics of shocks in simulated galaxy clusters. 4. SUMMARY The DSA e\ufb03ciency for CR protons at low Ms Q\u2225shocks in the high-\u03b2 plasmas of the ICM has yet to be investigated through kinetic plasma simulations. HRKM18 studied the injection and the early acceleration of protons up to pmax/mic \u22480.5 at such shocks through 1D PIC simulations, adopting reduced mass ratios of mi/me. On the other hand, CS14a, CS14b, and CPS15 carried out hybrid simulations to study the DSA of protons, but considered only high Ms shocks in \u03b2 \u22481 plasmas. 
Here, we revisited the test-particle DSA model for low Ms shocks with a thermal leakage injection recipe that was previously presented in Kang & Ryu (2010). Re\ufb02ecting new \ufb01ndings of recent plasma simulations, we improved the analytic DSA model by accounting for the transfer of the postshock thermal energy to the CR energy and the weakening of the subshock due to the dynamical feedback of the CR pressure to the shock structure. We \ufb01rst set up an approximate analytic solution, fCR(p), for CR protons in weak Q\u2225-shocks. We then calculated the injection fraction, \u03be, the postshock CR \fProton Acceleration in ICM Shocks 9 energy fraction, ECR,2/Esh, and the acceleration e\ufb03ciency, \u03b7, of CR protons. The main aspects of our model and the main results are summarized as follows. 1. In weak shocks with Ms \u22725, above the injection momentum, pinj = Qi pth,p, fCR(p) follows the testparticle DSA power-law, whose slope is determined by the shock compression ratio. 2. According to plasma simulations such as CS14a, CPS15, and HRKM18, as CR protons are accelerated to higher energies, the postshock gas temperature T2 and the normalization of fCR decreases (see Figure 2). Thus, in our model, while the injection momentum, pinj, is assumed to be \ufb01xed, the injection parameter increases as Qi = Qi,0/\u221aRT, where RT is the reduction factor of the postshock temperature. Then Qi determines the CR spectrum according to Equations (2)-(6). We adopt Qi,0 \u22483.3 \u22123.5, extrapolating the results of previous hybrid simulations. 3. In our model, as fCR(p) extends to higher pmax/mpc \u226b1, \u03be \ufb01rst increases and then decreases due to the reduction of T2 and the increase of Qi, although \u03b7 monotonically increases and approaches a time-asymptotic value. Such a behavior was previously seen in \ufb02uid DSA simulations (e.g., Kang et al. 2002). 4. Both \u03be and ECR,2/Esh depend on Qi,0 and also the lower bound of the integrals, pmin, especially in the case of very weak shocks (see Figure 4). For pmin \u2248pinj and Qi,0 = 3.5, the CR acceleration e\ufb03ciency ranges as \u03b7 \u22483.5 \u00d7 10\u22123 \u22120.01 for 2.25 \u2272Ms \u22725.0. If pmin \u2248780 MeV/c is adopted, it decreases to \u03b7 \u2248 1.1 \u00d7 10\u22123 \u22120.01 for the same Mach number range. If Qi,0 = 3.3 is adopted, \u03b7 becomes larger by a factor of 1.5 \u22122, compared to the case with Qi,0 = 3.5. 5. In subcritical shocks with Ms < 2.25, protons may not be e\ufb03ciently injected into DSA, so we expect that \u03b7 would be negligible at these very weak shocks (HRKM18). In a parallel paper (Ha et al. 2019), we will investigate the \u03b3-ray emission as well as the neutrino emission from simulated galaxy clusters due to the inelastic collisions of CR protons and ICM thermal protons, based on the analytic CR proton spectrum proposed in this paper. In particular, we will check whether the prediction for \u03b3-ray emission complies with the upper limits imposed by Fermi LAT observations. We thank the anonymous referee for critical comments that help us improve this paper from its initial form. D.R. and J.-H. H. were supported by the National Research Foundation of Korea (NRF) through grants 2016R1A5A1013277 and 2017R1A2A1A05071429. H.K. was supported by the Basic Science Research Program of the NRF through grant 2017R1D1A1A09000567." 
+ }, + { + "url": "http://arxiv.org/abs/0910.3361v1", + "title": "Intergalactic Magnetic Field and Arrival Direction of Ultra-High-Energy Protons", + "abstract": "We studied how the intergalactic magnetic field (IGMF) affects the\npropagation of super-GZK protons that originate from extragalactic sources\nwithin the local GZK sphere. Toward this end, we set up hypothetical sources of\nultra-high-energy cosmic-rays (UHECRs), virtual observers, and the magnetized\ncosmic web in a model universe constructed from cosmological structure\nformation simulations. We then arranged a set of reference objects mimicking\nactive galactic nuclei (AGNs) in the local universe, with which correlations of\nsimulated UHECR events are analyzed. With our model IGMF, the deflection angle\nbetween the arrival direction of super-GZK protons and the sky position of\ntheir actual sources is quite large with the mean value of $<\\theta > \\sim\n15^{\\circ}$ and the median value of $\\tilde \\theta \\sim 7 - 10^{\\circ}$. On the\nother hand, the separation angle between the arrival direction and the sky\nposition of nearest reference objects is substantially smaller with $ \\sim\n3.5 - 4^{\\circ}$, which is similar to the mean angular distance in the sky to\nnearest neighbors among the reference objects. This is a direct consequence of\nour model that the sources, observers, reference objects, and the IGMF all\ntrace the matter distribution of the universe. The result implies that\nextragalactic objects lying closest to the arrival direction of UHECRs are not\nnecessary their actual sources. With our model for the distribution of\nreference objects, the fraction of super-GZK proton events, whose closest AGNs\nare true sources, is less than 1/3. We discussed implications of our findings\nfor correlation studies of real UHECR events.", + "authors": "Dongsu Ryu, Santabrata Das, Hyesung Kang", + "published": "2009-10-18", + "updated": "2009-10-18", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE", + "astro-ph.CO" + ], + "main_content": "Introduction The nature and origin of ultra-high-energy cosmic rays (UHECRs), especially above the so-called Greisen-Zatsepin-Kuz\u2019min (GZK) energy of EGZK \u224850 EeV (1 EeV = 1018 eV), has been one of most perplexing puzzles in astrophysics over \ufb01ve decades and still remains to be understood (see Nagano & Watson 2000, for a review). The highest energy CR detected so far is the Fly\u2019s Eye event with an estimated energy of \u223c300 EeV (Bird et al. 1994). At these high energies protons and nuclei cannot be con\ufb01ned and accelerated e\ufb00ectively within our Galaxy, so the sources of UHECRs are likely to be extragalactic. At energies higher than EGZK, it is expected that protons lose energy and nuclei are photo-disintegrated via the interactions with the cosmic microwave background radiation (CMB) along their trajectories in the intergalactic space (Greisen 1966; Zatsepin & Kuz\u2019min 1966; Puget et al. 1976). The former is known as the GZK e\ufb00ect. So a signi\ufb01cant suppression in the energy spectrum above EGZK could be regarded as an observational evidence for the extragalactic origin of UHECRs (see, e.g., Berezinsky et al. 2006). However, the accurate measurement of the UHECR spectrum is very di\ufb03cult, partly because of extremely low \ufb02ux of UHECRs. But a more serious hurdle is the uncertainties in the energy calibration inherent in detecting and modeling extensive airshower events (e.g., Nagano & Watson 2000; Watson 2006). 
Nevertheless, both the Yakutsk Extensive Air Shower Array (Yakutsk) and the High Resolution Fly\u2019s Eyes (HiRes) reported observations of the GZK suppression (Egorova et al. 2004; Abbasi et al. 2008a), while the Akeno Giant Air Shower Array (AGASA) claimed a con\ufb02icting \ufb01nding of no suppression (Shinozaki & Teshima 2004). More recent data from the Pierre Auger Observatory (Auger) support the existence of the GZK suppression (Abraham et al. 2008b; Sch\u00a8 ussler et al. 2009). Below EGZK, however, the four experiments reported the \ufb02uxes that are di\ufb00erent from each other by up to a factor of several, implying the possible existence of systematic errors in their energy calibrations (Berezinsky 2009). The overall sky distribution of the arrival directions of UHECRs below EGZK seems to support the isotropy hypothesis (see, e.g., Nagano & Watson 2000). This is consistent with the expectation of uniform distribution of extragalactic sources; the interaction length (i.e. horizon distance) of protons below EGZK is a few Gpc and the universe can be considered homogeneous and isotropic on such a large scale. The horizon distance for super-GZK events, however, decreases sharply with energy and RGZK \u223c100 Mpc for E = 100EeV \f\u2013 3 \u2013 (Berezinsky & Grigor\u2019eva 1988). The matter distribution inside the local GZK horizon (RGZK) is inhomogeneous. Since powerful astronomical objects are likely to form at deep gravitational potential wells, we expect the distribution of the UHECR sources would be inhomogeneous as well. Hence, if super-GZK proton events point their sources, their arrival directions should be anisotropic. The anisotropy of super-GZK events, hence, has been regarded to provide an important clue that unveils the sources of UHECRs. So far, however, the claims derived from analyses of di\ufb00erent experiments are often tantalizing and sometimes con\ufb02icting. For instance, with an excessive number of pairs and one triplet in the arrival direction of CRs above 40 EeV, the AGASA data support the existence of small scale clustering (Hayashida et al. 1996; Takeda et al. 1999). On the other hand, the HiRes stereo data are consistent with the hypothesis of null clustering (Abbasi et al. 2004, 2009). The auto-correlation analysis of the Auger data reported a weak excess of pairs for E > 57 EeV (Abraham et al. 2008a). In addition, the Auger Collaboration found a correlation between highest energy events and the large scale structure (LSS) of the universe using nearby active galactic nuclei (AGNs) in the V\u00b4 eron-Cetty & V\u00b4 eron (2006) catalog (The Pierre Auger Collaboration 2007; Abraham et al. 2008a; Hague et al. 2009) as well as using nearby objects in di\ufb00erent catalogs (Aublin et al. 2009). A correlation between highest AGASA events with nearby galaxies from SDSS was reported (Takami et al. 2009). The HiRes data, however, do not show such correlation of highest energy events with nearby AGNs (Abbasi et al. 2008b), but instead show a correlation with distant BL Lac objects (Abbasi et al. 2006). The interpretation of anisotropy and correlation analyses is, however, complicated owing to the intervening galactic magnetic \ufb01eld (GMF) and intergalactic magnetic \ufb01eld (IGMF); the trajectories of UHECRs are de\ufb02ected by the magnetic \ufb01elds as they propagate through the space between sources and us, and hence, their arrival directions are altered. 
Even with considerable observational and theoretical e\ufb00orts, however, the nature of the GMF and the IGMF is still poorly constrained. Yet, models for the GMF generally assume a strength of \u223ca few \u00b5G and a coherence length of \u223c1 kpc for the \ufb01eld in the Galactic halo (see, e.g., Stanev 1997), and predict the de\ufb02ection of UHE protons due to the GMF to be \u03b8 \u223ca few degrees (see, e.g., Takami & Sato 2008). The situation for the IGMF in the LSS has been confusing. Adopting a model for the IGMF with the average strength of \u27e8B\u27e9\u223c100 nG in \ufb01laments, Sigl et al. (2003) showed that the de\ufb02ection of UHECRs due to the IGMF could be very large, e.g., \u03b8 > 20\u25e6for protons above 100 EeV. On the other hand, Dolag et al. (2005) adopted a model with \u27e8B\u27e9\u223c0.1 nG in \ufb01laments and showed that the de\ufb02ection should be negligible, e.g., \u03b8 \u226a1\u25e6for protons with 100 EeV. Recently, Ryu et al. (2008) proposed a physically motivated model for the IGMF, in \f\u2013 4 \u2013 which a part of the gravitational energy released during structure formation is transferred to the magnetic \ufb01eld energy as a result of the turbulent dynamo ampli\ufb01cation of weak seed \ufb01elds in the LSS of the universe. In the model, the IGMF follows largely the matter distribution in the cosmic web, and the strength and coherence length are predicted to be \u27e8B\u27e9\u223c10 nG and \u223c1 Mpc for the \ufb01eld in \ufb01laments. Such \ufb01eld in \ufb01laments is expected to induce the Faraday rotation (Cho & Ryu 2009), which is consistent with observation (Xu et al. 2006). With this model IGMF, Das et al. (2008) (Paper I hereafter) calculated the trajectories of UHE protons (E > 10 EeV) that were injected at extragalactic sources associated with the LSS in a simulated model universe. We then estimated that only \u223c35 % of UHE protons above 60 EeV would arrive at us with \u03b8 \u22645\u25e6and the average value of de\ufb02ection angle would be \u27e8\u03b8\u27e9\u223c15\u25e6. Note that the de\ufb02ection angle of \u27e8\u03b8\u27e9\u223c15\u25e6is much larger than the angular window of 3.1\u25e6used by the Auger collaboration in the study of the correlation between highest energy UHECR events and nearby AGNs (The Pierre Auger Collaboration 2007; Abraham et al. 2008a; Hague et al. 2009). In this contribution, as a follow-up work of Paper I, we investigate the e\ufb00ects of the IGMF on the arrival direction of super-GZK protons above 60 EeV coming from sources within 75 Mpc. The limiting parameters for energy and source distance are chosen to match the recent analysis of the Auger collaboration. Without knowing the true sources of UHECRs, the statistics that can be obtained with observational data from experiments are limited; some statistics that are essential to reveal the nature of sources are di\ufb03cult or even impossible to be constructed. On the other hand, with data from simulations, any statistics can be explored. In that sense, simulations complement experiments. Here, with the IGMF suggested by Ryu et al. (2008), we argue that the large de\ufb02ection angle of super-GZK protons due to the IGMF is not inconsistent with the anisotropy and correlation recently reported by the Auger collaboration. However, the large de\ufb02ection angle implies that the nearest object to a UHECR event in the sky is not necessarily its actual source. 
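As a rough consistency check on why deflections of order 10 degrees arise for the field strengths quoted above, one can apply the standard small-angle random-walk estimate theta_rms ~ sqrt(2 D l_c / 9) / r_L for propagation through field cells of coherence length l_c. This simple scaling is not the trajectory integration actually performed in Paper I or here, and the path lengths through magnetized filaments assumed below are illustrative.

```python
import numpy as np

e_esu, c = 4.8032e-10, 2.9979e10            # CGS
Mpc = 3.086e24                              # cm

def theta_rms_deg(E_eV, B_G, D_Mpc, lc_Mpc=1.0):
    rL = (E_eV * 1.6022e-12) / (e_esu * B_G)            # Larmor radius of an ultra-relativistic proton [cm]
    theta = np.sqrt(2.0 * D_Mpc * lc_Mpc / 9.0) * Mpc / rL
    return np.degrees(theta)

for D in (5.0, 10.0, 30.0):                 # assumed path length through ~10 nG filament fields [Mpc]
    print(f'60 EeV proton, B = 10 nG, D = {D:.0f} Mpc: theta_rms ~ {theta_rms_deg(6e19, 1e-8, D):.1f} deg')
```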
In Section 2, we describe our models for the LSS of the universe, IGMF, observers, and sources of UHECRs, reference objects for correlation study, and simulations. In Section 3, we present the results, followed by a summary and discussion in Section 4. 2. Models and Simulations In our study, the following elements are necessary: 1) a model for the IGMF on the LSS, 2) a set of virtual observers that represent \u201cus\u201d, an observer at the Earth, in a statistical way, 3) a set of hypothetical sources of UHE protons with a speci\ufb01ed injection spectrum, and 4) a set of reference objects with which we performed a correlation study of simulated \f\u2013 5 \u2013 events. In Paper I, we described in detail how we set up 1), 2), and 3) by using data from cosmological structure formation simulations. Below, we brie\ufb02y summarize models for 1), 2), and 3) and explain in details the reason to introduce \u201creference objects\u201d in this study. 2.1. Large Scale Structure of the Universe We assumed a concordance \u039bCDM model with the following parameters: \u2126BM = 0.043, \u2126DM = 0.227, and \u2126\u039b = 0.73, h \u2261H0/(100 km/s/Mpc) = 0.7, and \u03c38 = 0.8. The model universe for the LSS was generated through simulations in a cubic region of comoving size 100h\u22121(\u2261143) Mpc with 5123 grid zones for gas and gravity and 2563 particles for dark matter, using a PM/Eulerian hydrodynamic cosmology code described in Ryu et al. (1993). The simulations have a uniform spatial resolution of 195.3h\u22121 kpc. The standard set of gasdynamic variables, the gas density, \u03c1g, temperature, T, and the \ufb02ow velocity, v, were used to calculate the quantities required in our model such as the X-ray emission weighted temperature TX, the vorticity, \u03c9, and the turbulent energy density, \u03b5turb, at each grid. 2.2. Intergalactic Magnetic Field We adopted the IGMF from the model by Ryu et al. (2008); the model proposes that turbulent-\ufb02ow motions are induced via the cascade of the vorticity generated at cosmological shocks during the formation of the LSS of the universe, and the IGMF is produced as a consequence of the ampli\ufb01cation of weak seed \ufb01elds of any origin by the turbulence. Then, the energy density (or the strength) of the IGMF can be estimated with the eddy turnover number and the turbulent energy density as follow: \u03b5B = \u03c6 \u0012 t teddy \u0013 \u03b5turb. (1) Here, the eddy turnover time is de\ufb01ned as the reciprocal of the vorticity at driving scales, teddy \u22611/\u03c9driving (\u03c9 \u2261\u2207\u00d7 v), and \u03c6 is the conversion factor from turbulent to magnetic energy that depends on the eddy turnover number t/teddy. The eddy turnover number was estimated as the age of universe times the magnitude of the local vorticity, that is, tage \u03c9. The local vorticity and turbulent energy density were calculated from cosmological simulations for structure formation described above. A functional form for the conversion factor was derived from a separate, incompressible, magnetohydrodynamic (MHD) simulation of turbulence dynamo. For the direction of the IGMF, we used that of the passive \ufb01elds from cosmological \f\u2013 6 \u2013 simulations, in which magnetic \ufb01elds were generated through the Biermann battery mechanism (Biermann 1950) at cosmological shocks and evolved passively along with \ufb02ow motions (Kulsrud et al. 1997; Ryu et al. 1998). 
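A small sketch of how Equation (1) translates a turbulence energy density into a field strength, using the standard magnetic energy density eps_B = B^2/(8 pi); the eps_turb and phi values below are illustrative placeholders rather than numbers extracted from the simulation data.

```python
import numpy as np

def igmf_strength_nG(eps_turb, phi):
    eps_B = phi * eps_turb                         # Eq. (1): magnetic energy density [erg/cm^3]
    return np.sqrt(8.0 * np.pi * eps_B) * 1.0e9    # field strength in nG (Gauss * 1e9)

for eps_turb, phi in [(1.0e-15, 0.03), (1.0e-15, 0.1), (1.0e-15, 0.3)]:
    print(f'eps_turb = {eps_turb:.0e} erg/cm^3, phi = {phi}: B ~ {igmf_strength_nG(eps_turb, phi):.0f} nG')
```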
In principle, if we had performed full MHD simulations, we could have followed the ampli\ufb01cation of the IGMF through turbulence dynamo. In practice, however, the currently available computational resources do not allow a numerical resolution high enough to reproduce the full development of MHD turbulence. Since the numerical resistivity is larger than the physical resistivity by many orders of magnitude, the growth of magnetic \ufb01elds is saturated before dynamo action becomes fully operative (see, e.g., Kulsrud et al. 1997). This is the reason why we adopted the model of Ryu et al. (2008) to estimate the strength of the IGMF, but we still used the the passive \ufb01elds from cosmological simulations to model the \ufb01eld direction. Figure 1 shows the distribution of magnetic \ufb01eld strength in a slice of (143 Mpc)2 in our model universe. It shows that the IGMF is structured like the matter in the cosmic web. As a matter of fact, the distribution of the IGMF is very well correlated with that of matter. The strongest magnetic \ufb01eld of B \u22730.1\u00b5G is found in and around clusters, while the \ufb01eld is weaker in \ufb01laments, sheets, and voids. In \ufb01laments which are mostly composed of the warm-hot intergalactic medium (WHIM) with T = 105 \u2212107 K, the IGMF has \u27e8B\u27e9\u223c10 nG and \u27e8B2\u27e91/2 \u223ca few \u00d7 10 nG (Ryu et al. 2008). Note that the de\ufb02ection of UHECRs arises mostly due to the \ufb01eld in \ufb01laments (see Paper I). The energy density of the IGMF in \ufb01laments is \u03b5B \u223c10\u221216 ergs cm\u22123, which is a few times smaller than the gas thermal energy density and an order of magnitude smaller than the gas kinetic energy density there.1 The IGMF in \ufb01laments induces the Faraday rotation; the root-mean-square (rms) value of rotation measure (RM) is predicted to be a few rad m\u22121 (Cho & Ryu 2009). That is consistent with the values of RM toward the Hercules and Perseus-Pisces superclusters reported in Xu et al. (2006).2 2.3. Observer Locations In the study of the arrival direction of UHECRs, the IGMF around us, that is, in the Local Group, is important too. It would have been ideal to place \u201cthe observer\u201d where the 1We note that our model does not include a possible contribution to the IGMF from galactic black holes, AGN feedback (see, e.g., Kronberg et al. 2001); so our model may be regarded to provide a baseline for the IGMF. With the contribution, the real IGMF might be even stronger, resulting in even larger de\ufb02ection (see Section 3.1). 2The values of |RM| in Xu et al. (2006) is an order of magnitude larger than the value above, a few rad m\u22121. However, Xu et al. (2006) quoted the path length, which is about two orders of magnitude larger. \f\u2013 7 \u2013 IGMF is similar to that in the Local group. Unfortunately, however, little is known about the IGMF in the Local Group. Hence, instead, we placed \u201cvirtual\u201d observers based on the X-ray emission weighted temperature TX. The groups of galaxies that have the halo temperature similar to that of the Local Group, 0.05 keV < kTX < 0.5 keV (Rasmussen & Pedersen 2001), were identi\ufb01ed. About 1400 observer locations were chosen by the temperature criterion. In reality, there should be only one observer on the Earth. 
But in our modeling we could choose a number of observer locations to represent statistically \u201cus\u201d without loss of generality, since the simulated universe is only one statistical representation of the real universe. Then, we modeled observers as a sphere of radius 0.5h\u22121 Mpc located at the center of host groups, in order to reduce the computing time to a practical level. The distribution of handful observers are shown schematically in Figure 1. One can see that the observers (groups) are not distributed uniformly, but instead they are located mostly along \ufb01laments. 2.4. AGNs as Reference Objects As noted in Introduction, the Auger Collaboration recently reported a correlation between the direction of their highest energy events and the sky position of nearby AGNs from the 12th edition of the V\u00b4 eron-Cetty & V\u00b4 eron (2006) (VCV) catalog; the correlation has the maximum signi\ufb01cance for UHECRs with E \u227360 EeV and AGNs with distance D \u227275 Mpc (The Pierre Auger Collaboration 2007; Abraham et al. 2008a; Hague et al. 2009). There are about 450 AGNs with D \u227275 Mpc (more precisely, 442 AGNs with redshift z \u22640.018, for which the maximum signi\ufb01cance of the correlation was found) in the VCV catalog. In that study it is not known which subclass of those AGNs or what fraction of them are really true sources of UHECRS. Here, we regard those AGNs as \u201creference objects\u201d, with which correlation studies are performed. In order to compare our correlation study with that of the Auger collaboration, we speci\ufb01ed the following condition to determine \u201cmodel\u201d reference objects in the simulated universe: 1) the number of the objects within 75 Mpc from each observer should be on average \u223c500, and 2) their spatial distribution should trace the LSS in a way similar to the AGN distribution in the real universe. To set up the location of such reference objects, we identi\ufb01ed \u201cclusters\u201d of galaxies with kTX \u22730.1 keV in the simulated universe. Of course some of these clusters with kTX \u2272a few keV should be classi\ufb01ed as groups of galaxies. But for simplicity we call all of them as clusters. The reason behind this selection condition is that the gas temperature is directly related with the depth of gravitational potential well; the hottest gas resides in the densest, most nonlinear regions of the LSS where the most luminous and energetic objects (e.g. AGNs) form through frequent mergers of galaxies. We \f\u2013 8 \u2013 then assumed that each cluster hosts one reference object at its center. For each observer, we generated a list of reference objects inside a sphere of radius 75 Mpc, whose number is on average \u223c500; the exact number of reference objects varies somewhat for di\ufb00erent observers. Then, each observer has its own sky distribution of reference objects, with which we studied the correlation of simulated events. Although our reference objects could be any astronomical objects that trace the LSS, hereafter we refer them as \u201cmodel AGNs\u201d, because the selection criteria were chosen to match the number of AGNs with that from the VCV catalog. Figure 1 shows the schematic distribution of handful model AGNs at the center of host clusters. Obviously the host clusters (and the AGNs) are not uniformly distributed either. In our set-up, the distance to the model AGNs, D, can be arbitrarily small. 
In reality, however, the closest AGN in the VCV catalog is NGC 404 at D \u223c3 Mpc in the constellation Andromeda (Karachentsev et al. 2004). So the model AGNs with the distance from each observer D < Dmin \u22613 Mpc were excluded. We checked the angular distance Q between a given reference object to its nearest neighboring object. For a set of 442 objects (the number of the AGNs with z \u22640.018 in the VCV catalog), if they are distributed isotropically over the sky of 4\u03c0 radian, the average value of Q would be \u27e8Qiso\u27e9\u224811\u25e6. With the 442 AGNs from the VCV catalog, on the other hand, \u27e8QVCV\u27e9= 3.55\u25e6. The fact that \u27e8QVCV\u27e9< \u27e8Qiso\u27e9means that the distribution of the AGNs from the VCV catalog is not isotropic, but highly clustered, following the matter distribution in the LSS of the universe. We note that \u27e8QVCV\u27e9= 3.55\u25e6is similar to the angular window of 3.1\u25e6used in the Auger study. Clearly this agreement is not accidental, but rather consequential. For the sets of \u223c500 model AGNs in our simulations, the average angular distance is \u27e8QAGN\u27e9= 3.68\u25e6\u00b11.66\u25e6. The error was estimated with \u27e8QAGN\u27e9for \u223c1400 observers. That fact that \u27e8QAGN\u27e9\u223c\u27e8QVCV\u27e9indicates that the spatial clustering of our model AGNs is on average comparable to that of the AGNs from the VCV catalog. This provides a justi\ufb01cation for our selection criteria for model AGNs in the simulated universe. We note that Q is an intrinsic property of the distribution of the reference objects in the sky and has nothing to do with UHECRs. 2.5. Sources of UHECRs Although AGN is one of viable candidates that would produce UHECRs (see, e.g., Nagano & Watson 2000, for the list of viable candidates), there is no compelling reason that all the nearby AGNs are the sources of UHECRs. In this paper, we considered three models with di\ufb00erent numbers of sources, Nsrc (see Table 1), to represent di\ufb00erent subsets of AGNs. \f\u2013 9 \u2013 1) Among AGNs, radio galaxies are considered to be the most promising sources of UHECRs (see, e.g., Biermann & Strittmatter 1987), and there are 28 known radio galaxies within D \u226475 Mpc. So in Model C, we considered on average 28 model AGNs located at 28 hottest host clusters (kTx \u22730.8keV) within a sphere of radius 75 Mpc as true sources of UHECRs. 2) Based on the ratio of singlet to doublet events, on the other hand, Abraham et al. (2008a) argued that the lower limit on the number of sources of UHECRs would be around 61. Following this claim, in Model B, we regarded on average 60 model AGNs located at 60 hottest host clusters (kTx \u22730.55keV) as true sources of UHECRs. 2) In Model A, we regarded all the model AGNs (reference objects) as true sources of UHECRs. 2.6. Simulations of Propagation of UHE Protons At sources, UHE protons were injected with power-law energy spectrum; Ninj(Einj) \u221d E\u2212\u03b3 inj for 6\u00d71019 eV \u2264Einj \u22641021 eV, where \u03b3 is the injection spectral index. We considered the two cases of \u03b3 = 2.7 and 2.4. At each source, protons were randomly distributed over a sphere of radius 0.5h\u22121 Mpc, and then launched in random directions. We then followed the trajectories of UHE protons in our model universe with the IGMF, by numerically integrating the equations of motion; dr dt = v, dp dt = e (v \u00d7 B) , (2) where p is the momentum. 
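A minimal sketch of integrating the equations of motion in Equation (2) for an ultra-relativistic proton, written in SI units where the Lorentz force takes exactly the quoted form. A uniform 10 nG field and a plain RK4 stepper are assumed purely for illustration; the actual calculation propagates particles through the structured IGMF of the simulated volume and includes the energy losses described next.

```python
import numpy as np

e, mp, c = 1.602e-19, 1.673e-27, 2.998e8     # SI: charge [C], proton mass [kg], light speed [m/s]
Mpc = 3.086e22                               # m

def velocity(p):
    E = np.sqrt(np.dot(p, p) * c**2 + (mp * c**2)**2)
    return p * c**2 / E                      # relativistic v = p c^2 / E

def rhs(state, B):
    r, p = state[:3], state[3:]
    v = velocity(p)
    return np.concatenate([v, e * np.cross(v, B)])   # dr/dt = v, dp/dt = e (v x B)

def rk4_push(state, B, dt, nstep):
    for _ in range(nstep):                   # classical 4th-order Runge-Kutta steps
        k1 = rhs(state, B)
        k2 = rhs(state + 0.5 * dt * k1, B)
        k3 = rhs(state + 0.5 * dt * k2, B)
        k4 = rhs(state + dt * k3, B)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

E0 = 6.0e19 * 1.602e-19                      # 60 EeV in Joules
p0 = np.array([np.sqrt((E0 / c)**2 - (mp * c)**2), 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0e-12])            # uniform 10 nG (= 1e-12 T), perpendicular to p0
state = np.concatenate([np.zeros(3), p0])

dt = 0.01 * Mpc / c                          # ~0.01 Mpc of path length per step
out = rk4_push(state, B, dt, nstep=100)      # ~1 Mpc, about one coherence length of the model IGMF
v0, v1 = velocity(p0), velocity(out[3:])
theta = np.degrees(np.arccos(np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))))
print(f'deflection after ~1 Mpc in a uniform 10 nG field: {theta:.1f} deg')
```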
During propagation, UHE protons interact with the CMB, and the dominant energy-loss processes are pion and pair production. The energy loss was treated with the continuous-loss approximation (Berezinsky et al. 2006). The adiabatic loss due to the cosmic expansion was ignored.

Table 1. Models with different numbers of sources

Model   Nsrc    Host clusters        ⟨θ⟩ᵃ [deg]   θ̃ [deg]   ⟨Ssim⟩ᵃ [deg]   S̃sim [deg]
A       ∼500    kTX ≳ 0.1 keV        13.98        7.01       3.58            2.80
B       60      kTX ≳ 0.55 keV       15.33        8.80       3.97            3.19
C       28      kTX ≳ 0.8 keV        17.76        10.45      4.23            3.43

ᵃ The deflection angle θ and the separation angle S are defined in Section 3.

We let the UHE protons continue their journey, visiting several observers during flight, until their energy falls to 60 EeV. At the observers, the events with E ≥ 60 EeV were recorded and analyzed.

3. Results

3.1. Deflection Angle

In Paper I, we considered the deflection angle, θ, between the arrival direction of UHECR events and the sky position of their sources (see Figure 2). Obviously this angle can be calculated only when the true sources are known, which is not the case in experiments. In the simulated universe, our model AGNs and observers are located in strongly magnetized regions. As illustrated in Figure 1, UHECRs first have to escape from the magnetic halos surrounding their sources, then travel through more or less void regions (path 1) or through filaments (path 2), and finally penetrate into the magnetic halos around observers. So the degree of deflection depends not only on the magnetic field along the trajectories but also on the fields at the host clusters and groups of sources and observers. Since the gas temperature, the depth of the gravitational potential well, and the magnetic field energy density are related as kTX ∝ Φ ∝ εB in our model, hotter clusters and groups would have stronger fields. Hence, we expect that if sources and observers are located at hotter hosts, θ would be larger on average. Figure 3 shows θ versus Dθ for the UHE proton events recorded at observers in our simulations, where Dθ denotes the distance to the actual source of each event. The top, middle, and bottom panels are for Models A, B, and C, respectively. Only the case of injection spectral index γ = 2.7 is presented; the case of γ = 2.4 is similar. Each dot represents one simulated event, and there are about 10⁵ events in each model. The upper circles connected with dotted lines represent the mean values of θ in the distance bins [Dθ, Dθ + ΔDθ]. The mean deflection angle averaged over all the simulated events is ⟨θ⟩ = 13.98°, 15.33°, and 17.76° for γ = 2.7 in Models A, B, and C, respectively. The lower circles connected with solid lines represent the median values of θ; the median over all the simulated events is θ̃ = 7.01°, 8.80°, and 10.45° for γ = 2.7 in Models A, B, and C, respectively. The values of ⟨θ⟩ and θ̃ for γ = 2.4 are similar. The marks connected with vertical solid lines on both sides of the median values are the first and third quartiles in the distance bins, which provide a measure of the dispersion of θ. We note the following points.
(1) With our IGMF, the mean de\ufb02ection angle of UHE protons due to the IGMF is quite large, much larger than the angular window of 3.1\u25e6used \f\u2013 11 \u2013 in the Auger correlation study. It is also much larger than the mean de\ufb02ection angle that is expected to result from the Galactic magnetic \ufb01eld, which is a few degrees (Takami & Sato 2008). (2) The mean de\ufb02ection angle is largest in Model C and smallest in Model A. Recall that in Model C sources are located only at 28 hottest clusters, while in Model A all 500 clusters include sources (see Table 1). The UHECR events from hotter clusters tend to experience more de\ufb02ection, as noted above. So the mean of de\ufb02ection angles in the model with hotter host clusters is larger. (3) The mean value of \u03b8 has a minimum at D\u03b8,min \u223c20\u221230 Mpc, which compares to a typical length of \ufb01laments. As we pointed out in Paper I, this is a consequence of the structured magnetic \ufb01elds that are concentrated along \ufb01laments and at clusters. In an event with D\u03b8 < D\u03b8,min, the source and observer are more likely to belong to the same \ufb01lament, and so the particle is more likely to travel through strongly magnetized regions and su\ufb00er large de\ufb02ection (see the path 2 in Figure 1). In the opposite regime, the source and observer are likely to belong to di\ufb00erent \ufb01laments, so the particle would travel through void regions (see the path 1 in Figure 1). (4) For the events with D\u03b8 > D\u03b8,min, the mean and dispersion of \u03b8 increase with D\u03b8. Such trend is expected, since in the di\ufb00usive transport model of the propagation of UHECRs, the de\ufb02ection angle increases with distance as \u03b8rms \u221d\u221aD\u03b8 (see, e.g., Kotera & Lemoine 2009, and references therein). (5) There are more events from nearby sources than from distant sources, although all the sources inject the same number of UHECRs in our model. The smaller number of events for larger D\u03b8 should be mostly a consequence of energy loss due to the interaction with the CMB. 3.2. Separation Angles In this study, we also consider the separation angle, S, between the arrival direction of UHECR events and the sky position of nearest reference objects (see Figure 2). The angle can be calculated with observation data, once a class of reference objects (e.g. AGNs, galaxies gamma-ray bursts, and etc.) is speci\ufb01ed. For example, The Pierre Auger Collaboration (2007) took the AGNs within 75 Mpc in the VCV catalog as the reference objects for their correlation study. However, for a given UHECR event, the nearest AGN in the sky may not be the actual source; hence, the separation angle between a UHECR event and its nearest AGN is not necessarily the same as the de\ufb02ection angle of the event (Hillas 2009; Ryu et al. 2009). We obtained S for simulated events with our model reference objects (AGNs). Figure 4 shows S versus DS. Here, DS denotes the distance to nearest AGNs. Again only the case of \u03b3 = 2.7 is presented, and the case of \u03b3 = 2.4 is similar (see Figure 5). The circles connected with solid line represent the mean values of S for the events with nearest AGNs \f\u2013 12 \u2013 in the distance bins of [DS, DS + \u2206DS]. The mean separation angle averaged over all the simulated events is \u27e8Ssim\u27e9= 3.58\u25e6, 3.97\u25e6, and 4.23\u25e6for \u03b3 = 2.7 in Models A, B, and C, respectively. We note the following points. 
(1) The mean separation angle is much smaller than the mean de\ufb02ection angle, \u27e8Ssim\u27e9\u223c(1/4)\u27e8\u03b8\u27e9(see the next section for further discussion). (2) The mean separation angle is largest in Model C and smallest in Model A, although the di\ufb00erence of \u27e8Ssim\u27e9among the models is less than that of \u27e8\u03b8\u27e9. With larger de\ufb02ection angles in Model C, there is a higher probability for a event to be found further away from the region where model AGNs are clustered, so S is on average larger as well. (3) Similarly as in the \u03b8 versus D\u03b8 distribution, the distribution of \u27e8Ssim\u27e9has the minimum at around DS \u223c35 Mpc. This is again a signature of the \ufb01lamentary structures of the LSS. (4) Contrary to D\u03b8, there are more events with larger DS than with smaller DS. It is simply because there are more AGNs with larger DS. 3.3. Comparison with the Auger Data We also obtained S for the 27 Auger events of highest energies, published in Abraham et al. (2008a), with 442 nearby AGNs from the VC catalog. In Figure 5, we compare the S versus Ds distribution for the Auger data with that of our simulations. The upper circles connected with dashed/dot-dashed lines represent the mean values of Ssim of simulated events as in Figure 4, but this time both cases of \u03b3 = 2.7 and 2.4 are presented. The lower circles connected with solid/dotted lines represent the median values of Ssim. The di\ufb00erence between the cases of \u03b3 = 2.7 and 2.4 is indeed small. The median value of Ssim for all the simulated events is \u02dc Ssim = 2.80\u25e6, 3.19\u25e6, and 3.43\u25e6for \u03b3 = 2.7 in Models A, B, and C, respectively. Asterisks denote the Auger events. The mean separation angle for the Auger events is \u27e8SAuger\u27e9= 3.23\u25e6 for 26 events, excluding one event with large S (\u224827\u25e6), while \u27e8SAuger\u27e9= 4.13\u25e6for all the 27 events. We note that \u27e8SAuger\u27e9\u223c\u27e8Ssim\u27e9, even though the mean de\ufb02ection angle is much larger than the mean separation angle in our simulations, that is, \u27e8\u03b8\u27e9\u226b\u27e8Ssim\u27e9. In all the models, about a half of the Auger events lie within the quartile marks: 15, 13, and 13 events for Models A, B, and C, respectively. With \u27e8\u03b8\u27e9\u223c15\u25e6in our simulations, one might naively expect that such large de\ufb02ection would erase the anisotropy in the arrival direction and the correlation between UHECR events and AGNs (or the LSS of the universe). However, we argue that the large de\ufb02ection does not necessarily lead to the general isotropy of UHECR arrival direction, if the agent of de\ufb02ection, the IGMF, traces the local LSS. Suppose that UHECRs are ejected from sources inside the Local Supercluster. Some of them will \ufb02y along the Supergalactic plane and arrive \f\u2013 13 \u2013 at the Earth; their trajectories would be de\ufb02ected by the magnetic \ufb01eld between sources and us, but the arrival directions still point toward the Supergalactic plane. Others may be de\ufb02ected into void regions, and then they will have less chance to get re\ufb02ected back to the direction toward us due to lack of the turbulent IGMF there. In a simpli\ufb01ed picture, we may regard the irregularities in the IGMF as the \u2018scatters\u2019 of UHECRs; then the last scattering point will be the arrival direction of UHECRs (see Kotera & Lemoine 2009, for a description of de\ufb02ection of UHECRs based on this picture). 
As a result, even with large de\ufb02ection, we still see more UHECRs from the LSS of clusters, groups, and \ufb01laments, and fewer UHECRs from void regions where both sources and scatters are underpopulated. Consequently, the anisotropy in the arrival direction of UHECRs can be maintained and the arrival direction still follows the LSS of the universe. Below the GZK energy, the proton horizon reaches out to a few Gpc, so the source distribution should look more or less isotropic and the arrival directions should not show a correlation with nearby AGNs. Thus, we do not expect to see anisotropy and correlation for UHECRs with such energy. In Section 2.4, we showed that the degree of clustering of our model AGNs is similar to that of AGNs from the VCV catalog; the mean of the angular distance Q between a given AGN to its nearest neighboring AGN is similar, \u27e8QAGN\u27e9\u223c\u27e8QVCV\u27e9. In both cases, AGNs follow the matter distribution in the LSS, highly structured and clustered. We point that if along with the reference objects, the CR sources and the IGMF also follow the matter distribution, with \u27e8\u03b8\u27e9\u226b\u27e8S\u27e9, \u27e8S\u27e9\u223c\u27e8Q\u27e9is expected. The result that \u27e8SAuger\u27e9\u223c\u27e8Ssim\u27e9 \u223c\u27e8QAGN\u27e9\u223c\u27e8QVCV\u27e9is indeed consistent with such expectation. This means, however, that the statistics of S re\ufb02ect mainly on the distribution of reference objects, rather than the de\ufb02ection angle. To further compare the Auger data with our simulations, we plot the cumulative fraction of events, F(\u2264log S), versus log S for the simulated events (lines) and the Auger events (open circles) in Figure 6. The solid and dotted lines are for the cases of \u03b3 = 2.7 and 2.4, respectively, and the di\ufb00erence between the two cases is again small. The KolmogorovSmirnov (K-S) test yields the maximum di\ufb00erence of D = 0.17, 0.23, and 0.26 between the Auger data and the simulation data (\u03b3 = 2.7) in Models A, B, and C, respectively; the signi\ufb01cance level of the null hypothesis that the two distributions are statistically identical is P \u223c0.37, 0.09, and 0.04 for Models A, B, and C, respectively. So the null hypothesis that the two distributions for our simulated events and the Auger events are statistically identical cannot be rejected, especially for Model A. This would be a justi\ufb01cation for our models of the IGMF, sources of UHECRs, and reference objects. Also we see that Model A with more sources is preferred over Models B and C with fewer sources. But this does not \f\u2013 14 \u2013 necessary mean that all the AGNs would be the actual sources of UHECRs. We note that the number of the Auger events used, 27, is still limited. In addition, we consider only UHE protons in this paper. Hence, before we argue the above statements for sure, we will need more observational events and need to know the composition of UHECRs (see Summary and Discussion for further discussion on composition). 3.4. Probability of Finding True Sources With \u27e8\u03b8\u27e9\u226b\u27e8Ssim\u27e9, there is a good chance that the AGNs found closest to the direction of UHECRs are not the actual sources of UHECRs. To illustrate this point, we \ufb01rst show the distribution of DS versus D\u03b8 in Figure 7. For some events, the closest AGNs are the actual sources. They are represented by the diagonal line. Around the diagonal line, a noticeable fraction of events are found. 
Those are the events for which the closest AGNs are found around the true sources; both sources and close-by AGNs are clustered as a part of the LSS of the universe. For the events away from the diagonal line, it is more likely that DS > D\u03b8. It is because there are more AGNs with larger DS; away from true sources, observed events are more likely to pick up closest AGNs with larger DS. To quantify the consequence of \u27e8\u03b8\u27e9\u226b\u27e8Ssim\u27e9, we calculated the fraction of true identi\ufb01cation, f, as the ratio of the number of events for which nearest AGNs are their true sources to the total number of simulated events. This is a measure of the probability to \ufb01nd the true sources of UHECRs, when nearest candidates are blindly chosen (which is the best we can do with observed data). In Figure 8, we show the fraction as a function of separation angle, S. The fraction is largest in Model A with largest Nsrc, and smallest in Model C with smallest Nsrc. At S \u223c2\u25e6the fraction is about 50 % for Model A, close to 40 % for Model B, and a little above 30 % for Model C, but only 20 \u221230 % at S = 3\u25e6\u22124\u25e6. As the separation angle increases, the fraction decreases gradually to \u223c10 %, indicating lower probabilities to \ufb01nd true sources at larger separation angles. On average, we should expect that in less than 1 out of 3 events, the true sources of UHECRs can be identi\ufb01ed, if our model for the IGMF is valid and UHECRs are protons. 4. Summary and Discussion In the search for the nature and origin of UHECRs, understanding the propagation of charged particles through the magnetized LSS of the universe is important. At present, the details of the IGMF are still uncertain, mainly due to limited available information from \f\u2013 15 \u2013 observation. Here, we adopted a realistic model universe that was described by simulations of cosmological structure formation; our simulated universe represents the LSS, which is dominated by the cosmic web of \ufb01laments interconnecting clusters and groups. The distribution of the IGMF in the LSS of the universe was obtained with a physically motivated model based on turbulence dynamo (Ryu et al. 2008). To investigate the e\ufb00ects of the IGMF on the arrival direction of UHECRs, we further adopted the following models. Virtual observers of about 1400 were placed at groups of galaxies, which represent statistically the Local Group in the simulated model universe. Then, we set up a set of about 500 AGN-like \u201creference objects\u201d within 75 Mpc from each observer, at clusters of galaxies (deep gravitational potential wells) along the LSS. They represent a class of astronomical objects with which we performed a correlation analysis for simulated UHECR events. We considered three models, in which subsets of the reference objects were selected as AGN-like sources of UHECRs (see Table 1). UHE protons of E \u226560 EeV with power-low energy spectrum were injected at those sources, and the trajectories of UHE protons in the magnetized cosmic web were followed. At observer locations, the events with E \u226560 EeV from sources within a sphere of radius 75 Mpc were recorded and analyzed. To characterize the clustering of the reference objects, we calculated the angular distance, Q, from a given reference object to its nearest neighbor. 
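The clustering statistic Q is straightforward to evaluate for any catalog of sky positions. Below is a minimal sketch (not the analysis code of the paper): it takes unit vectors of object positions and returns the mean angular distance to the nearest neighbor; the isotropic mock catalog is only a placeholder, and the values quoted next refer to the actual model-AGN and VCV catalogs. The separation angle S of Section 3.2 follows from the same operation applied between event arrival directions and the catalog positions.

```python
import numpy as np

def mean_nn_angle_deg(unit_vecs):
    """Mean angular distance (deg) from each object to its nearest neighbor,
    for an (N, 3) array of unit vectors of sky positions."""
    cosines = unit_vecs @ unit_vecs.T      # pairwise cos(angular separation)
    np.fill_diagonal(cosines, -1.0)        # exclude self-pairs
    nn_cos = cosines.max(axis=1)           # nearest neighbor = largest cosine
    return float(np.degrees(np.arccos(np.clip(nn_cos, -1.0, 1.0))).mean())

# Mock catalog of ~500 isotropic positions as a stand-in; a real catalog would
# supply unit vectors built from the (RA, Dec) of the reference objects.
rng = np.random.default_rng(1)
vecs = rng.normal(size=(500, 3))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
print("mean nearest-neighbor angle of the mock catalog:",
      round(mean_nn_angle_deg(vecs), 2), "deg")
```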
The mean value for our model AGNs in the simulated universe is \u27e8QAGN\u27e9= 3.68\u25e6\u00b1 1.66\u25e6, while that for 442 AGNs from the VCV catalog is \u27e8QVCV\u27e9= 3.55\u25e6. This demonstrates that the two samples have a similar degree of clustering and are highly structured (e.g. \u27e8Qiso\u27e9\u224811\u25e6for the isotropic distribution). With our model IGMF, the de\ufb02ection angle, \u03b8, between the arrival direction of UHE protons and the sky position of their actual sources, is quite large with the mean \u27e8\u03b8\u27e9\u223c14 \u2212 17.5\u25e6and the median \u02dc \u03b8 \u223c7\u221210\u25e6, depending on models with di\ufb00erent numbers of sources (see Table 1). On the other hand, the separation angle between the arrival direction and the sky position of nearest reference objects is substantially smaller with the mean \u27e8Ssim\u27e9\u223c3.5 \u22124\u25e6 and the median \u02dc Ssim = 2.8 \u22123.5\u25e6. That is, we found that while \u27e8\u03b8\u27e9\u223c4\u27e8Ssim\u27e9, \u27e8Ssim\u27e9is similar to \u27e8QAGN\u27e9. For the Auger events of highest energies in Abraham et al. (2008a), with 442 nearby AGNs from the VCV catalog as the reference objects, the mean separation angle is \u27e8SAuger\u27e9= 3.23\u25e6for the 26 events, excluding one event with large S (\u224827\u25e6), while \u27e8SAuger\u27e9= 4.13\u25e6for all the 27 events. Hence, \u27e8SAuger\u27e9\u223c\u27e8QVCV\u27e9\u223c\u27e8Ssim\u27e9\u223c\u27e8QAGN\u27e9. This implies that the separation angle from the Auger data would be determined primarily by the distribution of reference objects (AGNs), and may not represent the true de\ufb02ection angle. We further tested whether the distributions of separation angle, S, for our simulated events and for the Auger events are statistically comparable to each other. According the \f\u2013 16 \u2013 Kolmogorov-Smirnov test for the cumulative fraction of events, F(\u2264log S), versus log S, the signi\ufb01cance level of the null hypothesis that the two distributions are drawn from the identical population is as large as P \u223c0.37 for Model A (see Table 1). Thus, we argued that our simulation data, especially in Model A, are in a fair agreement with the Auger data. This test also showed that the model with more sources (Model A) is preferred over the models with fewer sources (Models B and C). The fact that \u27e8\u03b8\u27e9\u226b\u27e8Ssim\u27e9implies that the AGNs found closest to the direction of UHECRs may not be the true sources of UHECRs. We estimated the probability of \ufb01nding the true sources of UHECRs, when nearest reference objects are blindly chosen: f(S) is the ratio of the number of true source identi\ufb01cations to the total number of simulated events. This probability is \u223c50\u221230 % at S \u223c2\u25e6, but decreases to \u223c10 % at larger separation angle. On average, in less than 1 out of 3 events, the true sources of UHECRs can be identi\ufb01ed in our simulations, when nearest reference objects are chosen. The distribution of \u03b8 versus D\u03b8 shows a bimodal pattern in which \u03b8 is on average larger either for nearby sources (for D\u03b8 \u227215 Mpc) or for distant sources (for D\u03b8 \u227330 Mpc) with the minimum at the intermediate distance of D\u03b8,min \u223c20 \u221230 Mpc. The distribution of S versus Ds shows a similar, but weaker sign of the bimodal pattern. 
This behavior is a characteristic signature of the magnetized cosmic web of the universe, where \ufb01laments are the most dominant structure. When a large number of super-GZK events are accumulated, we may \ufb01nd the signature of the cosmic web of \ufb01laments in the S versus Ds distribution. Finally, we address the limitations of our work. (1) We worked in a simulated universe with speci\ufb01c models for the elements such as the IGMF, observers, sources, and reference objects, but not in the real universe. So we could make only statistical statements. (2) It has been shown previously that adopting di\ufb00erent models for the IGMF, very di\ufb00erent de\ufb02ection angles are obtained (see Sigl et al. 2003; Dolag et al. 2005; Das et al. 2008). We argue that our model for the IGMF is most plausible, since it is a physically motivated model based on turbulence dynamo without involving an arbitrary normalization (Ryu et al. 2008). Nevertheless, our IGMF model should be con\ufb01rmed further by observation. (3) The sources of UHECRs may not be objects like AGNs, but could be objects extinguished a while ago, such as gamma-ray bursts (see, e.g., Vietri 1995; Waxman 1995), or sources spread over space like cosmological shocks (see, e.g., Kang et. al. 1996; Kang et al. 1997). The injection energy spectrum of power-law with cut-o\ufb00at an arbitrary maximum energy (see Section 2.6) would be unrealistic. The IGMF in the Local Group (see Paper I), although currently little is known, might be strong enough to substantially de\ufb02ect the trajectories of UHECRs. All of those will have e\ufb00ects on the quantitative results, which should be investigated further. (4) Recently, the Auger collaboration disclosed the analysis, which suggests a substantial fraction \f\u2013 17 \u2013 of highest energy UHECRs might be iron nuclei (Unger et al. 2007; Wahberg et al. 2009). This is in contradiction with the analysis of the HiRes data, which indicates highest energy UHECRs would be mostly protons (Sokolsky & Thomson 2007). The issue of composition still needs to be settled down among experiments. Iron nuclei, on the way from sources to us, su\ufb00er much larger de\ufb02ection than protons. Hence, if a substantial fraction of UHECRs is iron, some of our \ufb01ndings will change, a question which should be investigated in the future. The authors would like to thank P. L. Biermann for stimulating discussion. The work was supported by the Korea Research Foundation (KRF-2007-341-C00020) and the Korea Foundation for International Cooperation of Science and Technology (K2070202001607E0200-01610)." + }, + { + "url": "http://arxiv.org/abs/0806.2179v1", + "title": "Shock Waves in the Large-Scale Structure of the Universe", + "abstract": "Cosmological shock waves are induced during hierarchical formation of\nlarge-scale structure in the universe. Like most astrophysical shocks, they are\ncollisionless, since they form in the tenuous intergalactic medium through\nelectromagnetic viscosities. The gravitational energy released during structure\nformation is transferred by these shocks to the intergalactic gas as heat,\ncosmic-rays, turbulence, and magnetic fields. 
Here we briefly describe the\nproperties and consequences of the shock waves in the context of the\nlarge-scale structure of the universe.", + "authors": "Dongsu Ryu, Hyesung Kang", + "published": "2008-06-13", + "updated": "2008-06-13", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "Introduction Shock waves are ubiquitous in astrophysical environments; from solar winds to the largest scale of the universe (Ryu et al. 2003). In the current paradigm of the cold dark matter (CDM) cosmology, the largescale structure of the universe forms through hierarchical clustering of matter. Deepening of gravitational potential wells causes gas to move supersonically. Cosmological shocks form when the gas accretes onto clusters, \ufb01laments, and sheets, or as a consequence of the chaotic Dongsu Ryu Department of Astronomy and Space Science, Chungnam National University, Daejeon 305-764, Korea email: ryu@canopus.cnu.ac.kr Hyesung Kang Department of Earth Sciences, Pusan National University, Pusan 609-735, Korea email: kang@uju.es.pusan.ac.kr \ufb02ow motions of the gas inside the nonlinear structures. The gravitational energy released during the formation of large-scale structure in the universe is transferred by these shocks to the intergalactic medium (IGM). Cosmological shocks are collisionless shocks which form in a tenuous plasma via collective electromagnetic interactions between baryonic particles and electromagnetic \ufb01elds (Quest 1988). They play key roles in governing the nature of the IGM through the following processes: in addition to the entropy generation, cosmicrays (CRs) are produced via di\ufb00usive shock acceleration (DSA) (Bell 1978; Blandford and Ostriker 1978), magnetic \ufb01elds are generated via the Biermann battery mechanism (Kulsrud et al. 1997) and Weibel instability (Medvedev et al. 2006) and also ampli\ufb01ed by streaming CRs (Bell 2004), and vorticity is generated at curved shocks (Binney 1974). Cosmological shocks in the intergalactic space have been studied in details using various hydrodynamic simulations for the cold dark matter cosmology with cosmological constant (\u039bCDM) (Ryu et al. 2003; Pfrommer et al. 2006; Kang et al. 2008). In this contribution, we describe the properties of cosmological shocks and their implications for the intergalactic plasma from a simulation using a PM/Eulerian hydrodynamic cosmology code (Ryu et al. 1993) with the following parameters: \u2126BM = 0.043, \u2126DM = 0.227, and \u2126\u039b = 0.73, h \u2261H0/(100 km/s/Mpc) = 0.7, and \u03c38 = 0.8. A cubic region of comoving size 100 h\u22121 Mpc was simulated with 10243 grid zones for gas and gravity and 5123 particles for dark matter, allowing a uniform spatial resolution of \u2206l = 97.7h\u22121 kpc. The simulation is adiabatic in the sense that it does not include radiative cooling, galaxy/star formation, feedbacks from galaxies/stars, and reionization of the IGM. A temperature \ufb02oor was set to be the temperature of cosmic background radiation. \f2 Fig. 1 Two-dimensional images showing x-ray emissivity (top left), locations of shocks with color-coded shock speed Vs (top right), perpendicular component of vorticity (bottom left), and magnitude of vorticity (bottom right) in the region of (25 h\u22121Mpc)2 around a galaxy cluster at present (z = 0). Color codes Vs from 15 (green) to 1,800 km s\u22121 (red). 
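As a quick arithmetic check on the grid described above (a trivial sketch; the numbers are the ones quoted in the text):

```python
box_hmpc = 100.0      # comoving box size in h^-1 Mpc
ngrid    = 1024       # grid zones per dimension for gas and gravity
dl_hkpc  = box_hmpc / ngrid * 1.0e3
print(f"uniform spatial resolution: {dl_hkpc:.1f} h^-1 kpc")   # ~97.7 h^-1 kpc
# Dropping the h scaling for h = 0.7, this corresponds to about 140 kpc per cell.
print(f"cell size for h = 0.7: {dl_hkpc / 0.7:.0f} kpc")
```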
2 Properties of Cosmological Shocks As a post-processing step, shocks in the simulated volume are identi\ufb01ed by a set of criteria based on the shock jump conditions. Then the locations and properties of the shocks such as shock speed (Vs), Mach number (M), and kinetic energy \ufb02ux (fkin) are calculated. In the top panels of Figure 1, we compare the locations of cosmological shocks with the x-ray emissivity in the region around a cluster of galaxies, both of which are calculated from the simulation data at redshift z = 0. External shocks encompass this complex nonlinear structure and de\ufb01ne the outermost boundaries up to \u223c10 h\u22121 Mpc from the cluster core, far beyond the region observable with x-ray of size \u223c1 h\u22121 Mpc. Internal shocks are found within the region bounded by external shocks. External shocks have high Mach numbers of up to M \u223c103 due to the low temperature of the accreting gas in the void region. Internal shocks, on the other hand, have mainly low Mach numbers of M \u22723, because the gas inside nonlinear structures has been previously heated by shocks and so has high temperature. The frequency of cosmological shocks in the simulated volume is represented by the quantity S, the area of shock surfaces per unit comoving volume, in other words, the reciprocal of the mean comoving distance between shock surfaces. In the top left panel of Figure 2, we show S(Vs) per unit logarithmic shock speed interval at z = 0. We note that the frequency of low speed \fShock Waves in the Large-Scale Structure of the Universe 3 Fig. 2 (Top left) Reciprocal of the mean comoving distance between shock surfaces at z = 0 in units of 1/(h\u22121Mpc). (Top right) Kinetic energy \ufb02ux passing through shock surfaces per unit comoving volume at z = 0 in units of 1040 ergs s\u22121 (h\u22121Mpc)\u22123. (Bottom left) Thermal energy dissipated (dotted line) and CR energy generated (solid line) at shock surfaces, integrated from z = 5 to 0. (Bottom right) Cumulative energy distributions. The energies in the bottom panels are normalized to the total thermal energy at z = 0. All quantities are plotted as a function of shock speed Vs. shocks with Vs < 15 km s\u22121 is overestimated here, since the temperature of the intergalactic medium is unrealistically low in the adiabatic simulation without the cosmological reionization process. Although shocks with Vs \u223ca few \u00d7 10 km s\u22121 are most common, those with speed up to several \u00d7103 km s\u22121 are present at z = 0. The mean comoving distance between shock surfaces is 1/S \u223c3 h\u22121Mpc when averaged over the entire universe, while it is \u223c1 h\u22121Mpc inside the nonlinear structures of clusters, \ufb01laments, and sheets. In order to evaluate the energetics of cosmological shocks, the incident shock kinetic energy \ufb02ux, fkin = (1/2)\u03c11V 3 s , is calculated. Here \u03c11 is the preshock gas density. Then the average kinetic energy \ufb02ux through shock surfaces per unit comoving volume, F, is calculated. The top right panel of Figure 2 shows F(Vs) per unit logarithmic shock speed interval. Energetically the shocks with Vs > 103 km s\u22121, which form in the deepest gravitation potential wells in and around clusters of galaxies, are most important. Those responsible for most of shock energetics are the internal shocks with relatively low Mach number of M \u223c2 \u22124 in the hot IGM, because they form in the high-density gas inside nonlinear structures. On the other hand, external Fig. 
3 Gas thermalization e\ufb03ciency, \u03b4(M), and CR acceleration e\ufb03ciency, \u03b7(M), as a function of Mach number. Symbols are the values estimated from numerical simulations based on a DSA model and dotted and dashed lines are the \ufb01ts. Solid line is for the gas thermalization e\ufb03ciency for shocks without CRs. shocks typically form in accretion \ufb02ows with the lowdensity gas in voids, so the amount of the kinetic energy passed through the external shocks is rather small. 3 Energy Dissipation at Cosmological Shocks In addition to the gas entropy generation, the acceleration of CRs is an integral part of collisionless shocks, in which electromagnetic interactions between plasma and magnetic \ufb01elds provide the necessary viscosities. Suprathermal particles are extracted from the shock-heated thermal particle distribution (Malkov and Drury 2001). With 10\u22124 \u221210\u22123 of the particle \ufb02ux passing through the shocks injected into the CR population, up to \u223c60% of the kinetic energy of strong quasi-parallel shocks can be converted into CR ions and the nonlinear feedback to the underlying \ufb02ow can be substantial (Kang and Jones 2005). At perpendicular shocks, however, the CR injection and acceleration are expected to be much less e\ufb03cient, compared to parallel shocks, since the transport of low energy particles normal to the average \ufb01eld direction is suppressed. So the CR acceleration depends sensitively on the mean magnetic \ufb01eld orientation. Time-dependent simulations of DSA at quasi-parallel shocks with a thermal leakage injection model and a Bohm-type di\ufb00usion coe\ufb03cient have shown that the evolution of CR modi\ufb01ed shocks becomes self-similar, after the particles are accelerated to relativistic energies and the precursor compression reaches a timeasymptotic state (Kang and Jones 2005, 2007). The self-similar evolution of CR modi\ufb01ed shocks depends somewhat weakly on the details of various particle-wave interactions, but it is mainly determined by the shock Mach number. Based on this self-similar evolution, we can estimate the gas thermalization e\ufb03ciency, \u03b4(M), and the CR acceleration e\ufb03ciency, \u03b7(M), as a function \f4 shock Mach number, which represent the fractions of the shock kinetic energy transferred into the thermal and CR energies, respectively. Figure 3 shows the results of such DSA simulations (Kang et al. 2008). From the \ufb01gure, we expect that at weak shocks with M \u22723 the energy transfer to CRs should be \u227210 % of the shock kinetic energy at each shock passage, whereas at strong shocks with M \u22733 the transfer is very e\ufb03cient and the \ufb02ow should be signi\ufb01cantly modi\ufb01ed by the CR pressure. Adopting the e\ufb03ciencies shown in Figure 3, the thermal and CR energy \ufb02uxes dissipated at each cosmological shock can be estimated as \u03b4(M)fkin and \u03b7(M)fkin, respectively. Then the total energies dissipated through the surfaces of cosmological shocks during the largescale structure formation of the universe can be calculated (Ryu et al. 2003; Kang et al. 2008). The bottom panels of Figure 2 show the thermal and CR energies, integrated from from z = 5 to 0, normalized to the total gas thermal energy at z = 0, as a function of shock speed. Here we assume CRs are freshly injected at shocks without a pre-existing population. The shocks with Vs > 103 km s\u22121 are most responsible for the shock dissipation into heat and CRs. 
The \ufb01gure shows that the shock dissipation can count most of the gas thermal energy in the IGM (Kang et al. 2005). The ratio of the total CR energy, YCR(> Vs,min), to the total gas thermal energy, Yth(> Vs,min), dissipated at cosmological shocks throughout the history of the universe is about 0.4 (where Vs,min is the minimum shock speed), giving a rough estimate for the energy density of CR protons relative to that of thermal gas in the IGM as \u03b5CRp \u223c0.4\u03b5therm. Because of uncertainties in the DSA model such as the \ufb01eld obliquity and the injection e\ufb03ciency, however, it is not meant to be an accurate estimate of the CR energy in the IGM. Yet the results imply that the IGM could contain a dynamically signi\ufb01cant CR population. 4 Turbulence Induced by Cosmological Shocks Vorticity can be generated in the IGM either directly at curved cosmological shocks or by the baroclinity of \ufb02ow. The baroclinity is resulted mostly from the entropy variation induced at cosmological shocks. Therefore, the baroclinic vorticity generation also can be attributed to the presence of cosmological shocks. A quantitative estimation of the vorticity in the IGM was made using the data of the simulation described in \u00a71 (Ryu et al. 2008). The bottom panels of Figure 1 show the distribution of the vorticity around a cluster complex. The distribution closely matches that of shocks, as expected. Fig. 4 Volume fraction with given temperature and vorticity magnitude (top left), with given gas density and vorticity magnitude (top right), with given temperature and magnetic \ufb01eld strength (bottom left), and with given gas density and magnetic \ufb01eld strength (bottom right) at present (z = 0). Here, tage is the present age of the universe. The top panels of Figure 4 show the magnitude of the vorticity in the simulated volume as a function of gas temperature and density. Here the vorticity magnitude is given in units of the reciprocal of the age of the universe, 1/tage. There is a clear trend that the vorticity is larger in hotter and denser regions. At the present epoch, the rms vorticity is \u03c9rmstage \u223c10 to 30 in the regions associated with clusters/groups of galaxies (T > 107 K) and \ufb01laments (105 < T < 107 K), whereas it is on the order of unity in sheetlike structures with 104 < T < 105 K and even smaller in voids with T < 104 K. With teddy = 1/\u03c9 interpreted as the local eddy turnover time, \u03c9 \u00d7 tage represents the number of eddy turnovers of vorticity in the age of the universe. It takes a few turnovers for vorticity to decay into smaller eddies and develop into turbulence. So with \u03c9rmstage \u223c 10 \u221230, the \ufb02ows in clusters/groups and \ufb01laments is likely to be in the state of turbulence. On the other hand, with \u03c9rmstage \u22721 the \ufb02ow in sheetlike structures and voids is expected to be mostly non-turbulent. In order to estimate the energy associated with the turbulence in the IGM, the curl component of \ufb02ow motions, \u20d7 vcurl, which satis\ufb01es the relation \u20d7 \u2207\u00d7 \u20d7 vcurl \u2261 \u20d7 \u2207\u00d7 \u20d7 v, is extracted from the velocity \ufb01eld. As vorticity cascades and develops into turbulence, the energy (1/2)\u03c1v2 curl is transferred to turbulent motions, \fShock Waves in the Large-Scale Structure of the Universe 5 so it can be regarded as the turbulence energy, \u03b5turb. As shown in Ryu et al. (2008), \u03b5turb < \u03b5therm in clusters/groups. 
In particular, the mass-averaged value is \u27e8\u03b5turb/\u03b5therm\u27e9mass = 0.1 \u22120.3 in the intracluster medium (ICM), which is in good agreement with the observationally inferred values in cluster core regions (Schuecker et al. 2004). In \ufb01laments and sheets, this ratio is estimated to be 0.5 \u2272\u27e8\u03b5turb/\u03b5therm\u27e9\u22722 and it increases with decreasing temperature. 5 Intergalactic Magnetic Field How have the intergalactic magnetic \ufb01elds (IGMFs) arisen? The general consensus is that there was no viable mechanism to produce strong, coherent magnetic \ufb01elds in the IGM prior to the formation of large-scale structure and galaxies (Kulsrud and Zweibel 2008). However, it is reasonable to assume that weak seed \ufb01elds were created in the early universe. A number of mechanisms, including the Biermann battery mechanism (Kulsrud et al. 1997) and Weibel instability (Medvedev et al. 2006) working at early cosmological shocks, have been suggested (Kulsrud and Zweibel 2008). The turbulence described in \u00a74 then can amplify the seed \ufb01elds in the IGM through the stretching of \ufb01eld lines, a process known as the turbulence dynamo. In this scenario the evolution of the IGMFs should go through three stages: (i) the initial exponential growth stage, when the back-reaction of magnetic \ufb01elds is negligible; (ii) the linear growth stage, when the back-reaction starts to operate; and (iii) the \ufb01nal saturation stage (Cho and Vishniac 2000; Cho et al. 2008). In order to estimate the strength of the IGMFs resulted from the dynamo action of turbulence in the IGM, we model the growth and saturation of magnetic energy as: \u03b5B \u03b5turb = \uf8f1 \uf8f2 \uf8f3 0.04 \u00d7 exp [(t\u2032 \u22124)/0.36] for t\u2032 < 4, (0.36/41) \u00d7 (t\u2032 \u22124) + 0.04 for 4 < t\u2032 < 45, 0.4 for t\u2032 > 45, based on a simulation of incompressible magnetohydrodynamic turbulence (Ryu et al. 2008; Cho et al. 2008). Here t\u2032 = t/teddy is the number of eddy turnovers. This provides a functional \ufb01t for the fraction of the turbulence energy, \u03b5turb, transfered to the magnetic energy, \u03b5B, as a result of the turbulence dynamo. The above formula is convoluted to the data of the simulation described in \u00a71, setting t\u2032 \u2261\u03c9\u00d7tage, and the strength of the IGMFs is calculated as B = (8\u03c0\u03b5B)1/2. The resulting magnetic \ufb01eld strength is presented in the bottom panels of Figure 4. On average the IGMFs are stronger in hotter and denser regions. The strength of the IGMFs is B \u22731\u00b5G inside clusters/groups (the mass-averaged value for T > 107 K), \u223c0.1\u00b5G around clusters/groups (the volume-averaged value for T > 107 K), and \u223c10 nG in \ufb01laments (with 105 < T < 107 K) at present. The IGMFs should be much weaker in sheetlike structures and voids. But as noted above, turbulence has not developed fully in such low density regions, so our model is not adequate to predict the \ufb01eld strength there. We note that in addition to the turbulence dynamo, other processes such as galactic winds driven by supernova explosions and jets from active galactic nuclei can further strengthen the magnetic \ufb01elds to the IGM (for references, see Ryu et al. 
2008) 6" + } + ], + "Hyesung Kang": [ + { + "url": "http://arxiv.org/abs/1901.04173v2", + "title": "Electron Preacceleration in Weak Quasi-perpendicular Shocks in High-beta Intracluster Medium", + "abstract": "Giant radio relics in the outskirts of galaxy clusters are known to be lit up\nby the relativistic electrons produced via diffusive shock acceleration (DSA)\nin shocks with low sonic Mach numbers, $M_{\\rm s}\\lesssim3$. The particle\nacceleration at these collisionless shocks critically depends on the kinetic\nplasma processes that govern the injection to DSA. Here, we study the\npreacceleration of suprathermal electrons in weak, quasi-perpendicular\n($Q_\\perp$) shocks in the hot, high-$\\beta$ ($\\beta = P_{\\rm gas}/P_{\\rm B}$)\nintracluster medium (ICM) through two-dimensional particle-in-cell simulations.\n\\citet{guo2014a,guo2014b} showed that in high-$\\beta$ $Q_\\perp$-shocks, some of\nincoming electrons could be reflected upstream and gain energy via shock drift\nacceleration (SDA). The temperature anisotropy due to the SDA-energized\nelectrons then induces the electron firehose instability (EFI), and oblique\nwaves are generated, leading to a Fermi-like process and multiple cycles of SDA\nin the preshock region. We find that such electron preacceleration is effective\nonly in shocks above a critical Mach number $M_{\\rm ef}^*\\approx2.3$. This\nmeans that in ICM plasmas, $Q_\\perp$-shocks with $M_{\\rm s}\\lesssim2.3$ may not\nefficiently accelerate electrons. We also find that even in $Q_\\perp$-shocks\nwith $M_{\\rm s}\\gtrsim2.3$, electrons may not reach high enough energies to be\ninjected to the full Fermi-I process of DSA, because long-wavelength waves are\nnot developed via the EFI alone. Our results indicate that additional electron\npreaccelerations are required for DSA in ICM shocks, and the presence of fossil\nrelativistic electrons in the shock upstream region may be necessary to explain\nobserved radio relics.", + "authors": "Hyesung Kang, Dongsu Ryu, Ji-Hoon Ha", + "published": "2019-01-14", + "updated": "2019-03-16", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Weak shocks with low sonic Mach numbers, Ms \u22723, form in the hot intracluster medium (ICM) during major merges of galaxy clusters (e.g., Gabici & Blasi 2003; Ryu et al. 2003; Ha et al. 2018a). Radiative signatures of those merger shocks have been detected in X-ray and radio observations (e.g., Markevitch & Vikhlinin 2007; van Weeren et al. 2010; Bruggen et al. 2012; Brunetti & Jones 2014). In the case of the so-called radio relics, the radio emission has been interpreted as the synchrotron radiation from the relativistic electrons accelerated via di\ufb00usive shock acceleration (DSA) in the shocks. Hence, the sonic Mach numbers of relic shocks, Mradio (radio Corresponding author: Hyesung Kang hskang@pusan.ac.kr Mach number), have been inferred from the radio spectral index (e.g., van Weeren et al. 2010, 2016), based on the DSA test-particle power-law energy spectrum (e.g., Bell 1978; Blandford & Ostriker 1978; Drury 1983). In X-ray observations, the sonic Mach numbers, MX (X-ray Mach number), have been estimated for merger-driven shocks, using the discontinuities in temperature or surface brightness (e.g., Markevitch et al. 2002; Markevitch & Vikhlinin 2007). While Mradio and MX are expected to match, Mradio has been estimated to be larger than MX in some radio relics (e.g., Akamatsu & Kawahara 2013). 
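The quoted radio Mach numbers rest on the test-particle DSA relation between the injection spectral index and the sonic Mach number, α_inj = (M² + 3)/(2(M² − 1)). A minimal sketch of the conversion is given below; the α values are illustrative only, chosen so that α_inj ≈ 0.8 reproduces the Mradio ≈ 2.8 case discussed next, and for an integrated spectrum one would first subtract 0.5 from the observed index to obtain α_inj.

```python
import numpy as np

def mach_from_injection_index(alpha_inj):
    """Invert the test-particle DSA relation
    alpha_inj = (M^2 + 3) / (2 (M^2 - 1)) for the sonic Mach number M."""
    return np.sqrt((2.0 * alpha_inj + 3.0) / (2.0 * alpha_inj - 1.0))

def injection_index(mach):
    return (mach**2 + 3.0) / (2.0 * (mach**2 - 1.0))

for alpha in (0.8, 1.0, 1.5):
    print(f"alpha_inj = {alpha:.2f}  ->  M_radio = {mach_from_injection_index(alpha):.2f}")
# A shock as weak as M ~ 1.5 would instead predict a much steeper injection index:
print(f"M = 1.5  ->  alpha_inj = {injection_index(1.5):.2f}")   # ~2.1
```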
In the case of the Toothbrush radio relic in merging cluster 1RXS J060303.3, for instance, van Weeren et al. (2016) estimated that Mradio \u22482.8 while MX \u22481.2\u22121.5. In the socalled reaccelertion model, weak shocks with \u223cMX are presumed to sweep through fossil electrons with powerarXiv:1901.04173v2 [astro-ph.HE] 16 Mar 2019 \f2 Kang et al. law energy spectrum, Nfossil \u221d\u03b3\u2212p (\u03b3 is the Lorentz factor), and then the radio spectra with observed spectral indices, \u03b1sh = (p \u22121)/2, are supposed to be generated (e.g., Kang 2016a,b). This model may explain the discrepancy between Mradio and MX in some cases. However, it may not be realistic to assume the presence of fossil electrons with \ufb02at power-law spectra up to \u03b3 \u223c104 over length scales of 400 \u2212500 kpc, since such high-energy electrons cool with time scales of \u223c100 Myr (Kang et al. 2017). On the other hand, with mock Xray and radio observations of radio relics using simulated clusters, Hong et al. (2015) argued that the surfaces of merger shocks are highly inhomogeneous in terms of Ms (see, also Ha et al. 2018a), and X-ray observations preferentially pick up the parts with lower Ms (higher shock energy \ufb02ux), while radio emissions manifest the parts with higher Ms (higher electron acceleration). As a result, MX could be be smaller than Mradio. However, the true origins of this discrepancy have yet to be understood. For the full description of radio relics, hence, it is necessary to \ufb01rst understand shocks in the ICM. They are collisionless shocks, as in other astrophysical environments (e.g., Brunetti & Jones 2014). The physics of collisionless shocks involves complex kinetic plasma processes well beyond the MHD Rankine-Hugoniot jump condition. DSA, for instance, depends on various shock parameters including the sonic Mach number, Ms, the plasma beta, \u03b2 = Pgas/PB (the ratio of thermal to magnetic pressures), and the obliquity angle between the upstream background magnetic \ufb01eld direction and the shock normal, \u03b8Bn (see Balogh & Truemann 2013). In general, collisionless shocks can be classi\ufb01ed by the obliquity angle as quasi-parallel (Q\u2225, hereafter) shocks with \u03b8Bn \u227245\u25e6and quasi-perpendicular (Q\u22a5, hereafter) shocks with \u03b8Bn \u227345\u25e6. In situ observations of Earth\u2019s bow shock indicate that protons are e\ufb00ectively accelerated at the Q\u2225-portion, while electrons are energized preferentially in the Q\u22a5-con\ufb01guration (e.g., Gosling et al. 1980). In such shocks, one of key processes for DSA is particle injection, which involves the re\ufb02ection of particles at the shock ramp, the excitation of electromagnetic waves/turbulences by the re\ufb02ected particles, and the energization of particles through ensuing wave-particle interactions (e.g., Treumann & Jaroschek 2008; Treumann 2009). Since the thickness of the shock transition zone is of the order of the gyroradius of postshock thermal ions, both ions and electrons need to be preaccelerated to suprathermal momenta greater than a few times the momentum of thermal ions, pth,i, in order to di\ufb00use across the shock transition layer and fully participate in the \ufb01rst-order Fermi (Fermi-I, hereafter) process of DSA (e.g., Kang et al. 2002; Caprioli et al. 2015). Here, pth,i = \u221a2mikBTi2, Ti2 is the postshock ion temperature and kB is the Boltzmann constant. 
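To put rough numbers on this injection condition, here is a short sketch (an illustration, not taken from the paper): the Rankine-Hugoniot temperature jump for a given sonic Mach number sets the postshock ion temperature and hence p_th,i. The factor of 3 used for the threshold is just one choice of "a few", and the real proton mass is used rather than the reduced PIC ion mass.

```python
import numpy as np

K_B   = 1.380649e-16   # erg/K
M_P   = 1.6726e-24     # g
GAMMA = 5.0 / 3.0

def temperature_jump(mach, gamma=GAMMA):
    """Rankine-Hugoniot temperature ratio T2/T1 for a hydrodynamic shock."""
    m2 = mach**2
    return ((2.0 * gamma * m2 - (gamma - 1.0)) * ((gamma - 1.0) * m2 + 2.0)) \
           / ((gamma + 1.0)**2 * m2)

T1 = 1.0e8             # preshock ICM temperature [K]
for mach in (2.0, 3.0):
    T2 = T1 * temperature_jump(mach)
    p_th = np.sqrt(2.0 * M_P * K_B * T2)   # postshock thermal proton momentum
    print(f"Ms = {mach}:  T2 = {T2:.2e} K,  p_th,i = {p_th:.2e} g cm/s,"
          f"  ~3 p_th,i = {3.0 * p_th:.2e} g cm/s")
```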
Hereafter, the subscripts 1 and 2 denote the preshock and postshock quantities, respectively. Kinetic processes in collisionless shocks can be studied through, for instance, particle-in-cell (PIC) and hybrid plasma simulations (e.g., Caprioli & Spitkovsky 2014a; Guo et al. 2014a,b; Park et al. 2015). Previous studies have mostly focused on shocks in \u03b2 \u22721 plasmas, where the A\ufb02v\u00b4 en Mach number MA is about the same as Ms (MA \u2248\u221a\u03b2Ms), investigating shocks in solar wind and the interstellar medium (ISM) (see also Treumann 2009, and references therein). If plasmas have very low\u03b2 (sometimes referred as cold plasmas), even the thermal motions of particles can be neglected . In hot ICM plasmas, on the other hand, \u03b2 \u223c100 (e.g., Ryu et al. 2008; Porter et al. 2015), and shocks have low sonic Mach numbers of Ms \u22723, but relatively high Alfv\u00b4 en Mach numbers up to MA \u224830. In such shocks, kinetic processes are expected to operate di\ufb00erently from low-\u03b2 shocks. Recently, we investigated proton acceleration in weak (Ms \u22482 \u22124) \u201cQ\u2225-shocks\u201d in high-\u03b2 (\u03b2 = 30 \u2212100) ICM plasmas through one-dimensional (1D) and twodimensional (2D) PIC simulations (Ha et al. 2018b, Paper I, hereafter). The main \ufb01ndings can be recapitulated as follows. (1) Q\u2225-shocks with Ms \u22732.3 develop overshoot-undershoot oscillations in their structures and undergo quasi-cyclic reformation, leading to a signi\ufb01cant amount of incoming protons being re\ufb02ected at the shock. The backstreaming ions excite resonant and nonresonant waves in the foreshock region, leading to the generation of suprathermal protons that can be injected to the Fermi-I process. (2) Q\u2225-shocks with Ms \u22722.3, on the other hand, have relatively smooth and steady structures. The development of suprathermal population is negligible in these shocks. (3) In Q\u22a5-shocks, a substantial fraction of incoming ions are re\ufb02ected and gain energy via shock drift acceleration (SDA), but the energized ions advect downstream along with the background magnetic \ufb01eld after about one gyromotion without being injected to the Fermi-I acceleration. (4) For the description of shock dynamics and particle acceleration in high-\u03b2 plasmas, the sonic Mach number is the more relevant parameter than the Alfv\u00b4 en Mach number, since the re\ufb02ection of particles is mostly controlled by Ms. As a sequal, in this work, we explore the electron preacceleration in low Mach number, Q\u22a5-shocks in high\u03b2 ICM plasmas. Such shocks are thought be the agents of radio relics in merging clusters. Previously, \fElectron Preacceleration in Weak ICM Shocks 3 the pre-energization of thermal electrons at collisionless shocks (i.e., the injection problem), which involves kinetic processes such as the excitation of waves via microinstabilities and wave-particle interactions, was studied through PIC simulations (e.g., Amano & Hoshino 2009; Riquelme & Spitkovsky 2011; Matsukiyo et al. 2011; Guo et al. 2014a,b; Park et al. 2015, and references therein). For instance, Amano & Hoshino (2009) showed that in high-MA Q\u22a5-shocks with \u03b2 \u223c1, strong electrostatic waves are excited by Buneman instability and con\ufb01ne electrons in the shock foot region, where electrons gain energy by drifting along the motional electric \ufb01eld (shock sur\ufb01ng acceleration, SSA). 
On the other hand, Riquelme & Spitkovsky (2011) found that in Q\u22a5shocks with MA \u2272(mi/me)1/2 (where mi/me is the ion-to-electron mass ratio) and \u03b2 \u223c1, the growth of oblique whistler waves in the shock foot by modi\ufb01ed twostream instabilities (MTSIs) may play important roles in con\ufb01ning and pre-energizing electrons. Matsukiyo et al. (2011) showed through 1D PIC simulations that in weak shocks with the fast Mach number, Mf \u22482 \u22123, and \u03b2 \u22483, a fraction of incoming electrons are accelerated and re\ufb02ected through SDA and form a suprathermal population. However, this \ufb01nding was refuted later by Matsukiyo & Matsumoto (2015) who showed through 2D PIC simulations of shocks with similar parameters that electron re\ufb02ection is suppressed due to shock surface ripping. Guo et al. (2014a,b, GSN14a and GSN14b, hereafter) carried out comprehensive studies for electron preacceleration and injection in Ms = 3, Q\u22a5-shocks in plasmas with \u03b2 = 6 \u2212200 using 2D PIC simulations. In particular, GSN14a presented the \u201crelativistic SDA theory\u201d for oblique shocks, which can be brie\ufb02y summarized as follows. The incoming electrons that satisfy criteria (i.e., with pitch angles larger than the loss-cone angle) are re\ufb02ected and gain energy through SDA at the shock ramp. The energized electrons backstream along the background magnetic \ufb01eld lines with small pitch angles, generating the temperature anisotropy of Te\u2225> Te\u22a5. GSN14b then showed that the \u201celectron \ufb01rehose instability\u201d (EFI) is induced by the temperature anisotropy, and oblique waves are excited (Gary & Nishimura 2003). The electrons are scattered back and forth between magnetic mirrors at the shock ramp and self-generated upstream waves (a Fermi-I type process), being further accelerated mostly through SDA. At this stage, the electrons are still suprathermal and do not have su\ufb03cient energies to di\ufb00use downstream of the shock; instead, they stay upstream of the shock ramp. The authors named this process as a \u201cFermilike process\u201d, as opposed to the full, bona \ufb01de Fermi-I process. GSN14a also pointed out that SSA does not operate in weak ICM shocks because of the suppression of Buneman instability in hot plasmas, and that in high\u03b2 shocks the preacceleration via SDA dominates over the energization through interactions with the oblique whistler waves generated via MTSIs in the shock foot. For electron preacceleration in weak ICM shocks, however, there are still issues to be further addressed. Most of all, there should be a critical Mach number, below which the preacceleration is not e\ufb03cient. Even though electrons are pre-energized at shocks with Ms \u22483, as shown in GSN14a and GSN14b, it is not clear whether they could be further accelerated by the full Fermi-I process of DSA. We will investigate these issues using 2D PIC simulations in this paper. The paper is organized as follows. Section 2 includes the descriptions of simulations, along with the de\ufb01nitions of various parameters involved. In Section 3, we give a brief review on the background physics of Q\u22a5shocks, in order to facilitate the understandings of our simulation results in the following section. Next, in Section 4, we present shock structures and electron preacceleration in simulations, and examine the dependence of our \ufb01ndings on various shock parameters. A brief summary is given in Section 5. 2. 
NUMERICS Simulations were performed using TRISTAN-MP, a parallelized electromagnetic PIC code (Buneman 1993; Spitkovsky 2005). The geometry is 2D planar, while all the three components of particle velocity and electromagnetic \ufb01elds are followed. The details of simulation setups can be found in Paper I, and below some basic de\ufb01nitions of parameters and features are described in order to make this paper self-contained. In Paper I, we used the variable v, for example, v0 and vsh, to represents \ufb02ow velocities. We here, however, use the variable u for \u201c\ufb02ow\u201d velocities, while v is reserved for \u201cparticle\u201d velocities. Plasmas, which are composed of ions and electrons of Maxwellian distributions, move with the bulk velocity u0 = \u2212u0\u02c6 x toward a re\ufb02ecting wall at the leftmost boundary (x = 0), and a shock forms and propagates toward the +\u02c6 x direction. Hence, simulations are performed in the rest frame of the shock downstream \ufb02ow. For the given preshock ion temperature, Ti, the \ufb02ow Mach number, M0, is related to the upstream bulk velocity as M0 \u2261u0 cs1 = u0 p 2\u0393kBTi1/mi , (1) where cs1 is the sound speed in the upstream medium and \u0393 = 5/3 is the adiabatic index. Thermal equilib\f4 Kang et al. Table 1. Model Parameters of Simulations Model Name Ms MA u0/c \u03b8Bn \u03b2 Te1 = Ti1[K(keV)] mi/me Lx[c/wpe] Ly[c/wpe] \u2206x[c/wpe] tend[w\u22121 pe ] tend[\u2126\u22121 ci ] M2.0 2.0 18.2 0.027 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.15 2.15 19.6 0.0297 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.3 2.3 21 0.0325 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.5 2.5 22.9 0.035 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.75 2.75 25.1 0.041 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M3.0 3.0 27.4 0.047 63\u25e6 100 108(8.6) 100 1.2 \u00d7 104 80 0.1 2.26 \u00d7 105 60 M2.15-\u03b853 2.15 19.6 0.0297 53\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.15-\u03b873 2.15 19.6 0.0297 73\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.3-\u03b853 2.3 21 0.0325 53\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.3-\u03b873 2.3 21 0.0325 73\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.1 1.13 \u00d7 105 30 M2.0-\u03b250 2.0 12.9 0.027 63\u25e6 50 108(8.6) 100 7 \u00d7 103 80 0.1 8.0 \u00d7 104 30 M2.3-\u03b250 2.3 14.8 0.0325 63\u25e6 50 108(8.6) 100 7 \u00d7 103 80 0.1 8.0 \u00d7 104 30 M3.0-\u03b250 3.0 19.4 0.047 63\u25e6 50 108(8.6) 100 7 \u00d7 103 80 0.1 8.0 \u00d7 104 30 M2.0-m400 2.0 18.2 0.013 63\u25e6 100 108(8.6) 400 7 \u00d7 103 80 0.1 1.5 \u00d7 105 10 M2.3-m400 2.3 21 0.016 63\u25e6 100 108(8.6) 400 7 \u00d7 103 80 0.1 1.5 \u00d7 105 10 M3.0-m400 3.0 27.4 0.023 63\u25e6 100 108(8.6) 400 7 \u00d7 103 80 0.1 1.5 \u00d7 105 10 M2.3-r2 2.3 21 0.0325 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.05 3.8 \u00d7 104 10 M2.3-r0.5 2.3 21 0.0325 63\u25e6 100 108(8.6) 100 7 \u00d7 103 80 0.2 3.8 \u00d7 104 10 rium is assumed for incoming plasmas, i.e., Ti1 = Te1, where Te1 is the preshock electron temperature. In typical PIC simulations, because of severe requirements for computational resources, reduced ion-to-electron mass ratios, mi/me < 1836, are assumed. Here, we consider the mass ratio of mi/me = 100 and 400; electrons have the rest mass of me = 511 keV/c2, while \u201cions\u201d have reduced masses emulating the proton population. 
In the limit of high β, the upstream flow speed in the shock rest frame can be expressed as ush ≈ u0 · r/(r − 1), where r = (Γ + 1)/(Γ − 1 + 2/Ms^2) is the shock compression ratio, and the sonic Mach number, Ms, of the induced shock is given as

$M_s \equiv u_{sh}/c_{s1} \approx M_0\, r/(r-1)$.  (2)

The magnetic field carried by the incoming plasmas, B0, lies in the x-y plane, and the angle between B0 and the shock normal direction is the obliquity angle θBn, as defined in the Introduction. The initial electric field in the flow frame is zero everywhere, but the motional electric field, E0 = −u0/c × B0, is induced along the +ẑ direction, where c is the speed of light. The strength of B0 is parameterized by β as

$\beta = 8\pi n k_B (T_{i1}+T_{e1})/B_0^2 = (2/\Gamma)\, M_A^2/M_s^2$,  (3)

where MA ≡ ush/uA is the Alfvén Mach number of the shock. Here, $u_A = B_0/\sqrt{4\pi n m_i}$ is the Alfvén speed, and n = ni = ne is the number density of the incoming ions and electrons. We consider β = 50 and 100, along with kB T1 = kB Ti1 = kB Te1 = 0.0168 me c^2 = 8.6 keV (or Ti1 = Te1 = 10^8 K), relevant for typical ICM plasmas (Ryu et al. 2008; Porter et al. 2015). The fast Mach number of MHD shocks is defined as Mf ≡ ush/uf, where the fast wave speed is given by $u_f^2 = \{(c_{s1}^2 + u_A^2) + [(c_{s1}^2 + u_A^2)^2 - 4 c_{s1}^2 u_A^2 \cos^2\theta_{Bn}]^{1/2}\}/2$. In the limit of high β (i.e., cs1 ≫ uA), Mf ≈ Ms.

The model parameters of our simulations are summarized in Table 1. We adopt β = 100, θBn = 63°, and mi/me = 100 as the fiducial values of the parameters. The incident flow velocity, u0, is specified to induce shocks with Ms ≈ 2−3, which are characteristic of cluster merger shocks (e.g., Ha et al. 2018a), as noted in the Introduction. Models with different Ms are named with the combination of the letter 'M' and the sonic Mach number (for example, the M2.0 model has Ms = 2.0). Models with parameters different from the fiducial values have names appended by a character for the specific parameter and its value. For example, the M2.3-θ73 model has θBn = 73°, while the M2.3-m400 model has mi/me = 400.

Simulations are presented in units of the plasma skin depth, c/wpe, and the electron plasma oscillation period, w_pe^{-1}, where $w_{pe} = \sqrt{4\pi e^2 n/m_e}$ is the electron plasma frequency. The Lx and Ly columns of Table 1 denote the x- and y-sizes of the computational domain. Except for the M3.0 model (see below), the longitudinal and transverse lengths are Lx = 7 × 10^3 c/wpe and Ly = 80 c/wpe, respectively, which are represented by a grid of cells with size Δx = Δy = 0.1 c/wpe. The last two columns show the end times of the simulations in units of w_pe^{-1} and the ion gyration period, Ω_ci^{-1}, where Ωci = eB0/mi c is the ion gyrofrequency. The ratio of the two periods scales as wpe/Ωci ∝ (mi/me)√β. For most models, simulations run up to tend wpe ≈ 1.13 × 10^5, which corresponds to tend Ωci ≈ 30 for β = 100 and mi/me = 100. The M3.0 model runs twice as long, up to tend wpe ≈ 2.26 × 10^5 or tend Ωci ≈ 60, and correspondingly has a longer longitudinal dimension of Lx = 1.2 × 10^4 c/wpe.
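Before the comparison models are described further, the relations (1)-(3) can be checked directly against the entries of Table 1. The following Python sketch is illustrative only (our own, not part of the paper); it recovers the tabulated u0/c and MA values from Ms, β, and mi/me.

```python
import numpy as np

# Cross-check of Equations (2)-(3) against Table 1 (our sketch, not the paper's code).
GAMMA, ME_C2, KB_T1 = 5.0/3.0, 511.0, 8.6            # keV units, as above

def table_row(Ms, beta, mass_ratio):
    r   = (GAMMA + 1.0) / (GAMMA - 1.0 + 2.0/Ms**2)  # shock compression ratio
    cs1 = np.sqrt(2.0*GAMMA*KB_T1/(mass_ratio*ME_C2))  # c_s1/c
    ush = Ms * cs1                                   # shock speed / c
    u0  = ush * (r - 1.0)/r                          # Eq. (2) inverted
    MA  = Ms * np.sqrt(GAMMA*beta/2.0)               # from Eq. (3)
    return u0, MA

for name, Ms, beta, mr in [("M2.0", 2.0, 100, 100), ("M3.0", 3.0, 100, 100),
                           ("M2.3-b50", 2.3, 50, 100), ("M3.0-m400", 3.0, 100, 400)]:
    u0, MA = table_row(Ms, beta, mr)
    print(f"{name:10s}  u0/c={u0:.4f}  MA={MA:.1f}")
# Reproduces, e.g., u0/c ~ 0.027 and MA ~ 18.2 for M2.0, and MA ~ 27.4 for M3.0.
```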
Comparison models with smaller \u03b2, M2.0-\u03b250, M2.30-\u03b250, and M3.0\u03b250, also go up to tend\u2126ci \u224830 (tendwpe \u22488.0 \u00d7 104). The models with mi/me = 400, on the other hand, are calculated only up to tend\u2126ci \u224810 (tendwpe \u22481.5\u00d7105). Models with di\ufb00erent \u2206x/(c/wpe), M2.3-r2 and M2.3r0.5, are also considered to inspect the e\ufb00ects of spatial resolution. In each cell, 32 particles (16 per species) are placed. The time step is \u2206t = 0.045[w\u22121 pe ]. Compared to the reference model reported by GSN14a and GSN14b, our \ufb01ducial models have higher \u03b2 (100 versus 20) and lower Ti1 = Te1 (108 K versus 109 K). As a result, our simulations run for a longer time, for instance, \u03c9petend \u22481.13 \u00d7 105 to reach tend\u2126ci \u224830. And our shocks are less relativistic. More importantly, this work also includes weaker shock models with Ms < 3.0, while GSN14a and GSN14b considered only shocks with Ms = 3.0. 3. PHYSICS OF Q\u22a5-SHOCKS 3.1. Critical Mach Numbers The structures and time variations of collisionless shocks are primarily governed by the dynamics of re\ufb02ected ions and the waves excited by the relative drift between re\ufb02ected and incoming ions. In theories of collisionless shocks, hence, a number of critical shock Mach numbers have been introduced to describe ion re\ufb02ection and upstream wave generation (see Balogh & Truemann 2013, for a review). Although the main focus of this paper is the electron acceleration at Q\u22a5-shocks, we here present a brief review on the \u201cshock criticalities\u201d due to re\ufb02ected ions. The re\ufb02ection of ions has been often linked to the \u201c\ufb01rst critical Mach number\u201d, M \u2217 f (\u03b2, \u03b8Bn); it was found for un2 = cs2 by applying the RankineHugoniot jump relation to fast MHD shocks, i.e., the condition that the downstream \ufb02ow speed normal to the shock surface equals the downstream sound speed (e.g., Edmiston & Kennel 1984). In supercritical shocks with Mf > M \u2217 f , the shock kinetic energy can not be dissipated enough through resistivity and wave dispersion, and hence a substantial fraction of incoming ions should be re\ufb02ected upstream in order to sustain the shock transition from the upstream to the downstream. In subcritcal shocks below M \u2217 f , on the other hand, the resistivity alone can provide enough dissipation to support a stable shock structure. In collisionless shocks, however, the re\ufb02ection of ions occurs at the shock ramp due to the magnetic de\ufb02ection and the cross shock potential drop, the physics beyond the \ufb02uid description. Hence, it should be investigated with simulations resolving kinetic processes. In Q\u2225-shocks, the \ufb01rst critical Mach number also denotes the minimum Mach number, above which kinetic processes trigger overshoot-undershoot oscillations in the density and magnetic \ufb01eld, and the shock structures may become non-stationary under certain conditions. The re\ufb02ection of ions is mostly due to the deceleration by the shock potential drop, and resonant and nonresonant waves are excited via streaming instabilities induced by re\ufb02ected ions (e.g., Caprioli & Spitkovsky 2014a,b). Such processes depend on shock parameters. For instance, in shocks with higher Ms and hence higher shock kinetic energies, the structures tend to more easily \ufb02uctuate and become unsteady. 
In high-\u03b2 plasmas, on the other hand, shocks could be stabilized against certain instabilities owing to fast thermal motions, which can subdue the relative drift between re\ufb02ected and incoming particles; thus, theoretical analyses based on the cold plasma assumption could be modi\ufb01ed in high-\u03b2 plasmas. In Paper I, we found that M \u2217 f \u22482.3 for Q\u2225shocks in ICM plasmas with \u03b2 \u2248100, which is higher than the \ufb02uid prediction by Edmiston & Kennel (1984). In Q\u2225-shocks, the kinetic processes involved in determining M \u2217 f are also parts of the preacceleration of ions and hence the injection to the Fermi-I process of DSA. In Q\u22a5-shocks, both ions and electrons are re\ufb02ected through the magnetic de\ufb02ection; the two populations are subject to deceleration by the magnetic mirror force due to converged magnetic \ufb01eld lines at the shock transition. In addition, the shock potential drop decelerates incident ions while it accelerates electrons toward the downstream direction. The re\ufb02ected particles gain energy through the gradient drift along the motional electric \ufb01eld at the shock surface (SDA). Most of re\ufb02ected ions, however, are trapped mostly at the shock foot before they advect downstream with the background magnetic \ufb01eld after about one gyromotion. As a result, streaming instabilities are not induced in the upstream, and hence the ensuing CR proton acceleration is ine\ufb00ective, as previously reported with hybrid and PIC simulations (e.g., Caprioli & Spitkovsky 2014a,b, and Paper I). However, still the dynamics of re\ufb02ected ions is primarily responsible for the main features of the transition zone of Q\u22a5-shocks (e.g., Treumann & Jaroschek 2008; Treumann 2009). For instance, the current due to the drift motion of re\ufb02ected ions generates the magnetic foot, ramp, and overshoot. And the charge separation due to re\ufb02ected ions generates the ambipolar electric shock potential drop at the shock ramp. \f6 Kang et al. In Q\u22a5-shocks, the accumulation of re\ufb02ected ions at the upstream edge of the foot may lead to the cyclic self-reformation of shock structures over ion gyroperiods and result in the excitation of low-frequency whistler waves in the shock foot region (e.g., Matsukiyo & Scholer 2006; Scholer & Burgess 2007). This leads to the so-called \u201csecond or whistler critical Mach number\u201d, M \u2217 w \u2248(1/2) p mi/me cos \u03b8Bn in the \u03b2 \u226a1 limit (Kennel et al. 1985; Krasnoselskikh et al. 2002). In subcritical shocks with Mf < M \u2217 w, linear whistler waves can phasestand in the shock foot upstream of the ramp. Dispersive whistler waves were found far upstream in interplanetary, subcritical shocks (e.g., Oka et al. 2006). Those waves interact with the upstream \ufb02ow and contribute to the energy dissipation, e\ufb00ectively suppressing the shock reformation. Above M \u2217 w, stationary linear wave trains cannot stand in the region ahead of the shock ramp. The \u201cthird or nonlinear whistler critical Mach number\u201d, M \u2217 nw \u2248 p mi/2me cos \u03b8Bn in the \u03b2 \u226a1 limit, was introduced to describe the non-stationarity of shock structures. Krasnoselskikh et al. (2002) predicted that in supercritical shocks with Mf > M \u2217 nw, nonlinear whistler waves turn over because of the gradient catastrophe, leading to the non-stationarity of the shock front and quasi-periodic shock-reformation (see Scholer & Burgess 2007). However, Hellinger et al. 
(2007) showed through 2D hybrid and PIC simulations that phase-standing oblique whistlers can be emitted in the foot even in supercritical Q\u22a5-shocks, so the shockreformation is suppressed, in 2D. In fact, the nonstationarity and self-reformation of shock structures are an important long-standing problem in the study of collisionless shocks, which has yet to be fully understood (e.g., Scholer et al. 2003; Lembe\u00b4 ge et al. 2004; Matsukiyo & Scholer 2006). In the \u03b2 \u226a1 limit (i.e., in cold plasmas), M \u2217 w = 2.3 and M \u2217 nw = 3.2 for the \ufb01ducial parameter values adopted for our PIC simulations (mi/me = 100 and \u03b8Bn = 63\u25e6). Hence, in some of the models considered here, M \u2217 w < Ms < M \u2217 nw, so the whistler waves induced by re\ufb02ected ions could be be con\ufb01ned within the shock foot without overturning. However, these critical Mach numbers increase to M \u2217 w = 9.7 and M \u2217 nw = 13.8 for the true ratio of mi/me = 1836. So in the ICM, weak Q\u22a5-shocks are expected to be subcritical with respect to the two whistler critical Mach numbers, and so they would not be subject to self-reformation. The con\ufb01rmation of these critical Mach numbers, or improved estimations for \u03b2 \u22731, through numerical simulations is very challenging, as noted above. The excitation of oblique whistler waves and the suppression of shock-reformation via surface ripping require at least 2D simulations (e.g., Lembe\u00b4 ge & Savoini 2002; Burgess 2006). The additional degree of freedom in higher dimensional simulations tends to stabilize some instabilities revealed in lower dimensional simulations. Moreover, simulation results are often dependent on mi/me, and the magnetic \ufb01eld con\ufb01guration, i.e., whether B0 is in-plane or o\ufb00-plane (Lembe\u00b4 ge et al. 2009). And adopting the realistic ratio of mi/me in PIC simulations is computationally very expensive, as pointed in the previous section. As mentioned in the Introduction, GSN14a and GSN14b showed that in high-\u03b2, Q\u22a5-shocks, electrons can be preaccelerated via multiple cycles of SDA due to the scattering by the upstream waves excited via the EFI. We here additionally introduce the \u201cEFI critical Mach number\u201d, M \u2217 ef, above which the electron preacceleration is e\ufb00ective. We seek it in the next section along with the relevant kinetic processes involved. The space physics and ISM communities have been mainly interested in shocks in low-\u03b2 plasmas (\u03b2 \u22721), and hence the analytic relations simpli\ufb01ed for cold plasmas are often quoted (e.g., the dispersion relation for fast magnetosonic waves used by Krasnoselskikh et al. (2002)). In such works, MA is commonly used to characterize shocks. However, in hot ICM plasmas, shocks have Ms \u2248Mf \u226aMA, and magnetic \ufb01elds play dynamically less important roles. Moreover, the ion re\ufb02ection at the shock ramp is governed mainly by Ms rather than MA (e.g., Paper I). Thus, in the rest of this paper, we will use the sonic Mach number Ms to characterize shocks. 3.2. Energization of Electrons As mentioned in the Introduction, GSN14a discussed the relativistic SDA theory for electrons in Q\u22a5-shocks, which involves the electron re\ufb02ection at the shock and the energy gain due to the drift along the motional electric \ufb01eld. In a subsequent paper, GSN14b showed that the electrons can induce the EFI, which leads to the excitation of oblique waves. 
The electrons return back to the shock due to the scattering by those self-excited upstream waves and are further accelerated through multiple cycles of SDA (Fermi-like process). Below, we follow these previous papers to discuss how the physical processes depend on the parameters such as Ms, \u03b8Bn, and T1 for the shocks considered here (Table 1). Inevitably, we cite below some of equations presented in GSN14a and GSN14b. 3.2.1. Shock Drift Acceleration GSN14a derived the criteria for electron re\ufb02ection by considering the dynamics of electrons in the so-called de Ho\ufb00mannTeller (HT, hereafter) frame, in which the \ufb02ow velocity is parallel to the background magnetic \ufb01eld \fElectron Preacceleration in Weak ICM Shocks 7 -1.0 -0.5 0.0 0.5 1.0 0.0 0.5 1.0 1.5 20 40 60 0 20 40 60 80 2.0 2.2 2.4 2.6 2.8 3.0 0.0 0.1 0.2 0.3 || / v c (d) (c) M3.0 M2.0 v c \u22a5 \u0001 \u2206 2.0 2.2 2.4 2.6 2.8 3.0 10 15 20 s M s M [%] R \u0001 \u00d7 \u2206 [%] R 1 2 3 M2.0 M2.15 M2.3 M2.5 M2.75 M3.0 Bn \u0002 (b) (a) I Figure 1. (a) Velocity diagram to analyze the electron re\ufb02ection in weak ICM shocks; v\u2225and v\u22a5are the electron velocity components, parallel and perpendicular to the background magnetic \ufb01eld, respectively, in the upstream rest frame. The black solid half-circle shows v = c, while the black dashed half-circle shows v = vth,e. The red (for the M2.0 model with Ms = 2 and \u03b8Bn = 63\u25e6) and blue (for the M3.0 model with Ms = 3 and \u03b8Bn = 63\u25e6) vertical lines draw the re\ufb02ection condition for v\u2225 in Equation (4), while the red and blue solid curves left to the vertical lines draw the re\ufb02ection condition for v\u22a5in Equation (5). The red and blue dashed curves right to the vertical lines draw the post-re\ufb02ection velocity given in Equations (25)-(26) of GSN14a with the boundary values for the pre-re\ufb02ection velocity given in Equations (4)-(5) of this paper. Electrons located in the region bounded by the colored vertical and solid lines are re\ufb02ected to the region right to the vertical lines bounded by the dashed lines. (b) The fraction of re\ufb02ected electrons, R in percentage (black), and the average energy gain via a single SDA, \u27e8\u2206\u03b3\u27e9in units of mec2/kBT (red), as a function of Ms. The solid lines are for Q\u22a5-shocks with \u03b8Bn = 63\u25e6, while the dashed lines are for Q\u2225-shocks with \u03b8Bn = 13\u25e6. (c) R \u00b7 \u27e8\u2206\u03b3\u27e9as a function of \u03b8Bn for di\ufb00erent Ms. (d) The EFI parameter, I, in Equation (7) as a function of Ms for models with \u03b8Bn = 63\u25e6(black circles). The red squares are for models with \u03b8Bn = 73\u25e6, while the blue triangles are for \u03b8Bn = 53\u25e6. The instability condition is I > 0. and hence the motional electric \ufb01eld disappears both upstream and downstream of the shock (de Ho\ufb00mann & Teller 1950). In the HT frame, the upstream \ufb02ow has ut = ush sec \u03b8Bn along the background magnetic \ufb01eld. Hereafter, v\u2225and v\u22a5represent the velocity components of incoming electrons, parallel and perpendicular to the background magnetic \ufb01eld, respectively, in the upstream rest frame, and \u03b3t \u2261(1\u2212u2 t/c2)\u22121/2 is the Lorentz factor of the upstream \ufb02ow in the HT frame. 
The re\ufb02ection criteria can be written as v\u2225< ut (4) (Equation (19) of GSN14a), and v\u22a5\u2273\u03b3t tan \u03b10 \u00b7 \u0002 (v\u2225\u2212ut)2 + 2c2 cos2 \u03b10\u2206\u03c6 \u00b7 G \u00b7 F + {c2 cos2 \u03b10 \u00b7 G \u2212(v\u2225\u2212ut)2}\u2206\u03c62\u00031/2, (5) assuming that the normalized cross-shock potential drop is \u2206\u03c6(x) \u2261e[\u03c6HT(x) \u2212\u03c6HT 0 ]/mec2 \u226a1. Here, G \u2261 (1 \u2212v\u2225ut/c2)2, F \u2261[1 \u2212(v\u2225\u2212ut)2/(Gc2 cos2 \u03b10)]1/2, and \u03b10 \u2261sin\u22121(1/ \u221a b) with the magnetic compression ratio b \u2261B(x)HT/BHT 0 . The superscript HT denotes the quantities in the HT frame. Note that for \u2206\u03c6(x) = \f8 Kang et al. 0, Equation (5) becomes the same as Equation (20) of GSN14a. In Figure 1(a), the red and blue solid lines mark the boundaries of the re\ufb02ection criteria in Equations (4) and (5) for the M2.0 and M3.0 models, respectively. The red and blue dashed curves right to the vertical lines are the post-re\ufb02ection velocities calculated with Equations (25) and (26) of GSN14a by inserting the boundary values of Equations (4) and (5). For b and \u2206\u03c6, the values estimated at the shock surface from simulation data were used. The solid black half-circle shows v \u2261(v2 \u2225+ v2 \u22a5)1/2 = c, while the dashed black half-circle shows v = vth,e, where vth,e = p 2kBTe1/me is the electron thermal speed of the incoming \ufb02ow. As in GSN14b, we estimated semi-analytically the amount of the incoming electrons that satisfy the re\ufb02ection condition, that is, those bounded by the colored solid curves and the colored vertical lines together with the black circle in Figure 1(a). In Figure 1(b), the fraction of the re\ufb02ected electrons, R, estimated for Q\u22a5-shocks with \u03b8Bn = 63\u25e6is shown by the black \ufb01lled circles connected with the black solid line, while R for Q\u2225-shocks with \u03b8Bn = 13\u25e6is shown by the black \ufb01lled circles connected with the black dashed line. In Q\u22a5shocks, the re\ufb02ection fraction, R, is quite high and increases with Ms, ranging \u223c20 \u221225 % for 2 \u2264Ms \u22643. In Q\u2225-shocks, R is also high, ranging \u223c17 \u221220 %, for 2.15 \u2264Ms \u22643, but drops sharply at Ms = 2. We point out that the electron re\ufb02ection becomes ine\ufb00ective for superluminal shocks with large obliquity angles (i.e., ush/ cos \u03b8Bn \u2265c), since the electrons streaming upstream along the background \ufb01eld cannot outrun the shocks (see GSN14b). The obliquity angle for the superluminal behavior is \u03b8sl \u2261arccos(ush/c) = 86\u25e6for the shock in M3.0 with mi/me = 100 and T1 = 108 K, and it is larger for smaller Ms. This angle is larger than \u03b8Bn of our models in Table 1, and hence all the shocks considered here are subluminal. For given T1 and Ms, the re\ufb02ection of electrons is basically determined by b(x) and \u2206\u03c6(x), which quantify the magnetic de\ufb02ection and the acceleration at the shock potential drop. Both b(x) and \u2206\u03c6(x) increase with increasing \u03b8Bn. Larger b enhances the electron re\ufb02ection (positive e\ufb00ect), while larger \u2206\u03c6(x) suppresses it (negative e\ufb00ect). In GSN14b, shocks are semi-relativistic with \u2206\u03c6 \u223c0.1 \u22120.5, and hence the negative e\ufb00ect of the potential drop is substantial. 
However, in our models, shocks are less relativistic because of the lower temperature adopted, and \u2206\u03c6 \u223cmiu2 sh/2mec2 \u226a1. As a result, the magnetic de\ufb02ection dominates over the acceleration by the cross-shock potential, leading to higher R at higher \u03b8Bn. GSN14b showed that SDA becomes ine\ufb03cient for ut \u2273vth,e (cos \u03b8Bn \u2272cos \u03b8limit = Ms p me/mi), which is more stringent than the superluminal condition (ut > c). So the electron re\ufb02ection fraction begins to decrease for \u03b8Bn \u227360\u25e6in their models. Although not shown here, in our models, R monotonically increases with the obliquity angle for a given Ms, because the adopted \u03b8Bn (\u226473\u25e6) is smaller than the limiting obliquity angle, \u03b8limit, for Ms = 2 \u22123 and mi/me = 100. The re\ufb02ected electrons gain the energy via SDA. We estimated the energy gain from a single SDA cycle as \u2206\u03b3 \u2261\u03b3r \u2212\u03b3i = 2ut(ut \u2212v\u2225) c2 \u2212u2 t \u03b3i, (6) where \u03b3i and \u03b3r are the Lorentz factors for the prere\ufb02ection and post-re\ufb02ection electron velocities, respectively (Equation (24) of GSN14a). For given T1 (or given cs1), ut and \u2206\u03b3 depend on Ms and \u03b8Bn. For the shocks considered here, \u03b3i \u22481 and ut \u226ac, so \u2206\u03b3 \u22482[(ut/c)2 \u2212utv\u2225/c2]. In Figure 1(b), the red \ufb01lled circles connected with the red solid line show the average energy gain, \u27e8\u2206\u03b3\u27e9in units of mec2/kBTe, estimated for Q\u22a5-shocks with \u03b8Bn = 63\u25e6. The red \ufb01lled circles with the red dashed line show the quantity for Q\u2225-shocks with \u03b8Bn = 13\u25e6. Here, the average was taken over the incoming electrons of Maxwellian distributions, so \u27e8\u2206\u03b3\u27e9 shown are the representative values during the initial development stage of suprathermal particles. In addition, the product of R and \u27e8\u2206\u03b3\u27e9is plotted as a function of \u03b8Bn for di\ufb00erent Ms in Figure 1(c). For the models in Table 1, R was calculated using b and \u2206\u03c6 estimated at the shock surface from simulation data, as mentioned above. For the rest, the values of b and \u2206\u03c6 for the models with \u03b8Bn = 13\u25e6presented in Paper I were adopted for Q\u2225-shocks, while the values for the models with \u03b8Bn = 63\u25e6presented in this work were adopted for Q\u22a5-shocks. Figures 1(b)-(c) show that more electrons are re\ufb02ected and higher energies are achieved at higher Ms and larger \u03b8Bn. 3.2.2. Electron Firehose instability GSN14b performed periodic-box simulations with beams of streaming electrons in order to isolate and study the EFI due to the re\ufb02ected and SDAenergized electrons. They found the followings: nonpropagating (\u03c9r \u22480), oblique waves with wavelengths \u223c(10\u221220)c/wpe are excited dominantly, \u03b4Bz is stronger than \u03b4Bx and \u03b4By (the initial magnetic \ufb01eld is in the x-y plane), and both the growth rate and the dominant wavelength of the instability are not sensitive to the mass ratio mi/me. These results are consistent with the expectations from the previous investigations of oblique EFI (e.g., Gary & Nishimura 2003). 
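To make the reflection criteria and the SDA energy gain more concrete, the sketch below evaluates Equations (4)-(6) with a simple Monte-Carlo draw of upstream electrons. It is not the analysis used in the paper: the cross-shock potential drop is set to Δφ ≈ 0, the magnetic compression b at the ramp is a placeholder value (in the paper both are measured from the simulation data), and the incoming electrons are treated as a non-relativistic Maxwellian, which is only approximate since vth,e ≈ 0.18c.

```python
import numpy as np

# Illustrative Monte-Carlo sketch of the SDA reflection criteria (Eqs. 4-5) and the
# single-cycle energy gain (Eq. 6). NOT the paper's analysis: dphi -> 0 and an
# assumed mirror ratio b are used here, whereas the paper measures both at the shock.
rng = np.random.default_rng(1)
GAMMA, ME_C2, KB_T1 = 5.0/3.0, 511.0, 8.6
mass_ratio, Ms, thetaBn = 100, 3.0, np.radians(63.0)   # M3.0 model
b_assumed = 3.0                                        # placeholder B/B0 at the ramp

cs1 = np.sqrt(2.0*GAMMA*KB_T1/(mass_ratio*ME_C2))      # c_s1/c
ush = Ms*cs1                                           # shock speed / c
ut  = ush/np.cos(thetaBn)                              # HT-frame flow speed / c
gt  = 1.0/np.sqrt(1.0 - ut**2)
alpha0 = np.arcsin(1.0/np.sqrt(b_assumed))             # loss-cone angle

# upstream electrons in the upstream rest frame (non-relativistic Maxwellian)
vth = np.sqrt(2.0*KB_T1/ME_C2)                         # v_th,e / c
v   = rng.normal(0.0, vth/np.sqrt(2.0), size=(100_000, 3))
vpar, vperp = v[:, 0], np.hypot(v[:, 1], v[:, 2])

# Eq. (4) and the dphi -> 0 limit of Eq. (5): magnetic-mirror reflection in the HT frame
reflected = (vpar < ut) & (vperp > gt*np.tan(alpha0)*np.abs(vpar - ut))
gi   = 1.0/np.sqrt(np.clip(1.0 - (vpar**2 + vperp**2), 1e-6, None))
dgam = 2.0*ut*(ut - vpar)/(1.0 - ut**2)*gi             # Eq. (6), in units of c = 1

print(f"reflected fraction ~ {reflected.mean():.2f} (depends strongly on b and dphi)")
print(f"<dgamma> over reflected electrons ~ {dgam[reflected].mean():.3f}")
```

The printed numbers should be read only as order-of-magnitude illustrations; the quantitative R and ⟨Δγ⟩ in Figure 1(b)-(c) use b(x) and Δφ(x) estimated at the shock surface from the simulations.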
\fElectron Preacceleration in Weak ICM Shocks 9 1200 1400 1600 1800 2000 2200 2400 2600 5 10 15 20 25 30 1400 1600 1800 2000 2200 2400 2600 2800 5 10 15 20 25 30 1400 1600 1800 2000 2200 2400 2600 2800 5 10 15 20 25 30 1600 1800 2000 2200 2400 2600 2800 3000 5 10 15 20 25 30 0 B B pe [c/ ] x \u0001 0 B B (d) M3.2-2D (c) M3.0 (b) M2.3 (a) M2.0 pe [c/ ] x \u0001 Figure 2. Stack plots of the total magnetic \ufb01eld strength, averaged over the transverse direction, B, in the M2.0, M2.3, and M3.0 models from t\u2126ci = 20 (bottom) to t\u2126ci = 30 (top). The M3.2-2D model represents the Q\u2225-shock with Ms = 3.2 and \u03b8Bn = 13\u25e6, taken from Paper I. Here, B0 is the magnetic \ufb01eld strength far upstream. The EFI criterion in weakly magnetized plasmas can be de\ufb01ned as I \u22611 \u2212Te\u22a5 Te\u2225 \u22121.27 \u03b20.95 e\u2225 > 0, (7) where \u03b2e\u2225\u22618\u03c0nekBTe\u2225/B2 0 is the electron beta parallel to the initial magnetic \ufb01eld (Equation (10) of GSN14b). Equation (7) indicates that the instability parameter, I, is larger for higher \u03b2e\u2225for a given value of Te\u22a5/Te\u2225. For higher Ms, R is larger and Te\u22a5/Te\u2225is smaller, leading to larger I. Figure 1(d) shows the instability parameter of shocks with \u03b8Bn = 63\u25e6, as a function of Ms, estimated using the velocity distributions of the electrons which are located within (0 \u22121)rL,i (rL,i is the ion Larmor radius with the upstream \ufb01eld B0) upstream from the shock position in simulation data. For the Ms = 2.0 model, I \u22720 with almost no temperature anisotropy, so the upstream plasma should be stable against the EFI. This \ufb01nding, which will be further updated with simulation results in the next section, suggests that the preacceleration of electrons due to the EFI may not operate e\ufb00ectively in very weak shocks. For Ms > 2, on the other hand, the EFI criterion is satis\ufb01ed and I increases with increasing Ms, implying that larger temperature anisotropies (Te\u2225> Te\u22a5) at higher Ms shocks induce stronger EFIs. Also the \ufb01gure indicates that I increases steeply around Ms \u22482.2 \u22122.3. Additional data points marked with the blue triangles connected with the blue dashed line (\u03b8Bn = 53\u25e6) and the red squares connected with the red dashed line (\u03b8Bn = 73\u25e6) show that Q\u22a5-shocks with higher obliquity angles are more unstable to the EFI. 4. RESULTS 4.1. Shock Structures As discussed in Section 3.1, the criticality de\ufb01ned by the \ufb01rst critical Mach, M \u2217 f , primarily governs the structures and time variations of collisionless shocks. In subcritical shocks, most of the shock kinetic energy is dissipated at the shock transition, resulting in relatively smooth and steady structures. In supercritical shocks, on the other hand, re\ufb02ected ions induce overshoot-undershoot oscillations in the shock transition and ripples along the shock surface. Q\u2225-shocks with Mf > M \u2217 f may undergo quasi-periodic reformation owing to the accumulation of upstream low-frequency waves. Q\u22a5-shocks are less prone to reformation, because re\ufb02ected ions mostly advect downstream after about one gyromotion. \f10 Kang et al. Figure 3. Ion phase-space distributions in the x \u2212pix plane for the M2.0 model (a), the M2.3 model (b), and the M3.0 models (c) at t\u2126ci \u224830. The x-coordinate is measured relative to the shock position, xsh, in units of c/wpe. 
The bar at the top displays the color scale for the log of the ion phase-space density (arbitrary units). In panel (d), the black circles show the fraction of re\ufb02ected ions in the shock ramp region of 0 \u2264x \u2212xsh \u226460c/wpe at t\u2126ci \u224830 for the \ufb01ducial models with mi/me = 100, while the red circles show the same fraction in 0 \u2264x \u2212xsh \u2264240c/wpe at t\u2126ci \u224810 for the three models with mi/me = 400. Figure 2 compares the magnetic \ufb01eld structure for Q\u22a5 and Q\u2225-shocks with di\ufb00erent Ms. In the Q\u22a5-shocks, the overshoot-undershoot oscillation becomes increasingly more evident for higher Ms, but the shock structure seems to be quasi-stationary without any signs of reformation. This is consistent with the fact that the nonlinear whistler critical Mach number for our \ufb01ducial models is M \u2217 nw = 3.2. On the other hand, the Q\u2225-shock in the M3.2-2D model exhibits quasi-periodic reformations. According to the \ufb02uid description of Edmiston & Kennel (1984), M \u2217 f \u22481 for Q\u22a5-shocks in high-\u03b2 plasmas, so the fraction of re\ufb02ected ions is expected be relatively high in all the shock models under consideration. As can be seen in the phase-space distribution of protons in Figure 3(a)-(c), the back-streaming ions turn around mostly within about one ion gyroradius in the shock ramp (x\u2212xsh \u227260c/wpe). Note that with mi/me = 100, in the M3.0 model, the shock ramp corresponds to the region of x \u2212xs < 60c/\u03c9pe, while the foot extends to x \u2212xs \u2248rL,i \u2248200c/\u03c9pe (e.g., Balogh & Truemann 2013). In the M2.0 model, rL,i is smaller and so the characteristic widths of the ramp and foot are accordingly smaller. Figure 3(d) shows that the ion re\ufb02ection fraction, \u03b1ref,i = nref,i/ni, increases with increasing Ms, and such trend is almost independent of the mass ratio mi/me. Here, nref,i was calculated as the number density of ions with vx > 0 in the shock rest frame in the ramp region. Since \u03b1ref,i increases abruptly at Ms \u22482.2 \u22122.3, we may regard M \u2217 f \u22482.3 as an e\ufb00ective value for the \ufb01rst critical Mach number, above which high-\u03b2 Q\u22a5-shocks re\ufb02ect a su\ufb03cient amount of incoming ions and become supercritical. From Figure 2, we can see that ensuing oscillations in shock structures appear noticeable in earnest only for Mf \u22732.3. Our estimation of M \u2217 f is higher than the prediction of Edmiston & Kennel (1984). This might be partly because in high-\u03b2 plasmas, kinetic processes due to fast thermal motions could suppress some of microinstabilities driven by the relative drift between backstreaming and incoming ions, as mentioned before. 4.2. Electron Preacceleration Re\ufb02ected electrons are energized via SDA at the shock ramp, and the consequence can be observed in the phasespace distribution of electrons in Figure 4, (a)-(c) for the M2.0 model and (e)-(g) for the M3.0 model. Since B0 is in the x \u2212y plane, electrons at \ufb01rst gain the zmomentum, pez, through the drift along the motional electric \ufb01eld, E0 = \u2212v0/c \u00d7 B0, and then the gain is distributed to pex and pey during gyration motions. In addition, re\ufb02ected electrons, streaming along the background magnetic \ufb01eld with small pitch angles in the upstream region, have larger positive py than px. 
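The EFI threshold of Equation (7) in Section 3.2.2 is simple enough to evaluate directly. The snippet below (ours, for illustration) shows the critical temperature anisotropy implied by Equation (7) for a few values of βe∥; the actual anisotropies plotted in Figure 1(d) are measured from the simulation data, which are not reproduced here.

```python
import numpy as np

# Equation (7): I = 1 - Te_perp/Te_par - 1.27/beta_par^0.95 > 0 for instability.
def efi_parameter(aniso, beta_par):
    return 1.0 - aniso - 1.27/beta_par**0.95

for beta_par in (20.0, 50.0, 100.0):
    aniso_crit = 1.0 - 1.27/beta_par**0.95      # threshold Te_perp/Te_par
    print(f"beta_e_par={beta_par:5.0f}: EFI requires Te_perp/Te_par < {aniso_crit:.3f}")

# Example: a modest anisotropy of Te_perp/Te_par = 0.95 at beta_e_par = 50
print("I =", round(efi_parameter(0.95, 50.0), 3), "(unstable if > 0)")
```

For βe∥ of order 50-100, an anisotropy of only a few percent is already above threshold, which is why the high-β ICM is a favorable environment for the EFI once SDA-reflected electrons stream upstream.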
Figures 4(d) and (h) also show the distributions of electron density ne (black curve) and By (red curve) around the shock transition. If electrons are accelerated via the full Fermi-I process (i.e., DSA) and in the test-particle regime, the momentum distribution follows the so-called DSA power-law: f(p) \u2248fN \u0012 p pinj \u0013\u2212q exp \" \u2212 \u0012 p pmax \u00132# , (8) where fN is the normalization factor and q(Ms) = 3r/(r \u22121) is the slope (Drury 1983; Kang & Ryu 2010). Here, pmax is the maximum momentum of accelerated electrons that increases with the shock age before any energy losses set in. The injection momentum, pinj, is \fElectron Preacceleration in Weak ICM Shocks 11 Figure 4. Electron phase-space distributions and shock structures for the M2.0 model (left panels) and the M3.0 model (right panels) at wpet \u22481.13 \u00d7 105 (t\u2126ci \u224830). The x-coordinate is measured relative to the shock position, xsh, in units of c/wpe. From top to bottom, the distributions in x \u2212pex, x \u2212pey, and x \u2212pez, and the distributions of electron number density ne and transverse magnetic \ufb01eld By in units of upstream values are shown. The bar at the top displays the color scale for the log of the electron phase-space density (arbitrary units). the minimum momentum with which electrons can diffuse across the shock and be injected to the full FermiI process as described in the Introduction. It marks roughly the boundary between the thermal and nonthermal momentum distributions. The momentum spectrum in Equation (8) can be transformed to the energy spectrum in terms of the Lorentz factor as 4\u03c0p2f(p) dp dE \u221ddN d\u03b3 \u221d(\u03b3 \u22121)\u2212s, (9) where the slope is s(Ms) = q(Ms) \u22122. For instance, s = 2.5 for Ms = 3.0, while s = 2.93 for Ms = 2.0. The injection momentum, which can be estimated as pinj \u223c3pth,i (e.g., Kang et al. 2002; Caprioli et al. 2015, Paper I), is well beyond the highest momentum that electrons can achieve in our PIC simulations. In the M3.0 model, for example, pinj corresponds to \u03b3inj \u224810, while electrons of highest momenta reach only \u03b3 \u22722 (see Figure 5). In other words, our simulations could follow only the preacceleration of suprathermal electrons, which are not energetic enough to di\ufb00use across the shock. Thus, the DSA slope, s(Ms), is not necessarily reproduced in the energy spectra of electrons. However, the development of power-law tails with s(Ms) may indicate that the preaccelerated electrons have undergone a Fermi-like process, as proposed by GSN14a and GSN14b. The upper panels of Figure 5 compare the electron energy spectra, (\u03b3 \u22121)dN/d\u03b3, taken from the upstream region of (0 \u22121)rL,i, ahead of the shock, at t\u2126ci = 10 (blue lines) and 30 (red lines), in the models with di\ufb00erent Ms. In the case of the M3.0 model, the simulation is perform longer, and the spectrum at t\u2126ci = 60 is also shown with the green line (which almost overlaps with the red line). As described in Section 3.1, re\ufb02ected electrons gain energy initially via SDA, and may continue to be accelerated via a Fermi-like process and multiple cycles of SDA, if oblique waves are excited by the EFI. Two points are noticed: (1) In the M2.0 model, the blue and red lines almost coincide, indicating almost no change of the spectrum from t\u2126ci = 10 to 30. The spectrum is similar to that of the electrons energized by a single cycle of SDA, which was illustrated in Figure 7 of GSN14a. 
So the Fermi-like process, followed by the EFI, may not e\ufb03ciently operate in this model. (2) The M3.0 \f12 Kang et al. 10 -4 10 -3 10 -2 10 -1 10 0 10 -4 10 -3 10 -2 10 -1 10 0 10 -4 10 -3 10 -2 10 -1 10 0 10 -4 10 -3 10 -2 10 -1 10 0 10 -3 10 -2 10 -1 10 0 10 -4 10 -3 10 -2 10 -1 10 0 10 -3 10 -2 10 -1 10 0 10 -3 10 -2 10 -1 10 0 10 -4 10 -3 10 -2 10 -1 10 0 (a) M2.0 (b) M2.3 s ~ 2.93 (c) M3.0 s ~ 2.5 (d) M2.3-\u0001 73 (e) M2.3-\u03b250 s ~ 2.93 (f) M3.0-\u03b250 s ~ 2.5 (g) M2.0-m400 (h) M2.3-m400 1 \u0001 \u2212 1 \u0001 \u2212 (i) M3.0-m400 1 \u0001 \u2212 ( \u0001 1 ) d N / d \u0001 ( \u0001 1 ) d N / d \u0001 ( \u0001 1 ) d N / d \u0001 Figure 5. Upstream electron energy spectra at t\u2126ci = 10 (blue lines), t\u2126ci = 30 (red), and t\u2126ci = 60 (green) in various models. The spectra were taken from the region of (0 \u22121)rL,i upstream of the shock. The black dot-dashed lines indicate the test-particle power-laws of Equation (9), while the purple dashed lines show the Maxwellian distributions in the upstream region. model, on the other hand, exhibits a further energization from t\u2126ci = 10 to 30, demonstrating the presence of a Fermi-like process. However, there is no di\ufb00erence in the spectra of t\u2126ci = 30 and 60. As a matter of fact, the energy spectrum of suprathermal electrons seems to saturate beyond t\u2126ci \u224820 (not shown in the \ufb01gure). This should be due to the saturation of the EFI and the lack of further developments of longer wavelength waves (see the next subsection for further discussions). The middle and lower panels of Figure 5 show the electron energy spectra in models with di\ufb00erent parameters. The models with mi/me = 400 were followed only up to tend\u2126ci = 10 (blue lines), because longer computing time is required for larger mi/me. Comparison of the two sets of models with di\ufb00erent values of mi/me con\ufb01rms that the EFI is almost independent of mi/me for su\ufb03ciently large mass ratios, as previously shown by Gary & Nishimura (2003) and GSN14b, and so is the electron acceleration. Figure 5(d) for the M2.3-\u03b873 model indicates that SDA and hence the EFI is more e\ufb03cient at higher obliquity angles, which is consistent with Figure 1(c). Figure 5(e) and (f) for the models with \u03b2 = 50 demonstrate that the EFI is more e\ufb03cient at higher \u03b2. All the models with Ms \u22732.3 show marginal power-lawlike tails beyond the spectra energized by a single cycle of SDA. With the M2.3-r2 and M2.3-r0.5 models, we examined how the electron energy spectrum depends on the grid resolution, although the comparison plots are not shown. Our simulations with di\ufb00erent \u2206x produced essentially the same spectra, especially for the suprathermal part. In Paper I, we calculated the injection fraction, \u03be(Ms, \u03b8Bn, \u03b2), of nonthermal protons with p \u2265pinj for Q\u2225shocks, as a measure of the DSA injection ef\ufb01ciency. Since the simulations in this paper can follow only the preacceleration stage of electrons via an upstream Fermi-like process, we de\ufb01ne and estimate the \u201cfraction of suprathermal electrons\u201d as follows: \u03b6 \u22611 n2 Z pmax pspt 4\u03c0\u27e8f(p)\u27e9p2dp, (10) where \u27e8f(p)\u27e9is the electron distribution function, averaged over the upstream region of (0 \u22121)rL,i, ahead of \fElectron Preacceleration in Weak ICM Shocks 13 2.0 2.2 2.4 2.6 2.8 3.0 10 -4 10 -3 \u03b6( M s ) s M \u2126cit=10 \u2126cit=15 \u2126cit=20 \u2126cit=30 Figure 6. 
Suprathremal fraction, \u03b6, de\ufb01ned in Equation (10), as a function of Ms for the \ufb01ducial models (\u03b8Bn = 63\u25e6) at t\u2126ci = 10 (blue circles), 15 (cyan circles), 20 (green circles), and t\u2126ci = 30 (red circles). The triangles are for the models with \u03b8Bn = 53\u25e6at t\u2126ci = 10 (blue) and 30 (red), while the squares are for the models with \u03b8Bn = 73\u25e6at t\u2126ci = 10 (blue) and 30 (red). the shock. For the \u201csuprathermal momentum\u201d, above which the electron spectrum changes from Maxwellian to power-law-like distribution, we use pspt \u22483.3pth,e. Note that pspt \u2248pinj(mi/me)\u22121/2. For the M3.0 model, for instance, pspt corresponds to \u03b3 \u22481.25. Di\ufb00erent choices of pspt result in di\ufb00erent values of \u03b6, of course, but the dependence on the parameters such as Ms and \u03b8Bn, does not change much. In Figure 6, the circles connected with solid lines show the suprathermal fraction, \u03b6(Ms), for the \ufb01ducial models with \u03b8Bn = 63\u25e6at t\u2126ci = 10 \u221230. This fraction is expected to increase with increasing Ms, since the EFI parameter, I, is larger for higher Ms (see Figure 1(d)). Moreover, it increases with time until t\u2126ci \u224820 due to a Fermi-like process, as shown in Figure 5, except for the M2.0 model where the increase in time is insigni\ufb01cant. However, \u03b6 seems to stop growing for t\u2126ci \u227320, indicating the saturation of electron preacceleration. This is related with the reduction of temperature anisotropy via electron scattering and the ensuing decay of EFIinduced waves, which will be discussed more in the next section. The red solid line in Figure 6 is represented roughly by \u03b6 \u221dM 4 s in the range of 2.3 \u2272Ms \u22643, but it drops rather abruptly below 2.3, deviating from the power-law behavior. We note that the Mach number dependence of \u03b6 is steeper than that of the ion injection fraction for Q\u2225-shocks, which is roughly \u03be \u221dM 1.5 s , as shown in Paper I. This implies that the kinetic processes involved in electron preacceleration might be more sensitive to Ms (see Section 3.2). Figure 6 also shows \u03b6 for models with \u03b8Bn = 53\u25e6(triangles) and \u03b8Bn = 73\u25e6(squares). For shocks with larger \u03b8Bn, the re\ufb02ection of electrons and the average SDA energy gain are larger, resulting in larger I, as shown in Figure 1 (c) and (d). Hence, \u03b6 should be larger at higher obliquity angle. However, for \u03b8Bn > \u03b8limit \u224873 \u221278\u25e6, \u03b6 should begin to decrease, as mentioned in Section 3.2.1. Based on the above results, we propose that the preacceleration of electrons is e\ufb00ective only in Q\u22a5-shocks with Ms \u22732.3 in the hot ICM, that is, M \u2217 ef \u22482.3. We point out that this is close to the \ufb01rst critical Mach number for ion re\ufb02ection, M \u2217 f \u22482.3, estimated from the Mach number dependence of the fraction of re\ufb02ected ions, \u03b1ref,i, shown in Figure 3(d). As shown in Figure 2, overshootundershoot oscillations develop in the shock transition, owing to a su\ufb03cient amount of re\ufb02ected ions, in shocks with Ms \u22732.3; with larger magnetic \ufb01eld compression due to the oscillations, more electrons are re\ufb02ected and energized via SDA (see Section 3.2.1). Hence, we expect that the electron re\ufb02ection is directly linked with the ion re\ufb02ection, so M \u2217 ef would be related with M \u2217 f . 
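For reference, the test-particle DSA slopes of Equations (8)-(9), drawn as the dot-dashed lines in Figure 5, follow directly from the compression ratio. The short sketch below (ours) evaluates them; note that q(Ms) = 3r/(r−1) gives s ≈ 2.93 at Ms ≈ 2.3, matching the value labeled on the M2.3 panels of Figure 5, while at Ms = 2.0 the test-particle slope is steeper.

```python
# Test-particle DSA slopes of Equations (8)-(9): q = 3r/(r-1), s = q - 2.
GAMMA = 5.0/3.0

def dsa_slopes(Ms):
    r = (GAMMA + 1.0)/(GAMMA - 1.0 + 2.0/Ms**2)   # gas compression ratio
    q = 3.0*r/(r - 1.0)                           # momentum-spectrum slope
    return r, q, q - 2.0                          # energy-spectrum slope s

for Ms in (2.0, 2.3, 3.0):
    r, q, s = dsa_slopes(Ms)
    print(f"Ms={Ms:.1f}:  r={r:.2f}  q={q:.2f}  s={s:.2f}")
# Gives s ~ 2.5 at Ms = 3.0, s ~ 2.93 at Ms = 2.3, and s ~ 3.3 at Ms = 2.0.
```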
Note that the critical Mach number, M \u2217 f \u22482.3, is also similar to the \ufb01rst critical Mach number for ion re\ufb02ection and injection to DSA in Q\u2225-shocks in high-\u03b2 plasmas, M \u2217 s \u22482.25 (Paper I). 4.3. Upstream Waves The nature and origin of upstream waves in collisionless shocks have long been investigated through both analytical and simulation studies with the help of in situ observations of Earth\u2019s bow shock. In Q\u2225shocks with low Mach numbers, magnetosonic waves such as phase-standing whistlers and long-wavelength whistlers are known to be excited by backstreaming ions via an ion/ion beam instability (e.g., Krauss-Varban & Omidi 1991). Especially, in supercritical Q\u2225-shocks, the foreshock region is highly turbulent with largeamplitude waves and the shock transition can undergo quasi-periodic reformation due to the nonlinear interaction of accumulated waves and the shock front (see Paper I and Figure 2(d)). In Q\u22a5-shocks, a su\ufb03cient amount of incoming protons can be re\ufb02ected at the shock, which in turn may excite fast magnetosonic waves. As discussed in Section 3.1, two whistler critical Mach numbers, M \u2217 w and M \u2217 nw, are related with the upstream emission of whistler waves and the nonlinear breaking of whistler waves in the shock foot. In some of our models, M \u2217 w < Ms < M \u2217 nw, and hence whistler waves are con\ufb01ned within the shock foot and shock reformation does not occur. \f14 Kang et al. 0 20 40 60 80 (c) M2.0-\u03b4B z / B 0 (b) M2.0-\u03b4B y / B 0 y [ c / \u03c9p e ] -1.0 -0.5 0.0 0.5 1.0 (f) M3.0-\u03b4B z / B 0 (e) M3.0-\u03b4B y / B 0 (d) M3.0-\u03b4B x / B 0 -0.3 0.2 0.7 1.2 1.7 2.2 2.7 3.2 (a) M2.0-\u03b4B x / B 0 0 20 40 60 80 y [ c / \u03c9p e ] -1.0 -0.5 0.0 0.5 1.0 0 20 40 60 80 0 20 40 60 x [ c / \u03c9p e ] y [ c / \u03c9p e ] 0 20 40 60 80 x [ c / \u03c9p e ] 0 20 40 60 80 0 20 40 60 y [ c / \u03c9p e ] x [ c / \u03c9p e ] Figure 7. Magnetic \ufb01eld \ufb02uctuations, \u03b4Bx in (a) and (d), \u03b4By in (b) and (e), and \u03b4Bz in (c) and (f), normalized to B0, in the upstream region of 0 < (x \u2212xsh)wpe/c < 100 at wpet \u22482.63 \u00d7 104 (t\u2126ci \u22487) for the M2.0 model (top panels) and the M3.0 model (bottom panels). In supercritical shocks with Ms \u22732.3, we expect to see the following three kinds of waves: (1) nearly phasestanding whistler waves with kc/\u03c9pi \u223c1 (kc/\u03c9pe \u223c0.1) excited by re\ufb02ected ions (e.g., Hellinger et al. 2007; Scholer & Burgess 2007), where wpi = p 4\u03c0e2n/mi is the ion plasma frequency (wpi = 0.1wpe for me/mi = 100), (2) phase-standing oblique waves with kc/\u03c9pe \u223c0.4 and larger \u03b8Bk (the angle between the wave vector k and B0) excited by the EFI, and (3) propagating waves with kc/\u03c9pe \u223c0.3 and smaller \u03b8Bk, also excited by the EFI (e.g., Hellinger et al. 2014). Here, we focus on the waves excited by the EFI described in Section 3.2.2. Previous studies on the EFI and the EFI-induced waves showed the following characteristics (Gary & Nishimura 2003; Camporeale & Burgess 2008; Hellinger et al. 2014; Lazar et al. 2014, GSN14b). (1) The magnetic \ufb01eld \ufb02uctuations in the EFI-induced waves are predominantly along the direction perpendicular to both k and B0, i.e., |\u03b4Bz| is larger than |\u03b4Bx| and |\u03b4By| in our geometry. 
(2) Phase-standing oblique waves with almost zero oscillation frequencies (\u03c9r \u22480) have higher growth rates than propagating waves (\u03c9r \u0338= 0) . (3) Nonpropagating modes decay to propagating modes with longer wavelengths and smaller \u03b8Bk. (4) The EFIinduced waves scatter electrons, resulting in the reduction of electrons temperature anisotropy, which in turn leads to the damping of the waves. Figure 7 shows the distribution of magnetic \ufb01eld \ufb02uctuations, \u03b4B, in the upstream region for the M2.0 and M3.0 models. The epoch shown, t\u2126ci \u22487 (wpet \u2248 2.63 \u00d7 104) is early; yet, in the M3.0 model, waves are well developed (see also Figure 9), while the energization of electrons is still undergoing (see Figure 5). For the supercritical shock of the M3.0 model, we interpret that there are ion-induced whistlers in the shock ramp region of 0 \u2272x \u2212xs \u227260c/\u03c9pe, while EFI-induced oblique waves are present over the whole region shown. As shown in GSN14b, the EFI-excited waves are oblique with \u03b8Bk \u223c60\u25e6, and |\u03b4Bz| > |\u03b4Bx| and |\u03b4By|. The increase in \u03b4By toward x \u2212xsh = 0 is due to the compression in the shock ramp. In the subcritical shock of the M2.0 model, on the other hand, the fractions of re\ufb02ected ions and electrons are not su\ufb03cient for either the emission of whistler waves or the excitation of EFI-induced waves, so no substantial waves are present in the shock foot. This is consistent with the instability condition shown in Figure 1 (d). Figure 8 compares \u03b4Bz in six di\ufb00erent models at t\u2126ci \u224810. The wave amplitude increases with increasing Ms, and the EFI seems only marginal in the M2.3 models. This result con\ufb01rms our proposal for the \u201cEFI \fElectron Preacceleration in Weak ICM Shocks 15 0 20 40 60 80 (f) M2.3-\u03b250 (e) M2.3-\u03b873 (d) M2.3-\u03b853 (c) M3.0 (b) M2.3 y [ c / \u03c9p e ] -0.8 -0.4 0.0 0.4 0.8 (a) M2.0 0 20 40 60 80 y [ c / \u03c9p e ] 0 20 40 60 80 0 20 40 60 x [ c / \u03c9p e ] y [ c / \u03c9p e ] 0 20 40 60 80 x [ c / \u03c9p e ] 0 20 40 60 80 0 20 40 60 x [ c / \u03c9p e ] y [ c / \u03c9p e ] \u03b4B z / B 0 Figure 8. Magnetic \ufb01eld \ufb02uctuations, \u03b4Bz, normalized to B0, in the upstream region of 0 < (x \u2212xsh)wpe/c < 100 at wpet \u22483.76 \u00d7 104 (t\u2126ci \u224810) for six di\ufb00erent models. critical Mach number\u201d M \u2217 ef \u22482.3, presented in Section 4.2. Moreover, this \ufb01gure corroborates our \ufb01ndings that the EFI is more e\ufb03cient at larger \u03b8Bn and higher \u03b2. From \u03b4Bz of the M2.3 and M3.0 models in Figure 7 and 8, the dominant waves in the shock foot seem to have \u03bb \u223c15 \u221220c/\u03c9pe, so they are consistent with the EFIinduced waves (GSN14b). Figure 9 shows the time evolution of the average of the magnetic \ufb01eld \ufb02uctuations, \u03b4B2 z/B2 0 \u000b , and the magnetic energy power, PBz(k) \u221d|\u03b4Bz(k)|2k, of upstream waves for the M3.0 model. According to the linear analysis by Camporeale & Burgess (2008), the growth rate of the EFI peaks at kmaxc/\u03c9pe \u223c0.4 for \u03b2e\u2225= 10 and Te\u22a5/Te\u2225= 0.7. Thus, we interpret that the powers in the range of kc/\u03c9pe \u223c0.2 \u22120.3 are owing to the oblique waves induced by the EFI, while those of kc/\u03c9pe \u22720.15 are contributed by the phase-standing whistler waves induced by re\ufb02ected ions. 
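One way to construct a power spectrum of the form PBz(k) ∝ |δBz(k)|^2 k, as shown later in Figure 9, from a 2D snapshot δBz(x, y) is sketched below. The binning and normalization choices here are our own and not necessarily those used for the figure; the synthetic input array only exercises the function, whereas in practice δBz would be taken from the simulation output on the 80 × 80 (c/wpe)^2 upstream patch.

```python
import numpy as np

# Sketch: shell-averaged magnetic energy power P_Bz(k) ~ |dBz(k)|^2 k from a 2D field.
def power_spectrum(dBz, dx):
    ny, nx = dBz.shape
    fk  = np.fft.fft2(dBz)/(nx*ny)
    p2d = np.abs(fk)**2
    kx  = 2.0*np.pi*np.fft.fftfreq(nx, d=dx)      # wavenumbers in units of wpe/c
    ky  = 2.0*np.pi*np.fft.fftfreq(ny, d=dx)
    kk  = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    kbins = np.linspace(0.0, kk.max(), 40)
    kcen  = 0.5*(kbins[1:] + kbins[:-1])
    pk, _ = np.histogram(kk, bins=kbins, weights=p2d)
    return kcen, pk*kcen                          # P_Bz(k) ~ |dBz(k)|^2 k

# demo on synthetic noise (800 cells x 0.1 c/wpe = 80 c/wpe per side)
rng = np.random.default_rng(0)
k, pk = power_spectrum(rng.standard_normal((800, 800)), dx=0.1)
print("binned power peaks at k c/wpe ~", round(k[np.argmax(pk)], 2))
```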
Moreover, through periodic box simulations of the EFI, Camporeale & Burgess (2008) and Hellinger et al. (2014) demonstrated that initially nonpropagating oblique modes grow and then saturate, followed by the transfer of wave energy into propagating modes with longer wavelengths and smaller \u03b8Bk. Figure 8 of Camporeale & Burgess (2008) and Figure 4 of Hellinger et al. (2014) show that a cycle of the EFI-induced wave growth and decay occurs with the time scales of t\u2126ce \u223cseveral \u00d7 100. We suggest that the oscillatory behaviors of the excited waves with the time scales of twpe \u223c2 \u00d7 104 \u22124 \u00d7 104 (t\u2126ce \u223c500 \u22121000) shown in Figure 9 would be related to those characteristics of the EFI. Figure 9(c) illustrates such a cycle during the period of twpe \u22481.8 \u22122.1 \u00d7 105: excitation with kmaxc/wpe \u22480.3 \u2192inverse cascade with kmaxc/wpe \u22480.2 \u2192damping of waves. Our results indicate that the EFI-induced waves do not further develop into longer wavelength modes with \u03bb \u226b\u03bbmax, where \u03bbmax \u224815 \u221220c/\u03c9pe is the wavelength of the maximum linear growth. Note that \u03bbmax is close to the gyroradius of electrons with \u03b3 \u22722. Thus, the acceleration of electrons via resonant scattering by the EFI-induced waves is saturated. As a consequence, the energization of electrons stops at the suprathermal stage (\u03b3 < 2) and does not proceed all the way to the DSA injection momentum (\u03b3inj \u224810). We interpret that this result should be due to the intrinsic properties of the EFI, rather than the limitations or artifacts of our simulations, as shown by the studies of Camporeale & Burgess (2008) and Hellinger et al. (2014). Hence, we here conclude that the preacceleration via the EFI alone may not explain the injection of electrons to DSA in weak ICM shocks. However, the conclusion needs to be further veri\ufb01ed through a more detailed study of the EFI and EFI-induced waves for high-\u03b2 ICM plasmas, \f16 Kang et al. 2 4 6 8 10 12 14 16 18 20 22 10-2 10-1 10-1 100 10-2 10-1 100 < \u03b4B z 2 / B 0 2 > 2 4 6 8 10 12 14 16 18 20 22 k c / \u03c9p e t [ 1 0 4 \u03c9 1 p e ] 0.1 t [ 1 0 4 \u03c9 1 p e ] ( b ) 10 -2 10 -1 10 0 0.2 0.4 0.8 P B z ( k ) ( c ) k c / \u03c9p e \u03c9pet=176720 \u03c9pet=184240 \u03c9pet=191760 \u03c9pet=199280 \u03c9pet=206800 ( a ) Figure 9. (a) Time evolution of \u03b4B2 z/B2 0 \u000b , the square of the magnetic \ufb01eld \ufb02uctuations normalized to the background magnetic \ufb01eld, averaged over the square region of 80 \u00d7 80(c/\u03c9pe)2 covering 0 < (x \u2212xsh)wpe/c < 80, for the M3.0 model. (b) Time evolution of PBz(k) \u221d|\u03b4Bz(k)|2k, the magnetic energy power of \u03b4Bz in the square region of 80\u00d780(c/\u03c9pe)2 for the M3.0 model. (c) PBz(k) versus kc/wpe at \ufb01ve di\ufb00erent time epochs. including kinetic linear analyses and numerical simulations, which we leave for a future work. 5. SUMMARY In Q\u22a5-shocks, a substantial fraction of incoming particles are re\ufb02ected at the shock ramp. Most of re\ufb02ected ions are advected downstream along with the underlying magnetic \ufb01eld after about one gyromotin, but yet the structures of the shocks are primarily governed by the dynamics of re\ufb02ected ions. Especially in supercritical shocks, the accumulation of re\ufb02ected ions in the shock ramp generates overshoot-undershoot oscillations in the magnetic \ufb01eld, ion/electron densities, and electric shock potential. 
Re\ufb02ected electrons, on the other hand, can stream along the background magnetic \ufb01eld with small pitch angles in the upstream region. As presented in GSN14a and GSN14b, the SDA re\ufb02ected electrons produce the temperature anisotropy, Te\u2225> Te\u22a5, which induces the EFI; the EFI in turn excites oblique waves in the upstream region. Electrons are then scattered between the shock ramp and the upstream waves, and gain energies via a Fermi-like process involving multiple cycles of SDA. All these processes depend most sensitively on Ms among a number of shock parameters; for instance, the development of the EFI and the energization of electrons are expected to be ine\ufb03cient in very weak shocks with Ms close to unity. In this paper, we studied through 2D PIC simulations the preacceleration of electrons facilitated by the EFI in Q\u22a5-shocks with Ms \u22723 in the high-\u03b2 ICM. Various shock parameters are considered, as listed in Table 1. Our \ufb01ndings can be summarized as follows: 1. For ICM Q\u22a5-shocks, ion re\ufb02ection and overshootundershoot oscillations in the shock structures become increasingly more evident for Ms \u22732.3, while the shock structures seem relatively smooth and quasi-stationary for lower Mach number shocks. Hence we suggest that the e\ufb00ective value of the \ufb01rst critical Mach number would be M \u2217 f \u22482.3, which is higher than previously estimated from the MHD Rankine-Hugoniot jump condition by Edmiston & Kennel (1984). 2. Since electron re\ufb02ection is a\ufb00ected by ion re\ufb02ection and the ensuing growth of overshoot-undershoot oscillation, the EFI critical Mach number, M \u2217 ef \u22482.3, seems to be closely related with M \u2217 f . Oscillations in the shock structures enhance the magnetic mirror in the shock ramp, providing a favorable condition for the e\ufb03cient re\ufb02ection of electrons. Only in shocks with Ms > M \u2217 ef, the re\ufb02ection and SDA of electrons are e\ufb03cient enough to generate su\ufb03cient temperature anisotropies, which can trigger the EFI and the excitation of oblique waves. 3. We presented the fraction of suprathermal electrons, \u03b6(Ms, \u03b8Bn), de\ufb01ned as the number fraction of electrons with p \u2265pspt = 3.3pth,e in the upstream energy spectrum. The suprathermal fraction increases with increasing Ms, roughly as \u03b6 \u221dM 4 s for the \ufb01ducial models. Below M \u2217 ef \u22482.3, \u03b6 drops sharply, indicating ine\ufb03cient electron preacceleration in low Mach number shocks. This fraction also increases with increasing \u03b8Bn. For shocks with larger \u03b8Bn, the re\ufb02ection of electrons and the average SDA energy gain are larger, and hence \u03b6 is larger. 4. In the supercritical M3.0 model, the suprthermal tail of electrons extends to higher \u03b3 in time, but it saturates beyond t\u2126ci \u224820 with the highest energy of \u03b3 \u22722. In order for suprathermal electrons to be in\fElectron Preacceleration in Weak ICM Shocks 17 jected to DSA, their energies should reach at least to \u03b3inj \u227310. We interpret that such saturation is due to the lack of wave powers with long wavelengths. The maximum growth of the EFI in the linear regime is estimated to be at \u03bbmax \u224815 \u221220c/\u03c9pe. The EFI becomes stablized owing to the reduction of electron temperature anisotropy, before waves with \u03bb \u226b\u03bbmax develop. 
This implies that the preacceleration of electrons due to a Fermi-like process and multiple cycles of SDA, facilitated by the upstream waves excited via the EFI, may not proceed all the way to DSA in high-\u03b2, Q\u22a5-shocks. Our results indicate that processes other than those considered in this paper may be crucial to understand the origin of radio relics in galaxy clusters. For instance, in the reacceleration model, pre-existing fossil electrons are assumed (e.g., Kang 2016a,b). Especially, fossil electrons with \u03b3 \u223c10 \u2212100 could be scattered by ion-induced waves and/or pre-existing turbulent waves and participate to DSA. Park et al. (2015), for instance, showed through 1D PIC simulations that electrons can be injected to DSA and accelerated via the full Fermi-I process even in Q\u2225with MA \u2248Ms = 20 and \u03b8Bn = 30\u25e6. In addition, if shock surfaces are highly non-uniform with varying Ms and \u03b8Bn (e.g., Hong et al. 2015; Ha et al. 2018a), the features of Q\u22a5and Q\u2225-shocks may be mixed up, facilitating the upstream environment of abundant waves for electron scattering. However, all these processes need to be investigated in details before their roles are discussed, and we leave such investigations for future works. The authors thank the anonymous referee for constructive comments. H.K. was supported by the Basic Science Research Program of the National Research Foundation of Korea (NRF) through grant 2017R1D1A1A09000567. D.R. and J.-H. H. were supported by the NRF through grants 2016R1A5A1013277 and 2017R1A2A1A05071429. J.-H. H. was also supported by the Global PhD Fellowship of the NRF through 2017H1A2A1042370." + }, + { + "url": "http://arxiv.org/abs/1802.03189v1", + "title": "Effects of Alfvenic Drift on Diffusive Shock Acceleration at Weak Cluster Shocks", + "abstract": "Non-detection of $\\gamma$-ray emission from galaxy clusters has challenged\ndiffusive shock acceleration (DSA) of cosmic-ray (CR) protons at weak\ncollisionless shocks that are expected to form in the intracluster medium. As\nan effort to address this problem, we here explore possible roles of Alfv\\'en\nwaves self-excited via resonant streaming instability during the CR\nacceleration at parallel shocks. The mean drift of Alfv\\'en waves may either\nincrease or decrease the scattering center compression ratio, depending on the\npostshock cross-helicity, leading to either flatter or steeper CR spectra. We\nfirst examine such effects at planar shocks, based on the transport of Alfv\\'en\nwaves in the small amplitude limit. For the shock parameters relevant to\ncluster shocks, Alfv\\'enic drift flattens the CR spectrum slightly, resulting\nin a small increase of the CR acceleration efficiency, $\\eta$. We then consider\ntwo additional, physically motivated cases: (1) postshock waves are isotropized\nvia MHD and plasma processes across the shock transition and (2) postshock\nwaves contain only forward waves propagating along with the flow due to a\npossible gradient of CR pressure behind the shock. In these cases, Alfv\\'enic\ndrift could reduce $\\eta$ by as much as a factor of 5 for weak cluster shocks.\nFor the canonical parameters adopted here, we suggest $\\eta\\sim10^{-4}-10^{-2}$\nfor shocks with sonic Mach number $M_{\\rm s}\\approx2-3$. 
The possible reduction\nof $\\eta$ may help ease the tension between non-detection of $\\gamma$-rays from\ngalaxy clusters and DSA predictions.", + "authors": "Hyesung Kang, Dongsu Ryu", + "published": "2018-02-09", + "updated": "2018-02-09", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Weak shocks with sonic Mach number typically Ms \u2272 a few are expected to form in the intracluster medium (ICM) during the course of hierarchical clustering of the large-scale structure of the Universe (e.g. Ryu et al. 2003; Kang et al. 2007). The presence of such shocks has been established by X-ray and radio observations of many merging clusters (e.g. Markevitch & Vikhlinin 2007; Br\u00a8 uggen et al. 2012; Brunetti & Jones 2014). In particular, di\ufb00use radio sources known as radio relics, located mostly in cluster outskirts, could be explained by cosmic-ray (CR) electrons (re-)accelerated via di\ufb00usive shock acceleration (DSA) at quasi-perpendicular shocks (e.g. van Weeren et al. 2010; Kang et al. 2012; Kang 2017). Although both CR electrons and protons are known to be accelerated at astrophysical shocks such as Earth\u2019s bow shocks and supernova remnant shocks (e.g., Bell 1978; Drury 1983; Blandford & Eichler 1987), the \u03b3-ray emission from galaxy clusters, which would be a unique signature of CR protons, has not been detected with high signi\ufb01cance so far (Ackermann et al. 2014, 2016; Brunetti 2017). In galaxy clusters, di\ufb00use \u03b3-ray emission can arise from inelastic collisions of CR protons with thermal protons, which produce neutral pions, followed by the decay of pions into \u03b3-ray photons (e.g., Miniati et al. 2001; Brunetti & Jones 2014; Brunetti 2017). Using cosmological hydrodynamic simulations, the \u03b3-ray emission has been estimated by modeling the production of CR protons at cluster shocks in several studies (e.g., Ensslin et al. 2007; Pinzke & Pfrommer 2010; Vazza et al. 2016). In particular, Vazza et al. (2016) tested several di\ufb00erent prescriptions for DSA e\ufb03ciency by comparing \u03b3-ray \ufb02ux from simulated clusters with Fermi-LAT upper limits of observed clusters. They found that non-detection of \u03b3ray emission could be understood, only if the CR proton acceleration e\ufb03ciency at weak cluster shocks is on average less than 10\u22123 for shocks with Ms = 2\u22125. On the other hand, recent hybrid plasma simulations demonstrated that about 5 \u221215% of the shock kinetic energy is expected to be transferred to the CR proton energy at quasi-parallel shocks with a wide range of Alfv\u00b4 en Mach numbers, MA, (Caprioli & Spitkovsky 2014a). So there seems to exist a tension between the CR proton acceleration e\ufb03ciency predicted by DSA theory and \u03b3-ray observations of galaxy clusters. It is well established that CR protons streaming along magnetic \ufb01eld lines upstream of parallel shock resonantly excite A\ufb02v\u00b4 en waves with wavenumber k \u223c1/rg via two-stream instability, where rg is the proton Larmor radius (Wentzel 1974; Bell 1978; Lucek & Bell 2000; Schure et al. 2012). These A\ufb02v\u00b4 en waves are circularly Figure 1. Flow velocity con\ufb01guration in the shock rest frame for a 1D planar shock with the background magnetic \ufb01eld parallel to the shock normal (parallel shock). Here, the subscripts 1 and 2 are for preshock and postshock quantities, respectively. The shock faces to the right, so the preshock \ufb02ow speed is u = \u2212u1. 
After upstream backward waves (moving anti-parallel to the flow in the flow rest frame) cross the shock, both transmitted backward waves and reflected forward waves are advected downstream. The convection speeds of waves, Wb1, Wb2, and Wf2, are given in the shock rest frame. polarized in the same sense as the proton gyromotion, i.e., left-handed circularly polarized when they propagate parallel to the background magnetic field. The waves act as scattering centers that can scatter CR particles in pitch angle both upstream and downstream of the shock, leading to the Fermi first-order (Fermi I) acceleration at parallel shocks (Bell 1978). Since CRs are scattered and isotropized in the mean wave frame, the spectral index Γ of the CR energy spectrum, N(E) ∝ E^-Γ, is determined by the convection speed of scattering centers in the shock rest frame, u + uw, instead of the gas flow speed, u (Bell 1978). Here, uw is the mean speed of scattering centers in the local fluid frame, or the speed of so-called Alfvénic drift. The direction and amplitude of Alfvénic drift depend on the difference between the intensity of forward waves (moving parallel to the flow) and that of backward waves (moving anti-parallel to the flow), i.e., (δBf)^2 - (δBb)^2 (Skilling 1975). If forward and backward waves have the same intensity or if waves are completely isotropized, i.e., (δBf)^2 = (δBb)^2, then uw ≈ 0. A nonresonant instability due to the electric current associated with CRs escaping upstream is also known to operate on small wavelengths (Bell 2004; Schure et al. 2012). The excited waves are not Alfvén waves, and have a circular polarization opposite to the sense of the proton gyromotion, i.e., are right-handed circularly polarized when they propagate parallel to the background magnetic field. This nonresonant instability is more unstable at higher k's (smaller wavelengths), and the ratio of the growth rates of nonresonant to resonant instability is roughly Γnonres/Γres ∼ MA/30 (Caprioli & Spitkovsky 2014b). (Figure 2. Radial profiles of the gas density, flow speed, and CR pressure of a model spherical SNR shock that expands outward. Owing to the positive (negative) gradient of PCR, forward (backward) waves are expected to be dominant in the postshock (preshock) region, as illustrated in this figure. So the mean convection velocities of scattering centers point away from the shock both in the upstream and downstream rest frames.) In cluster outskirts where the magnetic field is observed to have B ∼ 1 µG (e.g., Govoni & Feretti 2004), shocks have MA ≲ 30 (see below), so resonant instability is expected to be dominant there. Since we here are interested in cluster shocks, we focus mainly on Alfvén waves excited by resonant streaming instability. Bell (1978) noted that resonant instability would produce mostly backward waves in the preshock region, because CR protons streaming upstream excite waves that move parallel to the streaming direction (that is, travel upstream away from the shock in the upstream rest frame), and any forward waves pre-existing in the preshock flow would be damped due to the gradient of the CR distribution in the shock precursor (Wentzel 1974; Skilling 1975; Lucek & Bell 2000).
Then, the Alfv\u00b4 enic drift speed in the preshock region may be approximated as uw1 \u2248+VA1, where VA = B0/\u221a4\u03c0\u03c1 is the local Alfv\u00b4 en speed. See Figure 1 for the velocity con\ufb01guration in the shock rest frame. Hereafter, the subscripts 1 and 2 refer to the quantities in the preshock and postshock regions, respectively. Alfv\u00b4 enic drift in the postshock region was previously considered in studies of CR acceleration at strong supernova remnant (SNR) shocks (e.g., Zirakashvili & Ptuskin 2008, 2012; Caprioli et al. 2009; Lee et al. 2012; Kang 2013). Those studies suggested that owing to the positive gradients of the CR pressure, PCR, forward waves (moving away from the shock toward the center of supernova explosion) could be dominant in the postshock region, then uw2 \u2248\u2212VA2 (see Figure 2). The e\ufb00ects of Alfv\u00b4 enic drift should be substantial, only if the Alfv\u00b4 en speed is a signi\ufb01cant fraction of the \ufb02ow speed. In SNR shocks, for instance, the Alfv\u00b4 en Mach number is MA = u1/VA \u223c20 \u2212200, depending on the density of the background medium, yet the Alfv\u00b4 enic drift e\ufb00ects could be appreciable (e.g., Caprioli et al. 2009; Kang 2013). For the ICM in cluster outskirts, the sound and Alfv\u00b4 en speeds are given as cs \u22481.14 \u00d7 103 km s\u22121(kBT/5 keV)1/2 and VA \u2248 184 km s\u22121(B/1 \u00b5G)(nH/10\u22124 cm\u22123)\u22121/2, respectively, so \u03b2 \u2261 \u0012 cs VA \u00132 \u224840 \u0010 nH 10\u22124 cm\u22123 \u0011 \u0012 kBT 5.2 keV \u0013 \u0012 B 1 \u00b5G \u0013\u22122 , (1) where kB is the Boltzmann constant. For Ms \u22482 \u22123, the Alfv\u00b4 en Mach number of cluster shocks ranges MA = \u221a\u03b2Ms \u224813 \u221219, which is smaller than that of SNR shocks. Thus, we expect that the Alfv\u00b4 enic drift could have non-negligible e\ufb00ects on DSA at cluster shocks. Note that this de\ufb01nition of \u03b2 di\ufb00ers from the usual plasma beta by a factor of 1.2 for the gas adiabatic index \u03b3 = 5/3; the plasma beta of the ICM has been estimated to be \u223c50\u2212100 (e.g., Ryu et al. 2008; Porter et al. 2015). The transmission and re\ufb02ection of upstream Alfv\u00b4 en waves at shocks can be calculated by solving conservation equations across the shock transition (e.g., Campeanu & Schlickeiser 1992; Vainio & Schlickeiser 1998, 1999; Caprioli et al. 2009). Vainio & Schlickeiser (1998), for instance, used the conservation of mass \ufb02ux, transverse momentum, and tangential electric \ufb01eld \f4 Kang & Ryu to calculate them, in the small wave amplitude limit (b \u2261\u03b4B/B \u226a1) in the one-dimensional (1D) planeparallel geometry. They showed that after purely backward waves cross the shock, forward waves are also generated in the postshock region. Vainio & Schlickeiser (1999) (hereafter VS99) extended the work by including the pressure and energy \ufb02ux of waves across the shock. The transmission and re\ufb02ection of Alfv\u00b4 en waves and so the ensuing CR spectrum are governed by MA, \u03b2, b, and the properties of upstream waves. For certain shock parameters, the e\ufb00ective compression ratio, rsc, which is de\ufb01ned as the velocity jump of scattering centers (see Section 3), can be even larger than the gas compression ratio, r, leading to a \ufb02atter CR energy spectrum. 
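As a quick numerical check of Equation (1) and the quoted Mach numbers, here is a minimal Python sketch. The mean molecular weights mu and mu_H below are assumptions on our part (not stated in the text), chosen as typical ICM values, so the output should only be read as reproducing the quoted order of magnitude: cs ≈ 1.14 × 10^3 km/s, VA ≈ 184 km/s, β ≈ 40, and MA ≈ 13-19 for Ms ≈ 2-3.

```python
import numpy as np

# Rough check of the quoted ICM numbers: cs ~ 1.14e3 km/s, VA ~ 184 km/s,
# beta ~ 40, and MA = sqrt(beta)*Ms ~ 13-19 for Ms = 2-3.
# mu and mu_H are assumed mean molecular weights, not values from the paper.
mH, keV = 1.6726e-24, 1.6022e-9      # proton mass [g], 1 keV [erg]
gamma_ad = 5.0 / 3.0
mu, mu_H = 0.6, 1.4                  # assumed mean molecular weights

kT = 5.2 * keV                       # ICM temperature
nH = 1.0e-4                          # hydrogen number density [cm^-3]
B  = 1.0e-6                          # magnetic field strength [G]

rho = mu_H * mH * nH                          # gas mass density [g cm^-3]
cs  = np.sqrt(gamma_ad * kT / (mu * mH))      # sound speed [cm/s]
VA  = B / np.sqrt(4.0 * np.pi * rho)          # Alfven speed [cm/s]
beta = (cs / VA) ** 2                         # Eq. (1)

for Ms in (2.0, 3.0):
    print(f"Ms={Ms}: cs={cs/1e5:.0f} km/s, VA={VA/1e5:.0f} km/s, "
          f"beta={beta:.0f}, MA={np.sqrt(beta) * Ms:.1f}")
```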
In this paper, we \ufb01rst estimate the e\ufb00ects of Alfv\u00b4 enic drift on the DSA of protons for 1D planar shocks in high beta (\u03b2 \u22651) plasmas, with the transport of Alfv\u00b4 en waves across the shock transition described in VS99. We then consider two other cases, which are physically motivated: (1) postshock waves are isotropized, i.e., uw2 \u22480, and (2) forward waves are dominant in the postshock region, i.e., uw2 \u2248\u2212VA2. We examine the Alfv\u00b4 enic drift e\ufb00ects in these cases too. In the next section, the transmission and re\ufb02ection of upstream Alfv\u00b4 en waves at 1D planar shocks are described. In Section 3, the e\ufb00ects of the drift of Alfv\u00b4 en waves are discussed with the power-law CR proton spectrum in the test-particle limit. A brief summary including implications of our results at weak cluster shocks is given in Section 4. 2. TRANSMISSION AND REFLECTION OF ALFV\u00b4 EN WAVES AT SHOCKS VS99 derived necessary jump conditions for the transport of Alfv\u00b4 en waves across parallel shocks, whose con\ufb01guration is illustrated in Figure 1. We here repeat some of them to make this paper self-contained. The shock moves to the right, so the preshock and postshock \ufb02ow speeds in the shock rest frame are u1 = \u2212u1\u02c6 x and u2 = \u2212u2\u02c6 x, respectively. The background magnetic \ufb01eld is given as B0 = \u2212B0\u02c6 x. CR protons streaming upstream along B0 excite backward waves that travel anti-parallel to the background \ufb02ow in the local \ufb02uid frame. The shock ampli\ufb01es the incoming backward waves and also generates forward waves in the postshock region. The convection speed of backward waves is Wb1,2 = \u2212(u1,2 \u2212VA1,2) < 0 (to the left) both upstream and downstream of a parallel shock for the high beta plasmas with \u03b2 \u22651 considered here. We consider nondispersive, circularly-polarized Alfv\u00b4 en waves with small amplitudes (b \u2261\u03b4B/B \u226a1), propagating along the mean background magnetic \ufb01eld, B0, at 1D planar shocks. Note that the formulae below do not di\ufb00erentiate the handedness of wave polarization, since the conservation equations do not depend on it. The relation for the gas compression ratio, r, across the shock jump can be derived from the RankineHugoniot condition including the pressure and energy \ufb02ux of waves, and is given as the following cubic equation, b2M 2 Ar{(\u03b3 \u22121)r2 + [M 2 A(2 \u2212\u03b3) \u2212(\u03b3 + 1)]r + \u03b3M 2 A} +(M 2 A \u2212r)2{2r\u03b2 \u2212M 2 A[\u03b3 + 1 \u2212(\u03b3 \u22121)r]} = 0, (2) for a given set of parameters, Ms, \u03b2, and b (VS99). Here, \u03b3 = 5/3 is used for the ICM gas. The bottom-left panel of Figure 3 shows the solution of Equation (2), r, for three beta\u2019s (\u03b2 = 1, 10, and 80) and b = 0.1 in the Mach number range of Ms \u22725. Since the background magnetic \ufb01eld is parallel to the shock \ufb02ow (i.e., parallel shocks) and the transverse components of wave \ufb01elds are small (\u03b4B = 0.1B0), r is almost identical to the gas compression ratio of gasdynamic shocks, rgas = (\u03b3 + 1)M 2 s /{(\u03b3 \u22121)M 2 s + 2}, regardless of \u03b2. In fact, r would deviate from rgas, only if b is substantially large or \u03b2 is small. In the same panel, two such cases with (b = 0.3 & \u03b2 = 1) and (b = 0.1 & \u03b2 = 0.5) are shown for comparison, with the green and magenta lines, respectively, to illustrate such dependence. 
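The cubic relation in Equation (2) can be solved numerically. The sketch below is based on our reading of the flattened equation above (so the transcription itself is an assumption) and simply locates the root near the gasdynamic ratio r_gas with a bracketed root finder; for b = 0.1 and high β it should return r close to r_gas, as the text states.

```python
import numpy as np
from scipy.optimize import brentq

gamma_ad = 5.0 / 3.0

def r_gas(Ms):
    """Gasdynamic compression ratio r_gas = (g+1)Ms^2 / ((g-1)Ms^2 + 2)."""
    return (gamma_ad + 1.0) * Ms**2 / ((gamma_ad - 1.0) * Ms**2 + 2.0)

def eq2_lhs(r, Ms, beta, b):
    """LHS of Eq. (2) as we read the flattened text (transcription assumed)."""
    MA2 = beta * Ms**2                                   # MA^2 = beta * Ms^2
    t1 = b**2 * MA2 * r * ((gamma_ad - 1.0) * r**2
         + (MA2 * (2.0 - gamma_ad) - (gamma_ad + 1.0)) * r + gamma_ad * MA2)
    t2 = (MA2 - r)**2 * (2.0 * r * beta
         - MA2 * (gamma_ad + 1.0 - (gamma_ad - 1.0) * r))
    return t1 + t2

beta, b = 40.0, 0.1
for Ms in (2.0, 3.0):
    r = brentq(eq2_lhs, 1.0 + 1e-6, 3.99, args=(Ms, beta, b))
    print(f"Ms={Ms}: r={r:.3f}  (r_gas={r_gas(Ms):.3f})")
```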
Following VS99, the cross-helicity is de\ufb01ned as Hc = (\u03b4Bf)2 \u2212(\u03b4Bb)2 (\u03b4Bf)2 + (\u03b4Bb)2 , (3) where \u03b4Bb and \u03b4Bf are the magnetic \ufb01elds of backward and forward waves, respectively. In the preshock region, backward waves are expected to be dominant for CRmediated shocks (see Introduction), so we assume Hc1 \u2248 \u22121. For power-law energy spectra of waves with slope q, I(k) \u221dk\u2212q, the transmission and re\ufb02ection coe\ufb03cients for backward and forward waves, respectively, in the postshock region are derived from the equations for transverse momentum and tangential electric \ufb01eld, as follows, T \u2261\u03b4Bb 2 \u03b4B1 = r1/2 + 1 2r1/2 \u0012 r MA + Hc1 MA + r1/2Hc1 \u0013(q+1)/2 , (4) R \u2261\u03b4Bf 2 \u03b4B1 = r1/2 \u22121 2r1/2 \u0012 r MA + Hc1 MA \u2212r1/2Hc1 \u0013(q+1)/2 (5) (Vainio & Schlickeiser 1998). Note that these coe\ufb03cients are independent of the wavenumber. According to hybrid simulations of collisionless shocks by Caprioli & Spitkovsky (2014b), for shocks with MA \u227230 where \fEffects of Alfv\u00b4 enic Drift on Diffusive Shock Acceleration 5 Figure 3. Top: Transmission and re\ufb02ection coe\ufb03cients, T and R, and downstream cross-helicity, Hc2, as functions of Ms, for three cases with di\ufb00erent \u03b2\u2019s. Bottom: Gas compression ratio, r, scattering center compression ratio, rsc, and CR spectral index, \u0393, for the same cases. Here, we assume that the upstream cross-helicity is Hc1 = \u22121.0 (backward waves only) and the turbulence power spectrum is speci\ufb01ed with the slope, q = 1.0, and b = 0.1. In the panel for r, two additional cases are shown, the one with b = 0.3 & \u03b2 = 1 by the green line, and that with b = 0.1 & \u03b2 = 0.5 by the magenta line. In the panels for rsc and \u0393, the magenta lines are for the model with Hc2 = 0 (isotropic waves), while the cyan lines are for the model with Hc2 = +1 (forward waves only). The green solid lines show rgas and \u0393gas = (rgas + 2)/(rgas \u22121) for gasdynamics shocks without Alfv\u00b4 enic drift. resonant streaming instability dominantly operates, the spectrum of excited magnetic turbulence in the precursor is consistent with I(k) \u221dk\u22121. So we adopt q = 1. With these coe\ufb03cients, the downstream cross-helicity can be estimated as Hc2 = Hc1 \u00b7 T 2 \u2212R2 T 2 + R2 . (6) The top panels of Figure 3 show T, R, and Hc2, calculated with b = 0.1, q = 1, and Hc1 = \u22121. One can see that incident backward waves are ampli\ufb01ed across the shock with T > 1, while forward waves are generated with 0 < R < 1 (greater R for higher \u03b2) in the postshock region. The ensuing downstream cross-helicity ranges \u22121 < Hc2 \u2272\u22120.85 for the shocks considered here. We note that the quasi-linear treatment adopted here should break down for non-linear waves, which are expected to develop via streaming instabilities at strong shocks. 3. EFFECTS OF ALFV\u00b4 ENIC DRIFT ON DSA 3.1. 
Scattering Center Compression Ratio and CR Spectral Index The CR transport at shocks can be described by the diffusion-convection equation, ∂f/∂t + (u + uw) ∂f/∂x = (1/3) [∂(u + uw)/∂x] p ∂f/∂p + ∂/∂x [κ(x, p) ∂f/∂x] + (1/p^2) ∂/∂p [p^2 Dpp ∂f/∂p], (7) where f(x, p, t) is the pitch-angle-averaged phase space distribution function for CRs, u is the flow speed, uw is the local speed of scattering centers, κ(x, p) is the spatial diffusion coefficient, and Dpp is the momentum diffusion coefficient (Skilling 1975; Bell 1978; Schlickeiser 1989). The effects of Alfvénic drift enter through uw, which is here given as uw1 = Hc1 VA1 and uw2 = Hc2 VA2 in the preshock and postshock regions, respectively. CR particles then experience the velocity change from u1 + Hc1 VA1 to u2 + Hc2 VA2 across the shock, since they are isotropized in the local wave frame. Then the compression ratio of scattering centers, defined as the velocity jump of scattering centers, is given as rsc ≡ (u1 + Hc1 VA1)/(u2 + Hc2 VA2) = r (MA + Hc1)/(MA + r^(1/2) Hc2). (8) Thus, rsc can be different from the gas compression ratio, r, from Equation (2), depending on the cross-helicity. (Figure 4. Test particle spectrum, f(p)·p^4, given in Equation (10), for models with different Ms's. The model parameters are Qi = 3.5, kB T1 = 5.2 keV, nH1 = 10^-4 cm^-3, and B0 = 1 µG (β = 40). Each curve is labeled with Ms. The slopes of the power-law CR proton distributions, anchored to the postshock Maxwellian distributions, are calculated with Equations (8) and (9) for 1D planar shocks. The solid lines represent the models with Hc2 estimated according to VS99, while the dashed lines show the models with Hc2 = 0 (isotropic waves).) The bottom-middle panel of Figure 3 shows that rsc, calculated for 1D planar shocks (VS99), depends on β and can be greater than rgas. But for β ≫ 1, rsc ≈ rgas, since MA ≫ 1. At weak cluster shocks, the CR pressure is dynamically insignificant; that is, shocks are in the test-particle regime, in which the CR energy spectrum, N(E), is represented by a power-law form. Then, its power-law index, Γ, is determined by rsc as Γ = (rsc + 2)/(rsc - 1) (9) (Bell 1978). The bottom-right panel of Figure 3 shows Γ calculated for 1D planar shocks. Flattening of N(E) due to Alfvénic drift could be substantial for β ≈ 1 (red solid lines), for which even Γ < 2 is predicted. It can be seen that for β ≫ 1 (see blue dot-dashed lines), Γ ≈ Γgas since rsc ≈ rgas. 3.2. CR Acceleration Efficiency In the test-particle regime, the amplitude of the CR proton spectrum can be fixed by setting it at the injection momentum, pinj, and then the momentum distribution function at the shock position, xs, is given as f(xs, p) = fN (p/pinj)^-(Γ+2) exp[-(p/pcut)^2], (10) where fN is the normalization factor (Kang & Ryu 2010). The cutoff momentum, pcut, represents the maximum momentum of CR protons that can be accelerated within the shock age, tage, and is given as pcut ∝ u1^2 B0 tage. As long as pcut ≫ mp c, the CR energy density does not depend on its exact value if Γ > 2.
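Equations (4)-(6), (8), and (9) can be chained into a few lines of Python. The grouping of the flattened fractions below is our reading of the garbled typography, and r ≈ r_gas is used for the small-amplitude, high-β shocks discussed here; with these assumptions the computed Hc2 falls in the quoted range -1 < Hc2 ≲ -0.85 and Γ comes out only slightly flatter than Γ_gas, consistent with the text.

```python
import numpy as np

gamma_ad, q, Hc1 = 5.0 / 3.0, 1.0, -1.0   # wave slope q = 1, backward waves upstream

def r_gas(Ms):
    return (gamma_ad + 1.0) * Ms**2 / ((gamma_ad - 1.0) * Ms**2 + 2.0)

def drift_chain(Ms, beta):
    r, MA = r_gas(Ms), np.sqrt(beta) * Ms     # r ~ r_gas for b << 1 (see text)
    sr = np.sqrt(r)
    T = (sr + 1.0) / (2.0 * sr) * (r * (MA + Hc1) / (MA + sr * Hc1)) ** ((q + 1) / 2)  # Eq. (4)
    R = (sr - 1.0) / (2.0 * sr) * (r * (MA + Hc1) / (MA - sr * Hc1)) ** ((q + 1) / 2)  # Eq. (5)
    Hc2 = Hc1 * (T**2 - R**2) / (T**2 + R**2)                                          # Eq. (6)
    rsc = r * (MA + Hc1) / (MA + sr * Hc2)                                             # Eq. (8)
    Gamma = (rsc + 2.0) / (rsc - 1.0)                                                  # Eq. (9)
    return r, T, R, Hc2, rsc, Gamma

for Ms in (2.0, 2.5, 3.0):
    r, T, R, Hc2, rsc, Gamma = drift_chain(Ms, beta=40.0)
    print(f"Ms={Ms}: T={T:.2f} R={R:.2f} Hc2={Hc2:.2f} "
          f"rsc={rsc:.2f} Gamma={Gamma:.2f} (Gamma_gas={(r + 2)/(r - 1):.2f})")
```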
Here, we de\ufb01ne pinj as the minimum momentum above which protons can cross the shock transition and participate in the Fermi I acceleration process, and describe it with the injection parameter, Qi, as pinj \u2261Qi \u00b7 pth, (11) where pth = p 2mpkBT2 is the proton thermal peak momentum of the postshock gas with temperature T2 (Kang & Ryu 2010). Using hybrid simulations, Caprioli et al. (2015) demonstrated that the injection momentum increases with the shock obliquity angle, \u0398Bn, and Qi \u22483.3 \u22124.6 for quasi-parallel shocks (\u0398Bn \u227245\u25e6) with MA = 5 \u221250 and \u03b2 \u22481. The injection parameter should be a\ufb00ected by the strength of self-generated MHD turbulence, which in turn depends on MA and \u03b2, in addition to \u0398Bn. It is also expected to increase in time as the particle spectrum extends to higher energies for strong shocks with p\u22124 momentum distribution, since the CR conversion e\ufb03ciency cannot be greater than 100 %. More accurate estimation of Qi for weak cluster shocks in high \u03b2 ICM plasmas, however, could be made only through kinetic plasma simulations, but its value has not yet been precisely de\ufb01ned (see, e.g., Caprioli & Spitkovsky 2014a; Caprioli et al. 2015). Assuming that f(xs, p) is anchored to the postshock Maxwellian distribution at pinj, the normalization factor \fEffects of Alfv\u00b4 enic Drift on Diffusive Shock Acceleration 7 Figure 5. Left: Power-law slope, \u0393, calculated with the scattering center compression ratio, rsc, in Equation (8). The model parameters are nH1 = 10\u22124 cm\u22123, kBT1 = 5.2 keV, and B0 = 1 \u00b5G(\u03b2/40)\u22121/2. The value of beta is \u03b2 = 10 (black), 40 (red), and 80 (blue). The postshock cross-helicity, Hc2, is calculated by following VS99. Middle: CR injection fraction, \u03be, with Qi = 3.5 (solid lines with circles), and with Qi = 3.8 (dashed lines with triangles). Right: CR proton acceleration e\ufb03ciency, \u03b7, calculated with the test-particle spectrum in Equation (10) with Qi = 3.5 (solid lines with circles) and 3.8 (dashed lines with triangles). The green lines show \u0393gas and \u03b7 for gasdynamic shocks without Alf\u00b4 enic drift. is given as fN = nH2 \u03c01.5 p\u22123 th exp(\u2212Q2 i ), (12) where nH2 is the postshock hydrogen number density (Kang & Ryu 2010). Figure 4 illustrates how the test-particle spectrum in Equation (10) depends on the sonic Mach number, Ms. We adopt the relevant parameters for cluster shocks, kBT1 = 5.2 keV, nH1 = 10\u22124 cm\u22123, and B0 = 1 \u00b5G, resulting in \u03b2 \u224840. We set Qi = 3.5 as a representative value, since we here model mostly parallel shocks with small obliquity angles. (Below, we also consider Qi = 3.8 as a comparison case.) The CR spectra shown have the power-law indices, \u0393\u2019s, from Equations (8) and (9), which are calculated with Hc2 estimated according to VS99 for 1D planar shocks (also with Hc2 = 0, see Section 3.3). The cuto\ufb00momentum for tage = 108 yr is drawn for an illustrative purpose. With the spectrum in Equation (10) and \u0393 > 2 for weak shocks, the CR injection fraction can be estimated as \u03be \u2261 1 nH2 Z pcut pmin 4\u03c0f(rsc, p)p2dp \u2248 4 \u221a\u03c0(\u0393 \u22121)Q3 i exp(\u2212Q2 i ), (13) if we take pmin = pinj as the lower boundary of the CR momentum distribution (Kang & Ryu 2010). According to this de\ufb01nition, the CR injection fraction depends mainly on \u0393 and Qi, since normally pcut \u226bmpc. 
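Equation (13) is simple enough to evaluate directly. The following sketch takes Γ ≈ 2.4 as representative of Ms ≈ 3 cluster shocks (our assumption); the resulting ξ values for Qi = 3.5 and 3.8 should come out close to the numbers quoted in the next paragraph.

```python
import numpy as np

def xi_inj(Gamma, Qi):
    """Eq. (13): xi ~ 4 Qi^3 exp(-Qi^2) / [sqrt(pi) (Gamma - 1)]."""
    return 4.0 * Qi**3 * np.exp(-Qi**2) / (np.sqrt(np.pi) * (Gamma - 1.0))

# Gamma ~ 2.4 is taken here as representative of Ms ~ 3 cluster shocks.
for Qi in (3.5, 3.8):
    print(f"Qi={Qi}: xi ~ {xi_inj(2.43, Qi):.1e}")
```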
We also de\ufb01ne the CR acceleration e\ufb03ciency as the ratio of the downstream CR energy \ufb02ux to the shock kinetic energy \ufb02ux, as follows, \u03b7 \u2261fCR fkin = u2ECR (1/2)\u03c11u3 1 (14) (Kang & Ryu 2013). Here the postshock CR energy density is given as ECR = 4\u03c0mpc2 Z pcut pmin ( p p2 + 1 \u22121)f(xs, p)p2dp, (15) where the particle momentum p is expressed in units of mpc. Again, we take pmin = pinj in the calculation of ECR below. Note that in general, the CR injection fraction and the DSA e\ufb03ciency sensitively depend on how one speci\ufb01es pmin, since the CR number is dominated by nonrelativistic particles with p \u223cpinj. The left panel of Figure 5 shows the power-law slope, \u0393, estimated with Hc2, which is calculated according to VS99. Here \u03b2 varies in the ranges relevant to cluster shocks, \u03b2 = 10 \u221280, so does the background magnetic \ufb01eld as B0 = 1 \u00b5G(\u03b2/40)\u22121/2. One can see that at \f8 Kang & Ryu Figure 6. Left: Power-law slope, \u0393, calculated with the scattering center compression ratio, rsc, in Equation (8). The magenta and cyan lines are for Hc2 = 0 and Hc2 = +1, respectively, while black line shows the case with Hc2 calculated by following VS99. The model parameters are nH1 = 10\u22124 cm\u22123, kBT1 = 5.2 keV, and B0 = 1 \u00b5G (\u03b2 = 40). Middle: CR injection fraction, \u03be, with Qi = 3.5 (solid lines with circles) and 3.8 (dashed lines with triangles). Right: CR acceleration e\ufb03ciency, \u03b7, calculated with the test-particle spectrum given in Equation (10) with Qi = 3.5 (solid lines with circles) and 3.8 (dashed lines with triangles). The green lines show \u0393gas and \u03b7 for gasdynamic shocks without Alf\u00b4 enic drift. weak cluster shocks, Hc2 based on VS99 could \ufb02atten the CR spectrum slightly, compared to gasdynamic shocks without Alf\u00b4 enic drift (green line). But for \u03b2 \u226b1 the dependence of \u0393 on \u03b2 is rather weak. The middle and right panels of Figure 5 show the injection fraction, \u03be, and the CR acceleration e\ufb03ciency, \u03b7, respectively, calculated with the test-particle spectrum in Equation (10) with the slope \u0393 shown in the left panel. Here, the adopted values of kBT1 and nH1 are the same as in Figure 4. Both \u03be and \u03b7 strongly depend on Qi through the normalization factor fN, due to the exponential nature of the tail in the Maxwellian distribution. While \u03be \u221dQ3 i exp(\u2212Q2 i ) from Equation (13), the CR acceleration e\ufb03ciency can be approximated as \u03b7 \u221dQ5 i exp(\u2212Q2 i ) for weak shocks with power-law spectra much steeper than p\u22124 (dominated by nonrelativisitc particles). So \u03be decreases by a factor of 7 as Qi increases from 3.5 to 3.8, while \u03b7 decreases roughly by a factor of 6 or so. For cluster shocks with Ms \u22723, \u03be \u22723.2 \u00d7 10\u22124 and \u03b7 \u22722.2 \u00d7 10\u22122 for Qi = 3.5, while \u03be \u22724.6 \u00d7 10\u22125 and \u03b7 \u22723.6\u00d710\u22123 for Qi = 3.8. This indicates that the estimated CR injection fraction and acceleration e\ufb03ciency could easily di\ufb00er by an order of magnitude, depending on the adopted Qi. For parallel shocks with small obliquity angles (i.e., \u0398Bn \u227215\u25e6), however, we expect that Qi is unlikely to be much larger than 3.8 (Caprioli et al. 2015). 3.3. 
Cases with Hc2 \u22480 and Hc2 \u2248+1 The overall morphology of cluster shocks, induced mainly by merger-driven activities in turbulent ICMs, is expected to be quite complex and di\ufb00erent from simple 1D planar shocks (see, e.g., Vazza et al. 2017; Ha et al. 2017). Rather, it can be characterized by portions of spherically expanding shells, composed of multiple shocks with di\ufb00erent properties. In addition, vorticity is generated behind curved shock surfaces, leading to turbulent cascade over a wide range of length scales and turbulent ampli\ufb01cation of magnetic \ufb01elds in the postshock \ufb02ow (see, e.g. Ryu et al. 2008; Vazza et al. 2017). Then, downstream waves could be isotropized through various MHD and plasma processes in the postshock region, resulting in zero cross-helicity, Hc2 \u22480 (equal strengths of T and R). Note that Fermi II acceleration should be operative in this case, but it is expected to be much less e\ufb03cient than Fermi I acceleration. In addition, as mentioned in the Introduction, the CR particle distribution peaks at the shock (i.e., decreases downstream) in spherical shocks or even in evolving planar shocks in which the CR pressure at the shock is increasing with time. In that case, the gradient of PCR is \fEffects of Alfv\u00b4 enic Drift on Diffusive Shock Acceleration 9 expected to damp backward waves, leaving dominantly forward waves with Hc2 \u2248+1 in the postshock region (Bell 1978; Zirakashvili & Ptuskin 2008; Caprioli et al. 2009). Hence, we here quantitatively examine the e\ufb00ects of Alfv\u00b4 enic drift in these physically motivated cases with Hc2 = 0 and Hc2 = +1, as phenomenological models. In the panels for rsc and \u0393 of Figure 3, the magenta and cyan lines show Hc2 = 0 and Hc2 = +1 cases, respectively. In fact, the scattering center compression ratio is minimized for Hc2 = +1 (see Equation (8)). So this represents the case with the greatest impact of Alfv\u00b4 enic drift (the largest \u0393). Moreover, Figure 4 compares the models with Hc2 estimated according to VS99 and the models with Hc2 = 0 (isotropic waves), demonstrating how the Alfv\u00b4 enic drift may a\ufb00ect the CR spectrum. Figure 6 shows \u0393, \u03be, and \u03b7 for the cases with Hc2 = 0 (magenta lines) and Hc2 = +1 (cyan lines), for the model parameters relevant to cluster shocks and Qi = 3.5 and 3.8. The case for 1D planar shocks with \u03b2 = 40, calculated by following the VS99 approach (black lines), where \u22121 \u2272Hc2 \u2272\u22120.85, is also plotted for comparison. Again the green lines show the results for gasdynamic shocks without Alfv\u00b4 enic drift. The scattering center compression ratio, rsc, is smaller for larger Hc2, resulting in steeper \u0393, hence, smaller \u03be and \u03b7. For weak cluster shocks with 2 \u2272Ms \u22723 and isotropic downstream waves with Hc2 = 0, the CR acceleration e\ufb03ciency is 10\u22123 \u2272\u03b7 \u227210\u22122 for Qi = 3.5 and 2 \u00d7 10\u22124 \u2272\u03b7 \u22721.5 \u00d7 10\u22123 for Qi = 3.8. For the case of dominantly forward waves with Hc2 = +1, on the other hand, 7 \u00d7 10\u22124 \u2272\u03b7 \u22724 \u00d7 10\u22123 for Qi = 3.5 and 10\u22124 \u2272\u03b7 \u22727 \u00d7 10\u22124 for Qi = 3.8. Our results indicate that \u03b7 could be reduced by \u201ca factor of up to \u223c5\u201d due to Alfv\u00b4 enic drift alone. 
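For a side-by-side feel of the three downstream-wave prescriptions, the short sketch below evaluates Γ from Equations (8)-(9) for Hc2 ≈ -0.9 (VS99-like), Hc2 = 0 (isotropic), and Hc2 = +1 (forward only) at Ms = 3 and β = 40, again assuming r ≈ r_gas; the progressive steepening of the spectrum is what drives the reductions of ξ and η discussed above.

```python
import numpy as np

gamma_ad = 5.0 / 3.0
Ms, beta, Hc1 = 3.0, 40.0, -1.0
r  = (gamma_ad + 1.0) * Ms**2 / ((gamma_ad - 1.0) * Ms**2 + 2.0)   # r ~ r_gas
MA = np.sqrt(beta) * Ms

def spectral_index(Hc2):
    rsc = r * (MA + Hc1) / (MA + np.sqrt(r) * Hc2)   # Eq. (8)
    return (rsc + 2.0) / (rsc - 1.0)                 # Eq. (9)

for label, Hc2 in [("VS99-like (Hc2 ~ -0.9) ", -0.9),
                   ("isotropic (Hc2 = 0)    ",  0.0),
                   ("forward-only (Hc2 = +1)",  1.0)]:
    print(label, f"Gamma = {spectral_index(Hc2):.2f}")
print("gasdynamic (no drift)    Gamma =", f"{(r + 2.0) / (r - 1.0):.2f}")
```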
Thus, to quantify the CR acceleration e\ufb03ciency, it could be crucial not only to constrain the injection parameter Qi through plasma simulations, but also to account for Alfv\u00b4 enic drift e\ufb00ects. 4. SUMMARY We study the e\ufb00ects of Alfv\u00b4 enic drift on the DSA of CR protons at weak shocks in high beta ICM plasmas. We assume that upstream Alfv\u00b4 en waves are self-excited by CR protons via resonant streaming instability at parallel shocks (Lucek & Bell 2000; Schure et al. 2012). Such waves are mostly backward, moving anti-parallel to the background \ufb02ow (Bell 1978), so they can be characterized by the cross-helicity of Hc1 \u2248\u22121 (see Equation (3) for the de\ufb01nition of Hc). Since CR protons are scattered and isotropized in the local wave frame, the scattering center compression ratio, rsc, in Equation (8), which accounts for the mean drift of Alv\u00b4 en waves, determines the spectral index, \u0393, of the CR spectrum in the test-particle limit. We \ufb01rst consider 1D planar shocks where the transport of Alfv\u00b4 en waves across the shock transition is described in the small wave amplitude limit (b \u2261\u03b4B/B \u226a 1) (Vainio & Schlickeiser 1998, and V99). In this limit, as noted by VS99, Alfv\u00b4 enic drift may increase or decrease rsc, depending on the shock parameters. This results in the CR spectra either \ufb02atter or steeper, compared to that for gasdynamic shocks without Alfv\u00b4 enic drift. For shocks with Ms \u22723 and \u03b2 \u2261(MA/Ms)2 \u223c 40 \u221280, a mixture of backward and forward waves are present in the postshock region with the postshock crosshelicity estimated to \u22121 \u2272Hc2 \u2272\u22120.85, leading to only a slight decrease of \u0393 (see Figure 3). That is, for weak cluster shocks, rsc \u2248rgas and \u0393 \u2248\u0393gas, and so the effects of Alfv\u00b4 enic drift on the DSA e\ufb03ciency are only marginal (see Figure 5). We then consider two additional, physically motivated cases: (1) downstream waves are isotropic with Hc2 \u22480, and (2) they are dominantly forward with Hc2 \u2248+1. The former could be realistic, if waves are isotropized via a variety of MHD and plasma processes including turbulence while they cross the shock transition. The latter may be relevant, if the CR pressure distribution peaks at the shock as in spherical SNR shocks or evolving planar shocks. In these two cases, Alfv\u00b4 enic drift causes the CR spectrum to be steeper, which results in signi\ufb01cant reductions of the CR injection fraction, \u03be, and the CR acceleration e\ufb03ciency, \u03b7 (see Figure 6). In the case of Hc2 \u2248+1, for example, the CR proton acceleration ef\ufb01ciency for shocks with Ms \u22723 and \u03b2 \u224840 could be reduced by \u201ca factor of up to 5\u201d, compared to that for gasdynamic shocks. So we conclude that the Alfv\u00b4 enic drift e\ufb00ects on the DSA e\ufb03ciency could be substantial at weak cluster shocks. We note that the CR acceleration e\ufb03ciency is most sensitive to the injection momentum, or, the injection parameter, Qi, de\ufb01ned in Equation (11). Increasing Qi from 3.5 to 3.8 (about 10 %), for instance, reduces \u03b7 by a factor of 5 \u22127. For parallel shocks with small obliquity angles, we expect that Qi = 3.5\u22123.8 would be a reasonable range. 
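The quoted factor of 5-7 sensitivity to Qi follows directly from the approximate scalings ξ ∝ Qi^3 exp(-Qi^2) and η ∝ Qi^5 exp(-Qi^2) mentioned earlier; a two-line check:

```python
import numpy as np

Q1, Q2 = 3.5, 3.8
xi_ratio  = (Q1 / Q2)**3 * np.exp(Q2**2 - Q1**2)   # xi  ~ Qi^3 exp(-Qi^2)
eta_ratio = (Q1 / Q2)**5 * np.exp(Q2**2 - Q1**2)   # eta ~ Qi^5 exp(-Qi^2)
print(f"xi(3.5)/xi(3.8)   ~ {xi_ratio:.1f}")       # close to the quoted ~7
print(f"eta(3.5)/eta(3.8) ~ {eta_ratio:.1f}")      # within the quoted 5-7
```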
Thus, in order to reliably estimate the CR proton acceleration e\ufb03ciency at weak cluster shocks, it is important to understand the kinetic plasma processes that govern the particle injection to Fermi I acceleration at collisionless shocks at high beta plasmas. We suggest \u03b7 could vary in a wide range of 10\u22124\u221210\u22122 for weak cluster shocks with Ms \u22482 \u22123, depending on Hc2, \u0398Bn, and \u03b2. Such estimate could be smaller by up to an order of magnitude than that adopted in the previous studies such as Vazza et al. (2016). So this study \f10 Kang & Ryu implies that there remains room for the DSA prediction for CR proton acceleration at cluster shocks to be compatible with non-detection of \u03b3-ray emission from galaxy clusters (Ackermann et al. 2014, 2016). Yet, we emphasize that eventually detailed quantitative studies of DSA at weak cluster shocks using kinetic plasma simulations should be crucial for solving this problem. We thank the anonymous referee for constructive comments. H.K. was supported by the Basic Science Research Program of the NRF of Korea through grant 2017R1D1A1A09000567. D.R. was supported by the NRF of Korea through grants 2016R1A5A1013277 and 2017R1A2A1A05071429. The authors also thank R. Schlickeiser for helpful comments during the initial stage of this work." + }, + { + "url": "http://arxiv.org/abs/1706.03548v1", + "title": "Particle Acceleration at Structure Formation Shocks", + "abstract": "Cosmological hydrodynamic simulations have demonstrated that shock waves\ncould be produced in the intergalactic medium by supersonic flow motions during\nthe course of hierarchical clustering of the large-scale-structure in the\nUniverse. Similar to interplanetary shocks and supernova remnants (SNRs), these\nstructure formation shocks can accelerate cosmic ray (CR) protons and electrons\nvia diffusive shock acceleration. External accretion shocks, which form in the\noutermost surfaces of nonlinear structures, are as strong as SNR shocks and\ncould be potential accelerations sites for high energy CR protons up to\n$10^{18}$ eV. But it could be difficult to detect their signatures due to\nextremely low kinetic energy flux associated with those accretion shocks. On\nthe other hand, radiative features of internal shocks in the hot intracluster\nmedium have been identified as temperature and density discontinuities in X-ray\nobservations and diffuse radio emission from accelerated CR electrons. However,\nthe non-detection of gamma-ray emission from galaxy clusters due to $\\pi^0$\ndecay still remains to be an outstanding problem.", + "authors": "Hyesung Kang", + "published": "2017-06-12", + "updated": "2017-06-12", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "Introduction In [1] shocks in the intracluster medium (ICM) appeared as candidate acceleration sites for ultra-highenergy cosmic rays (CRs) in the so-called \u2018Hillas diagram\u2019, in which the maximum energy of CR nuclei achievable by a cosmic accelerator was estimated from the con\ufb01nement condition: Emax(ZeV) \u223cz \u00b7 \u03b2a \u00b7 B\u00b5G \u00b7 LMpc, (1) where Emax is given in units of 1021 eV, z is the charge of CR nuclei, and \u03b2a = va/c, B\u00b5G, and LMpc are the characteristic speed, the magnetic \ufb01eld strength in units of microgauss, and the size in units of Mpc of the accelerator, respectively. 
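Equation (1) of this paper (the Hillas confinement estimate) is straightforward to evaluate; a minimal sketch for cluster-scale parameters, using the illustrative values quoted in the next sentence:

```python
def hillas_emax_eV(z=1.0, beta_a=0.01, B_muG=1.0, L_Mpc=1.0):
    """Eq. (1): Emax ~ z * beta_a * B_muG * L_Mpc in units of 10^21 eV (ZeV)."""
    return z * beta_a * B_muG * L_Mpc * 1e21

print(f"Emax ~ {hillas_emax_eV():.1e} eV")   # ~1e19 eV for cluster-scale shocks
```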
For shocks associated with galaxy clusters with βa ∼ 0.01, BµG ∼ 1, LMpc ∼ 1, CR protons could be accelerated up to ∼10^19 eV. [2] first suggested that cosmic shocks induced by the structure formation can accelerate CR protons up to 10^19.5 eV via diffusive shock acceleration (DSA). Independently and more or less simultaneously, [3] showed, using cosmological hydrodynamic simulations, that accretion shocks around galaxy clusters have vs ∼ 3 × 10^3 km s^-1, and suggested that, for the Bohm diffusion with microgauss magnetic fields, the maximum energy of protons achieved via DSA by cluster accretion shocks is limited to ∼60 EeV (τacc = τpion), due to the energy loss via photo-pion interactions with the cosmic background radiation (see Figure 1). (Figure 1: Energy loss time scales for CR protons due to pair production (τpair, thick dashed line) and pion production (τpion, thin dashed) on the cosmic background radiation. The thick solid line represents the time scale due to the sum of the two loss processes. Shock acceleration time scales for Bohm (τBohm, dot-dashed) and Jokipii (τJokipii, thin solid) diffusion at the shock with us = 10^3 km s^-1 and B = 1 µG [4].) Adopting simple models for magnetic field strength and DSA, and an analytic relation between the cluster temperature and the spherical accretion shock, [4] showed that the CR protons from a cosmological ensemble of cluster accretion shocks could make a significant contribution to the observed CR flux near 10^19 eV. Observational evidence for the electron acceleration by a cluster accretion shock was first suggested by [5], who proposed that diffuse radio relics detected in the outskirts of several clusters could be diffuse synchrotron emission from fossil electrons re-energized by
The average spatial frequency between shock surfaces is ∼1 Mpc^-1 inside nonlinear structures of clusters, filaments, and sheets. (Figure 2: Two-dimensional slice showing X-ray emissivity, gas density, temperature, and shock locations around a galaxy cluster in a structure formation simulation. Strong external accretion shocks form in the outer surfaces of the cluster, while weak internal shocks reside inside the virialized central region [14].) These shocks can be classified mainly into two categories: (1) external accretion shocks with the Mach number, 3 ≲ Ms ≲ 100, that form around the outermost surfaces of nonlinear structures, and (2) internal shocks mostly with Ms ≲ 5 that form in the hot ICM inside nonlinear structures [14]. In Figure 2, external accretion shocks encompassing the cluster coincide with the region with sharp temperature discontinuities, indicating high Mach number shocks. On the other hand, weak internal shocks within a few Mpc from the cluster center are associated with mild temperature variations. The presence of internal shocks has been confirmed in many merging clusters, while radiative signatures of external accretion shocks have not been detected so far due to very low surface brightness. Weak internal shocks with 2 ≲ Ms ≲ 3 have high kinetic energy flux and are responsible for most of the shock energy dissipation into heat and nonthermal components of the ICM such as CRs, magnetic fields, and turbulence. By adopting a DSA model of CR proton acceleration, [14] predicted that the ratio of the CR proton to gas thermal energies dissipated at all cosmological shocks through the history of the Universe could be substantial, perhaps up to 50%. However, this estimate has to be revised to significantly lower values as we will
As shown in Figure 3, the volume-averaged magnetic \ufb01eld strength ranges 0.1 \u22121 \u00b5G in the ICM (T > 107 K) and 0.01 \u22120.1 \u00b5G in \ufb01laments (105 < T < 107 K), which seems to be consistent with observations [22, 21]. In the peripheral regions \u223c5 Mpc away from the cluster center where external accretions are expected to form, the magnetic \ufb01eld strength should be similar to that of \ufb01laments, i.e., \u223c0.01 \u22120.1 \u00b5G. The magnetic \ufb01elds should be much weaker in sheet-like structures and voids, but neither theoretical nor observational estimates are well de\ufb01ned in such low density regions. Relativistic protons and electrons with the same rigidity (R = pc/ze) are accelerated in the same way in DSA regime. But for the particle injection to the DSA process, the obliquity angle, \u0398Bn, becomes an important factor. At quasi-parallel shocks (\u0398Bn \u227245\u25e6), where the magnetic \ufb01eld direction is roughly parallel to the \ufb02ow velocity, MHD waves are self-generated due to streaming of CR protons upstream of the shock, and protons are injected/accelerated e\ufb03ciently to high energies via DSA [26, 27, 28]. At quasi-perpendicular shocks (\u0398Bn \u227345\u25e6), on the other hand, electrons tend to be re\ufb02ected at the shock front and accelerated via shock drift acceleration (SDA) and may further go through the Figure 3: Magnetic \ufb01eld ampli\ufb01cation based on turbulence dynamo in a structure formations simulation. Volume-averaged (left) and massaveraged (right) magnetic \ufb01eld strength as a function of redshift z for the intergalactic medium in four temperature ranges, T > 107 K (red, ICM), T = 105 \u2212107 K (blue, WHIM), T = 104 \u2212105 K (cyan), and T < 104 K (green), and for all (black) the gas [19]. Fermi I acceleration process, if they are scattered by plasma waves excited in the preshock region [29]. In addition to \u0398Bn, excitation of MHD/kinetic waves by plasma instabilities and wave-particle interactions at collisionless shocks depend on the shock parameters such as the plasma beta, \u03b2p = Pgas/PB, and the Alfv\u00b4 en Mach number, MA \u2248p\u03b2pMs. For the internal ICM shocks, \u03b2p \u223c50, Ms \u22723, and MA \u227220. So they are super-critical, i.e., MA > Mcrit, where the critical Mach Number is Mcirt \u223c1 \u22121.5 for high beta plasma, and some ions are re\ufb02ected specularly at the shock ramp, independent of the obliquity angle [30]. In the foreshock region, some of incoming ions and electrons are re\ufb02ected upstream, and the drift between incoming and re\ufb02ected particles may excite plasma waves via various micro-instabilities, depending on the shock parameters. For low beta plasma (\u03b2p \u22721), at high MA quasiperpendicular shocks (MA \u2273p\u03b2pmp/me/2) the Buneman instability is known to excite electrostatic waves, leading to the shock-sur\ufb01ng-acceleration of electrons in the shock foot [31]. For low MA quasi-perpendicular shocks (MA \u2272 pmp/me/2), on the other hand, the modi\ufb01ed two stream instability could generate oblique whistler waves, which result in the pre-heating of thermal electrons to a \u03ba-like suprathermal distribution [32]. 4. Electron Acceleration at Cosmological Shocks Plasma kinetic processes govern the preacceleration of electrons in the shock transition zone, which leads to the injection of CR electrons to the Fermi I process. 
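A quick check of the two regime boundaries quoted in the preceding paragraph against the internal ICM shocks discussed earlier (MA ≲ 20); the plasma beta βp = 50 used below is an assumption taken from the range the text quotes for the ICM.

```python
import numpy as np

mp_over_me = 1836.15
beta_p = 50.0   # assumed representative ICM plasma beta (text quotes ~50-100)

MA_buneman  = np.sqrt(beta_p * mp_over_me) / 2.0   # MA above this: Buneman regime
MA_whistler = np.sqrt(mp_over_me) / 2.0            # MA below this: whistler/MTSI regime

print(f"Buneman regime for MA >~ {MA_buneman:.0f}")        # ~150
print(f"Whistler/MTSI regime for MA <~ {MA_whistler:.0f}")  # ~21
# Internal ICM shocks with MA <~ 20 thus fall in the whistler/MTSI branch.
```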
Figure 4 shows thermal and suprathermal distributions of electrons and protons for the gas with kT \u22484.3 keV. \fH. Kang / Nuclear and Particle Physics Proceedings 00 (2021) 1\u20138 4 Figure 4: Momentum distribution, p3 f(p), of electrons and protons for the gas with kT \u22484.3 keV in the case of the \u03ba-distributions with \u03ba = 2, 3, 5, and 10. The Maxwellian distributions are shown in black solid lines. The vertical lines indicate the range of the injection momentum of pinj = (3.5 \u22124) pth,p above which particles can be injected into the DSA process [33]. The particle momentum should be greater than a few times the postshock thermal proton momentum (pth,p) to cross the shock transition. So thermal electrons with pth,e = pth,p pme/mp need to be pre-accelerated to the injection momentum, pinj \u223c3.5pth,p, before they can start participating to the full DSA process [34]. Such injection from the thermal Maxwellian pool is expected to be very ine\ufb03cient, especially at low Mach number shocks, and depend very sensitively on the shock Mach number. But if there are suprathermal electrons with the \u03ba-like power-law tail, instead of the Maxwellian distribution, the injection and acceleration of electrons can be enhanced greatly even at weak cluster shocks [33]. As illustrated in Figure 4, the particle injection \ufb02ux at pinj is larger for a \u03ba-distribution with a smaller value of \u03ba. So the development of a \u03ba-like suprathermal distribution is critical in the electron acceleration via DSA. In the case of low MA quasi-perpendicular shocks in the high beta ICM plasma, some incoming electrons are mirror re\ufb02ected at the shock ramp and gain energy via multiple cycles of SDA, while protons can go through a few SDA cycles with only minimal energy gains [29]. In the foreshock of such weak shocks, the electron \ufb01rehose instability induces oblique magnetic waves, which in turn provide e\ufb03cient scattering necessary to energize the thermal electrons to suprathermal energies, leading to e\ufb03cient injection to the DSA process. This picture is consistent with the observational fact that the magnetic \ufb01eld obliquity is typically quasi-perpendicular at giant radio relics such as the Sausage relic [9], and the double relic in the cluster PSZ1 G108.18 [35]. Radio relics are di\ufb00use radio structures detected in the outskirts of merging galaxy clusters. Their observed properties can be best understood by synchrotron emission from relativistic electrons accelerated at mergerdriven shocks: elongated morphologies over \u223c2 Mpc, spectral aging across the relic width (behind the putative shock), integrated radio spectra of a power-law form with gradual steepening above \u223c2 GHz, and high polarization levels [9, 10, 11, 36]. The sonic Mach number of a relic shock can be estimated from either radio or X-ray observations, using the radio spectral index relation, \u03b1sh = (M2 rad + 3)/2(M2 rad \u2212 1), or the X-ray temperature jump condition, T2/T1 = (M2 X + 3)(5M2 X \u22121)/16/M2 X, respectively. In some radio relics, the two estimates are di\ufb00erent, i.e., MX < Mrad, indicating that the simple DSA origin of radio relics might not explain the observed properties [37]. For example, MX \u22481.2 \u22121.5 and Mrad \u22483.0 for the Toothbrush relic [38], while MX \u22482.7 and Mrad \u22484.6 for the Sausage relic [9, 39]. 
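Both Mach number diagnostics quoted above can be inverted in a few lines; the temperature-jump inversion below is standard Rankine-Hugoniot algebra for γ = 5/3 (our own rearrangement, so treat it as a sketch rather than the authors' code).

```python
import numpy as np

def alpha_from_M(M):
    """Radio (injection) spectral index: alpha = (M^2 + 3) / (2 (M^2 - 1))."""
    return (M**2 + 3.0) / (2.0 * (M**2 - 1.0))

def M_from_alpha(alpha):
    """Invert the relation above for M_rad."""
    return np.sqrt((2.0 * alpha + 3.0) / (2.0 * alpha - 1.0))

def M_from_Tjump(T_ratio):
    """Invert T2/T1 = (M^2 + 3)(5 M^2 - 1) / (16 M^2) for M_X (gamma = 5/3)."""
    a = 16.0 * T_ratio - 14.0
    return np.sqrt((a + np.sqrt(a * a + 60.0)) / 10.0)

print("alpha at Mrad = 3.0, 4.6:", alpha_from_M(3.0), alpha_from_M(4.6))
print("Mrad for alpha = 0.75:", M_from_alpha(0.75))
print("M_X for T2/T1 = 1.5:", M_from_Tjump(1.5))   # ~1.5, a Toothbrush-like jump
```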
Such discrepancy could be explained by the two following scenarios based on DSA: (1) injection-dominated model in which Ms \u2248Mrad and MX is under-estimated due to projection e\ufb00ects in Xray observation [40], and (2) reacceleration-dominated model in which preexisting electrons with a \ufb02at energy spectrum is reaccelerated by a weak shock with Ms \u2248MX [41, 42]. Figure 5 illustrates that such two viable scenarios, albeit with di\ufb00erent sets of model parameters, could reproduce the observed surface brightness and spectral index pro\ufb01les of the Toothbrush relic [38]. Using structure formation simulations, [40] carried out mock observations of radio relic shocks detected in simulated clusters and showed that X-ray observations are inclined to detect weaker shocks due to projection e\ufb00ects, while radio observations tend to observe stronger shocks with \ufb02atter radio spectra. This naturally supports the injection-dominated model, in which MX tends to be smaller than Mrad for a given radio relic. The ICM is thought to contain fossil relativistic electrons left over from tails and lobes of extinct AGNs. Mildly relativistic electrons with \u03b3e \u2272102 survive for long periods of time, since the cooling time scale of electrons in B \u223c1 \u00b5G is trad \u22481010 yr \u00b7 (102/\u03b3e). They could provide seed electrons to the DSA process, which alleviates the low injection/acceleration e\ufb03ciency problem at weak cluster shocks in the case of the injectiondominated model. If we conjecture that radio relics form when the ICM shocks encounter fossil mildly relativistic electrons with \u03b3e \u2272102, then the model may explain why only about 10 % of merging clusters contain radio relics [42]. The so-called infall shocks form in the cluster outskirts when the WHIM from adjacent \ufb01laments pene\fH. Kang / Nuclear and Particle Physics Proceedings 00 (2021) 1\u20138 5 Figure 5: DSA modeling for the Toothbrush relic: reaccelerationdominated model with a Ms \u22481.6 shock (left panels) and injectiondominated model with a Ms = 3.0 shock (right panels). Radio \ufb02ux density S \u03bd at 150 MHz (top) and at 610 MHz (middle), and the spectral index \u03b1610 150 between the two frequencies (bottom) are plotted as a function of the projected distance behind the shock (relic edge), R(kpc) [42]. The magenta dots are the observational data of the head portion of the Toothbrush relic [38]. trates deeply into the ICM [43]. They have relatively high Mach numbers (Ms \u22733) and large kinetic energy \ufb02uxes, so they could contribute to a signi\ufb01cant fraction of CR production in clusters with actively infalling \ufb01laments. So some radio relics with relatively \ufb02at radio spectra found in the cluster outskirts could be explained by these energetic infall shocks. Although there still remain a few puzzles regarding the DSA origin of radio relics, it is well received that shocks should be induced in merging galaxy clusters and radio relics could be radiative signatures of relativistic electrons accelerated at those shocks [11, 12]. 5. Proton Acceleration at Cosmological Shocks In the precursor of quasi-parallel shocks, CR protons streaming ahead of the shock are known to excite both resonant and non-resonant waves and amplify the turbulent magnetic \ufb01elds by orders of magnitude [44, 26]. 
According to hybrid simulations by [27], the CR proton acceleration is e\ufb03cient only for quasi-parallel shocks with \u0398Bn \u227245\u25e6, and about 6 \u221210% of the postshock energy is transferred to CR proton energy for shocks with MA \u223c5 \u221210. For quasi-perpendicular shocks, on the other hand, protons go through only a few cycles of SDA before they advect downstream away from the shock. So scattering waves are not self-generated in the preshock region, and thus the CR proton acceleration is very ine\ufb03cient. Note that for these simulations, \u03b2p \u223c1 and MA \u223cMs, so the quantitative estimates for the CR acceleration e\ufb03ciency may be di\ufb00erent for the ICM shocks in high beta plasmas. Based on cosmological simulations with magnetic \ufb01elds, the shock obliquity angle is expected to have the random orientation in the ICM and at surrounding accretions shocks [19]. Then the probability distribution function for the obliquity angle scales as P(\u0398Bn) \u221dsin \u0398Bn, so only \u223c30% of all cosmological shocks have the quasi-parallel con\ufb01guration and accelerate CR protons [45]. Using cosmological hydrodynamic simulations, the \u03b3-ray emission from galaxy clusters have been estimated by modeling the production of CR protons and electrons at structure formation shocks in several studies [e.g., 46, 47, 48, 45]. Inelastic collisions of shockaccelerated protons with thermal protons produce neutral pions, which decay into \u03b3-ray photons (hadronic origin) [46]. Inverse Compton upscattering of the cosmic background radiation by shock-accelerated primary CR electrons and by secondary CR electrons generated by decay of charged pions also provides \u03b3-ray emission (leptonic origin) [49]. It has been shown that the hadronic \u03b3-ray emission is expected to dominate over the leptonic contribution in the central ICM within the virial radius [e.g., 48]. The key parameters in predicting the \u03c00 decay \u03b3ray emission are the CR proton acceleration e\ufb03ciency, \u03b7(Ms), de\ufb01ned as the ratio of the CR energy \ufb02ux to the shock kinetic energy \ufb02ux, and the volume-averaged ratio of the CR to thermal pressure in the ICM, \u27e8XCR\u27e9= \u27e8PCR\u27e9/\u27e8Pth\u27e9[15, 50, 48]. Adopting the DSA e\ufb03ciency model in which \u03b7 \u22480.1 for Ms \u22483 given in [51], for example, [48] estimated that \u27e8XCR\u27e9\u22480.02 for Coma-like clusters. In [50], in which a thermal-leakage injection model was implemented to DSA simulations, the e\ufb03ciency is estimated to be \u03b7 \u22480.01 \u22120.1 for Ms \u22483 \u22125 shocks. Note that in this DSA model the e\ufb03ciency depends sensitively on the assumed injection model as well as Ms. Recently, [45] tested several di\ufb00erent prescriptions for the DSA e\ufb03ciency by comparing the \u03b3-ray \ufb02ux from simulated clusters with the Fermi-LAT upper-limit \ufb02ux levels of observed clusters. Even with the relatively less e\ufb03cient model based on the hybrid simulation results of [27], in which \u03b7 \u22480.05 for MA = 5 quasi-parallel shocks, and the consideration of the random magnetic \ufb01eld directions, they \ufb01nd that about 10-20 % of simu\fH. Kang / Nuclear and Particle Physics Proceedings 00 (2021) 1\u20138 6 Figure 6: Physical processes and observational signatures expected to operate at structure formation shocks lated clusters have the predicted \u03b3-ray \ufb02ux levels above the Fermi-LAT upper limits. 
So the authors suggested that only if \u03b7 \u226410\u22123 for all Mach number shocks, which results in the average value of \u27e8XCR\u27e9\u22720.01 in the ICM, the predicted \u03b3-ray \ufb02uxes from simulated clusters can stay below the Fermi-LAT upper limits. This agrees with the conclusion of [52], which predicted \u27e8XCR\u27e9\u22720.0125 \u22120.014 based on the analysis of four year Fermi-LAT data. Non-detection of \u03b3-ray emission from galaxy clusters might be explained, if the CR proton acceleration is much less e\ufb03cient than expected in the current DSA theory (i.e. \u03b7 \u227210\u22123 for Ms \u223c3). In that regard, the proton acceleration at weak shocks in the low density, high beta ICM plasma needs to be investigated further, since so far most of hybrid/PIC plasma simulations have focused on strong shocks in \u03b2p \u22721 ISM and solar wind plasma. Finally, armed with our new understandings based on the recent plasma hybrid simulations [27, 28], it is worth examining if strong accretions shock can accelerate CR protons to ultra-high energies. The protons are expected to be accelerated e\ufb03ciently via DSA only in the quasiparallel portion of the outermost surfaces encompassed with accretion shocks. There magnetic \ufb01elds could be ampli\ufb01ed via Bell\u2019s non-resonant hybrid instability by a factor of B/B0 \u221d\u221aMA [28], where B0 \u223c0.01\u00b5G and MA \u223c300. So it is reasonable to assume the magnetic \ufb01eld strength at external accretion shocks is B \u223c0.1\u00b5G, about one order of magnitude smaller than that typically adopted in the previous studies [e.g., 2, 3]. Considering the photo-pair energy losses, protons can be accelerated up to Ep,max \u223c1018 eV at quasi-parallel accretion shocks (see Figure 1). 6. Summary 1. Astrophysical plasmas consist of both thermal and CR particles that are closely coupled with permeating magnetic \ufb01elds and underlying turbulent \ufb02ows. So understanding the complex network of physical interactions among these components, especially in the high beta collisionless ICM plasma, is crucial to the study of the particle acceleration at structure formation shocks (see Figure 6). 2. Gravitational energy associated with hierarchical clustering of the large-scale-structures must be dissipated at structure formation shocks into several di\ufb00erent forms: heat, CRs, turbulence and magnetic \ufb01elds [14]. 3. The vorticity generated by curved shocks decays into turbulence behind the shock, which in turn cascades into MHD/plasma waves in a wide range of scales and amplify magnetic \ufb01eld via turbulence dynamo [19]. 4. There is growing observational evidence indicating the presence of weak shocks, relativistic electrons, \fH. Kang / Nuclear and Particle Physics Proceedings 00 (2021) 1\u20138 7 microgauss level magnetic \ufb01elds, and turbulence in the ICM of galaxy clusters [12]. 5. CR protons are expected to be accelerated mainly at quasi-parallel shocks. For weak internal shocks (Ms \u22723) with high kinetic energy \ufb02uxes that form in the ICM, the CR proton acceleration e\ufb03ciency is likely to be \u03b7 < 0.01 in order to explain the nondetection of \u03b3-ray emission from galaxy clusters due to inelastic p-p collisions in the ICM [45]. 6. At quasi-parallel portion of strong external accretion shocks, CR protons could be accelerated to \u223c1018 eV, if the preshock magnetic \ufb01elds can be ampli\ufb01ed to \u223c0.1\u00b5G via CR streaming instabilities [3, 27]. 7. 
CR electrons are expected to be accelerated preferentially at quasi-perpendicular shocks [29]. Radio relics detected in the outskirts of merging clusters seem to reveal radiative signatures of relativistic electrons accelerated at merger-driven shocks mostly with Ms \u223c2 \u22123 [41]. 8. The injection of protons and electrons from thermal or suprathermal populations to the DSA process at collisionless shocks involves plasma kinetic processes such as excitation of waves by microinstabilities as well as shock drift acceleration and shock sur\ufb01ng acceleration [33]. During the last decade signi\ufb01cant progress been made in that front through PIC/hybrid plasma simulations of nonrelativistic shocks [27, 29]. 7. Acknowledgements This work was supported by the National Research Foundation of Korea through grants NRF2014R1A1A2057940 and NRF-2016R1A5A1013277. The author would like to thank D. Ryu for helpful comments on the paper." + }, + { + "url": "http://arxiv.org/abs/1703.00171v2", + "title": "Shock Acceleration Model for the Toothbrush Radio Relic", + "abstract": "Although many of the observed properties of giant radio relics detected in\nthe outskirts of galaxy clusters can be explained by relativistic electrons\naccelerated at merger-driven shocks, significant puzzles remain. In the case of\nthe so-called Toothbrush relic, the shock Mach number estimated from X-ray\nobservations ($M_{\\rm X}\\approx1.2-1.5$) is substantially weaker than that\ninferred from the radio spectral index ($M_{\\rm rad}\\approx2.8$).Toward\nunderstanding such a discrepancy, we here consider the following diffusive\nshock acceleration (DSA) models:(1) weak-shock models with $M_{\\rm s}\\lesssim\n2$ and a preexisting population of cosmic-ray electrons (CRe) with a flat\nenergy spectrum,and (2) strong-shock models with $M_{\\rm s}\\approx3$ and either\nshock-generated suprathermal electrons or preexisting fossil CRe. We calculate\nthe synchrotron emission from the accelerated CRe, following the time evolution\nof the electron DSA, and subsequent radiative cooling and postshock turbulent\nacceleration (TA). We find that both models could reproduce reasonably well the\nobserved integrated radio spectrum of the Toothbrush relic, but the observed\nbroad transverse profile requires the stochastic acceleration by downstream\nturbulence, which we label \"turbulent acceleration\" or TA to distinguish it\nfrom DSA. Moreover, to account for the almost uniform radio spectral index\nprofile along the length of the relic, the weak-shock models require a preshock\nregion over 400~kpc with a uniform population of preexisting CRe with a high\ncutoff energy ($\\gtrsim 40$ GeV). Due to the short cooling time, it is\nchallenging to explain the origin of such energetic electrons. Therefore, we\nsuggest the strong-shock models with low-energy seed CRe ($\\lesssim 150$~MeV)\nare preferred for the radio observations of this relic.", + "authors": "Hyesung Kang, Dongsu Ryu, T. W. Jones", + "published": "2017-03-01", + "updated": "2017-06-05", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Some galaxy clusters contain di\ufb00use, peripheral radio sources on scales as large as \u223c2 Mpc in length, called \u2018giant radio relics\u2019 (see, e.g., Feretti et al. 2012; Br\u00a8 uggen et al. 2012; Brunetti & Jones 2014, for reviews). 
Typically they show highly elongated morphologies, radio spectra relatively constant along the length of the relic, but steepening across the width, and high linear polarization (En\u00dflin et al. 1998; van Weeren et al. 2010). Moreover, they have integrated radio spectra with a power-law form at low frequencies, but that steepen above gigaherts frequencies (Stroe et al. 2016). Previous studies have demonstrated that such observational features can often be understood as synchrotron emission from \u227310 GeV electrons in \u223c\u00b5G magnetic \ufb01elds, accelerated via di\ufb00usive shock acceleration (DSA) at merger-driven shock waves in the cluster periphery (e.g., Kang et al. 2012). Yet, signi\ufb01cant questions remain in the merger-shock DSA model of radio relics. Three of the troublesome issues are (1) low DSA e\ufb03ciencies predicted for electrons injected in situ and accelerated at weak, Ms \u22723, shocks that are expected to form in merging clusters (e.g., Ryu et al. 2003; Kang et al. 2012); (2) inconsistencies of the X-ray based shock strengths with radio synchrotronbased shock strengths with the X-ray measures typically indicating weaker shocks (e.g., Akamatsu & Kawahara 2013; Ogrean et al. 2014); and (3) a low fraction (\u227210 %) of observed merging clusters with detected radio relics (e.g., En\u00dflin & Gopal-Krishna 2001; Kang 2016a). According to structure formation simulations, the mean separation between shock surfaces is \u223c1 Mpc, and the mean lifetime of intracluster medium (ICM) shocks is tdyn \u223c1 Gyr (e.g., Ryu et al. 2003; Pfrommer et al. 2006; Skillman et al. 2008; Vazza et al. 2009). So, actively merging clusters are expected to contain several shocks, and we might actually expect multiple radio relics in typical systems. Some of these di\ufb03culties could be accounted for in a scenario in which a shock may light up as a radio relic only when it encounters a preexisting cloud of fossil relativistic electrons in the ICM (e.g., En\u00dflin 1999; Kang & Ryu 2015). Here, we focus on issue (2) above. To set up what follows, we note that in the test-particle DSA model for a steady, planar shock, nonthermal electrons that are injected in situ and accelerated at a shock of sonic Mach number Ms form a power-law momentum distribution, fe(p, rs) \u221dp\u2212q with q = 4M 2 s /(M 2 s \u22121) (Drury 1983). Based on this result and the relation \u03b1sh = (q \u22123)/2 between q and the synchrotron spectral index at the shock, \u03b1sh (with j\u03bd \u221d\u03bd\u2212\u03b1), the Mach number of the hypothesized relic-generating shock is then commonly inferred from its radio spectral index using the relation \u03b1sh = (M 2 rad + 3)/2(M 2 rad \u22121). On the other hand, the shock Mach number can be also estimated from the temperature discontinuity obtained from X-ray observations, using the shock jump condition, T2/T1 = (M 2 X +3)(5M 2 X \u22121)/16/M 2 X, where the subscripts, 1 and 2, identify the upstream and downstream states, respectively. Sometimes, if the temperature jump is poorly constrained, MX is assessed from an estimate of the density jump, \u03c3 = \u03c12/\u03c11 = 4M 2 X/(M 2 X + 3). Although the radio and X-ray shock measures can agree, sometimes the synchrotron index, \u03b1sh, implies a signi\ufb01cantly higher Mach number, Mrad, than MX. 
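The Mach numbers quoted throughout follow from the test-particle DSA slope and the Rankine-Hugoniot jump conditions stated above; the short Python sketch below (ours, with illustrative example values) collects those relations in one place.

import numpy as np

def mach_from_alpha(alpha_sh):
    """Radio Mach number from the injection spectral index:
    alpha_sh = (M^2 + 3) / (2 (M^2 - 1)), equivalent to q = 4 M^2/(M^2 - 1), alpha_sh = (q - 3)/2."""
    return np.sqrt((2.0 * alpha_sh + 3.0) / (2.0 * alpha_sh - 1.0))

def temperature_jump(M):
    """Rankine-Hugoniot temperature jump T2/T1 for a gamma = 5/3 gas."""
    return (M**2 + 3.0) * (5.0 * M**2 - 1.0) / (16.0 * M**2)

def density_jump(M):
    """Shock compression ratio sigma = rho2/rho1 = 4 M^2 / (M^2 + 3)."""
    return 4.0 * M**2 / (M**2 + 3.0)

# Example numbers (ours): alpha_sh ~ 0.6-0.8 implies M_rad ~ 4.6-2.8, while a
# mild temperature or density jump corresponds to a much weaker M_X.
print(mach_from_alpha(0.8))      # ~2.8
print(temperature_jump(1.5))     # ~1.5, a modest T jump
print(density_jump(1.5))         # ~1.7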
Without subsequent, downstream acceleration, the effects of synchrotron and inverse Compton (iC) \u201ccooling\u201d ( \u02d9 p \u221d\u2212p2) will truncate the postshock electron spectrum above energies that drop with increasing distance from the shock, since cooling times for \u227310 GeV electrons under cluster conditions are generally < 100 Myr (e.g., Brunetti & Jones 2014). That is the standard explanation for observed spectral steepening across the width of the relic (downstream of the shock). For reference, we note that if the shock is steady and planar, and the postshock magnetic \ufb01eld is uniform, this energy loss prescription translates into an integrated synchrotron spectral index, \u03b1int = \u03b1sh + 1/2 (Heavens & Meisenheimer 1987). The spectral index along the northern, \u201cleading\u201d edge of the head portion (B1) of the so-called Toothbrush relic in the merging cluster 1RXS J060303.3 at z = 0.225 is estimated to be \u03b1sh \u22480.8 (q \u22484.6) with the corresponding radio Mach number Mrad \u22482.8 (van Weeren et al. 2016). But the gas density jump along the same edge in B1 inferred from X-ray observations implies a much weaker shock with MX \u22481.2 \u22121.5 (van Weeren et al. 2016). The associated radio index, \u03b1sh \u223c2 \u22125 (q \u223c7 \u221213), is much too steep to account for the observed radio spectrum. Toward understanding this discrepancy between Mrad and MX for the Toothbrush relic, we here consider the following two scenarios for modeling the radio observations of this relic: (1) weak-shock models (Ms \u2272 2) with \ufb02at-spectrum, preexisting cosmic-ray elections (CRe), and (2) strong-shock models (Ms \u22483) with low-energy seed CRe. In the weak-shock models, we adopt a preshock, preexisting population of CRe with the \u201cright\u201d power-law slope, for example, fpre(p) \u221d p\u2212s exp[\u2212(p/pe,c)2] with s = 2\u03b1sh + 3 \u223c4.4, where pe,c/mec > 104 is an e\ufb00ective cuto\ufb00to the spectrum. In the strong-shock models, on the other hand, Ms \u2248Mrad is chosen to match the observed radio spectral index, and low-energy seed CRe (p/mec \u223c30) are assumed to \fShock Acceleration Model for Radio Relics 3 come from either the suprathermal tail population generated at the shock or a preexisting fossil population. In the weak-shock models, the value for pe,c is critical, since the observed emissions at frequencies \u2273100 MHz generally come from electrons with 10 GeV or higher energies (p/mec \u2273104). So, if pe,c/mec \u226a104, the preexisting electron population provides just \u201clow-energy seed electrons\u201d to the DSA process, which for Ms \u22722 would still lead to \u03b1sh \u22731.2, and cannot produce the observed radio spectrum with \u03b1sh \u22480.8. Hence, the weakshock models with Ms \u22722 can reproduce the observed spectral index pro\ufb01le of the Toothbrush relic only with pe,c/mec \u223c7 \u22128 \u00d7 104 (Kang 2016a). Consequently, in order to explain the fact that Mrad > MX for the Toothbrush or similar relics by the weak-shock models, one should adopt a potentially radio-luminous, preexisting electron population with pe,c/mec \u226b104. Preshock radio emission might be observable in this case, of course, unless the shock has already swept through the region containing fossil electrons, so no preshock electrons remain. 
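To make concrete why the cutoff Lorentz factor matters here, one can use the standard characteristic synchrotron frequency, νc ≈ (3/2) γe^2 νg sin α with νg ≈ 2.8 Hz (B/µG); the relation and the example values in the sketch below are ours, not taken from the paper.

def nu_crit_GHz(gamma_e, B_muG, sin_alpha=1.0):
    """Characteristic synchrotron frequency nu_c = 1.5 * gamma^2 * nu_g * sin(alpha),
    with gyrofrequency nu_g ~ 2.8 Hz per microgauss of field."""
    nu_g_Hz = 2.8 * B_muG
    return 1.5 * gamma_e**2 * nu_g_Hz * sin_alpha / 1.0e9

for gamma_e in (3.0e3, 1.0e4, 3.0e4):
    print(f"gamma_e = {gamma_e:.0e}: nu_c ~ {nu_crit_GHz(gamma_e, B_muG=2.5):.2f} GHz")
# gamma_e ~ 1e4 gives nu_c ~ 1 GHz for B ~ 2.5 microgauss, so a preexisting
# population with a cutoff well below gamma_e ~ 1e4 cannot directly supply
# the observed >~ 100 MHz emission.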
Such a requirement for large pe,c, however, poses a question about the origin of these preexisting CRe, since the electrons with p/mec \u223c8 \u00d7 104 in a \u00b5G \ufb01eld cool on a brief time scale of trad \u223c10 Myr. In the B1 portion of the Toothbrush relic, the spectral index is observed to be uniform over 400 kpc along its leading edge. This means that in the weak-shock models the length scale of the preshock region containing preexisting CRe with a uniformly \ufb02at spectrum with large pe,c should be as long as 400 kpc, tangential to the shock surface. Moreover, since the shock compression ratio is \u03c3 \u22732 for Ms \u22731.5, while the observed radio width of the head is at least 150 kpc, the width of this uniform preshock region should be \u2273300 kpc along the shock normal direction. Considering the short cooling times for high-energy electrons, it should be di\ufb03cult to explain the origin of such a uniform cloud of preexisting CRe by fossil CRe that were deposited in the past by an active galactic nucleus (AGN) jet, for instance, unless the e\ufb00ective electron dispersion speed across the preshock structure was \u22730.1c, or there was a uniformly e\ufb00ective turbulent acceleration (TA) in e\ufb00ect across that volume. In the strong-shock models with Ms \u22483, on the other hand, the challenge to account for the uniform spectrum at the relic edge becomes less severe, since the models require only low-energy seed CRe (p/mec \u223c30) that could be provided by either the shock-generated suprathermal electrons or preexisting fossil CRe (p/mec \u2272300) with long cooling times (trad \u22733.5 Gyr). In the latter case, the additional requirement for low-energy preexisting CRe to enhance the radio emission may explain why only a fraction of merger shocks can produce radio relics. Such low-energy CRe may originate from previous episodes of shock or turbulence acceleration or AGN jets in the ICM (e.g., En\u00dflin 1999; Pinzke et al. 2013). According to simulations for the large-scale structure formation of the universe, the surfaces of mergerdriven shocks responsible for radio relics are expected to consist of multiple shocks with di\ufb00erent Ms (see, e.g., Skillman et al. 2008; Vazza et al. 2009). From mock Xray and radio observations of relic shocks in numerically simulated clusters, Hong et al. (2015) showed that the shock Mach numbers inferred from an X-ray temperature discontinuity tend to be lower than those from radio spectral indices. This is because X-ray observations pick up the part of shocks with higher shock energy \ufb02ux but lower Ms, while radio emissions come preferentially from the part with higher Ms and so higher electron acceleration. In the strong-shock models, we assume that the B1 portion of the Toothbrush relic represents a portion of the shock surface with Ms \u22483, extending over 400 kpc along the length of the relic. It is important to note that the transverse width across the B1 component of the Toothbrush relic is about two times larger than that of another wellstudied radio relic, the so-callled Sausage relic in CIZA J2242.8+5301. The FWHM at 610 MHz, for example, is about 110 kpc for the Toothbrush relic (van Weeren et al. 2016), while it is about 55 kpc for the Sausage relic (van Weeren et al. 2010). 
For the high-frequency radio emission from electrons with p/mec ∼ 10^4 that are radiatively cooled downstream of the shock, the characteristic width of the relic behind a spherical shock is
\Delta l_{\nu} \approx 120\,{\rm kpc}\left(\frac{u_2}{10^3\,{\rm km\,s^{-1}}}\right)\cdot Q\cdot\left[\frac{\nu_{\rm obs}(1+z)}{0.61\,{\rm GHz}}\right]^{-1/2},   (1)
where u2 is the flow speed immediately downstream of the shock and z is the redshift of the host cluster (Kang 2016b). We will argue below that relic-producing merger shocks are better described as spherical than planar in geometry. Note, then, that since the downstream flow speed in the shock rest frame increases behind a spherical shock, the advection length over a given time scale is somewhat longer than that estimated for a planar shock (Donnert et al. 2016). The factor Q depends on the postshock magnetic field strength, B2, as
Q(B_2, z) \equiv \left[\frac{(5\,\mu{\rm G})^{2}}{B_2^{2} + B_{\rm rad}(z)^{2}}\right]\left(\frac{B_2}{5\,\mu{\rm G}}\right)^{1/2},   (2)
where B2 and Brad = 3.24 µG (1 + z)^2 are expressed in units of µG. The factor Q evaluated for z = 0.225, for instance, peaks at Qmax ≈ 0.6 for B2 ≈ 2.8 µG. Then, with u2 ≈ 10^3 km s^-1, the maximum width at 610 MHz becomes Δlν ≈ 65 kpc. Being only about half the observed width of the B1 region of the Toothbrush relic at this frequency, this seems too small for the relic width to be set by radiative cooling alone following acceleration at the shock surface. To overcome such a mismatch, we here consider and include the process in which electrons are additionally accelerated stochastically by MHD/plasma turbulence behind the shock, that is, TA. Along somewhat similar lines, Fujita et al. (2015) recently suggested that radio spectra harder than predicted by DSA at a weak shock could be explained if relativistic electrons are reaccelerated through resonant interactions with strong Alfvénic turbulence developed downstream of the relic shock. However, on small scales, Alfvénic MHD turbulence is known to become highly anisotropic, so resonant scattering is weak and ineffective at particle acceleration (e.g., Brunetti & Lazarian 2007). On the other hand, fast-mode compressive turbulence remains isotropic down to dissipation scales, so it has become favored in treatments of stochastic reacceleration of electrons producing radio halos during cluster mergers (Brunetti & Lazarian 2007, 2011). We emphasize, at the same time, that solenoidal turbulence, likely to be energetically dominant on large scales, could still play a reacceleration role through turbulent magnetic reconnection (e.g., Brunetti & Lazarian 2016) or the generation of small-scale slow-mode MHD waves that might interact resonantly with CRe (e.g., Lynn et al. 2014). In our study we do not rely on TA to produce a flat electron spectrum at the shock, but rather explore its potential role as an effective means of slowing energy loss downstream of the shock. In the next section, we describe our numerical simulations incorporating both DSA and TA for shock-based models designed to explore this problem. In Section 3, our results are compared with the observations of the Toothbrush relic. A brief summary follows in Section 4.
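Before moving on, a numerical cross-check (added by us, not part of the original text) of Equations (1) and (2): scanning Q(B2, z = 0.225) over B2 recovers the quoted peak Qmax ≈ 0.6 near B2 ≈ 2.8 µG, and Equation (1) then gives the ≈ 65 kpc maximum width at 610 MHz for u2 ≈ 10^3 km s^-1.

import numpy as np

def Q_factor(B2_muG, z):
    """Q(B2, z) from Eq. (2); B_rad = 3.24 muG (1+z)^2 accounts for iC losses."""
    Brad = 3.24 * (1.0 + z)**2
    return (5.0**2 / (B2_muG**2 + Brad**2)) * np.sqrt(B2_muG / 5.0)

def relic_width_kpc(u2_kms, B2_muG, z, nu_obs_GHz):
    """Downstream cooling width from Eq. (1)."""
    return 120.0 * (u2_kms / 1.0e3) * Q_factor(B2_muG, z) * \
           (nu_obs_GHz * (1.0 + z) / 0.61)**-0.5

B2 = np.linspace(0.5, 10.0, 200)
Q = Q_factor(B2, z=0.225)
print(B2[np.argmax(Q)], Q.max())                   # ~2.8 muG, Q_max ~ 0.6
print(relic_width_kpc(1.0e3, 2.8, 0.225, 0.61))    # ~65 kpc at 610 MHz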
2. NUMERICAL CALCULATIONS 2.1. DSA Simulations for 1D Spherical Shocks According to cosmological simulations (e.g., Ryu et al. 2003; Vazza et al. 2009; Hong et al. 2014), the formation and evolution of cluster shocks can be quite complex and transient, with time scales of ≲ 1 Gyr, but the overall morphologies of shock surfaces can be represented by partial surfaces of spherical bubbles blowing outward. As in Kang & Ryu (2015), we here attempt to follow for ≲ 0.2 Gyr the evolution of a 1D spherical shock, which accounts for deceleration and adiabatic expansion behind the shock (see Section 2.4 for details). In our simulations, the diffusion-convection equation for a relativistic electron population is solved in 1D spherical geometry:
\frac{\partial g_e}{\partial t} + u\frac{\partial g_e}{\partial r} = \frac{1}{3r^2}\frac{\partial (r^2 u)}{\partial r}\left(\frac{\partial g_e}{\partial y} - 4 g_e\right) + \frac{1}{r^2}\frac{\partial}{\partial r}\left[r^2 \kappa(r,p)\frac{\partial g_e}{\partial r}\right] + p\frac{\partial}{\partial y}\left[\frac{D_{pp}}{p^3}\left(\frac{\partial g_e}{\partial y} - 4 g_e\right)\right] + p\frac{\partial}{\partial y}\left(\frac{b}{p^2}\, g_e\right),   (3)
where ge(r, p, t) = fe(r, p, t) p^4 is the pitch-angle-averaged phase space distribution function of electrons, r is the radial distance from the cluster center, and y ≡ ln(p/mec), with the electron mass, me, and the speed of light, c (Skilling 1975). The background flow velocity, u(r, t), is obtained by solving the usual gas dynamic conservation equations in the test-particle limit, where the nonthermal pressure is dynamically negligible. The spatial diffusion coefficient for relativistic electrons is assumed to have the following Bohm-like form:
\kappa(r, p) = \kappa_{*}\left(\frac{p}{m_e c}\right),   (4)
where κ* = kBohm · mec^3/(3eB) and kBohm ≥ 1, with the limiting value representing Bohm diffusion for relativistic particles. The electron energy loss term, b(p) = ṗCoul + ṗsync+iC, accounts for Coulomb scattering, synchrotron emission, and iC scattering off the cosmic microwave background (CMB) radiation (e.g., Sarazin 1999). For a thermal plasma with number density nth, the Coulomb cooling rate is ṗCoul = 3.3 × 10^-29 nth [1 + ln(γe/nth)/75], while the synchrotron-iC cooling rate is ṗsync+iC = 3.7 × 10^-29 (γe/10^4)^2 [(B/3.24 µG)^2 + (1 + z)^4] in cgs units, where z is the redshift. Hereafter, the Lorentz factor, γe = p/mec, will also be used for the relativistic energy. Note that Coulomb cooling was not considered in our previous studies of DSA modeling of radio relics (e.g., Kang et al. 2012; Kang & Ryu 2015; Kang 2016a,b). However, since ṗCoul ≳ ṗsync+iC for γe ≲ 100 in the cluster outskirts with nth ≈ 10^-4 cm^-3, while tCoul ∼ pe/ṗCoul ≲ Gyr for γe ≳ 10, Coulomb cooling can, in some cases, significantly affect the electron spectrum for γe ≲ 10^4 and also the ensuing radio emissivity at 0.1-1 GHz. The radiative cooling time due to synchrotron-iC losses is given by
t_{\rm rad}(\gamma_e) = 9.8 \times 10^{7}\,{\rm yr}\left[\frac{(5\,\mu{\rm G})^{2}}{B^{2} + B_{\rm rad}(z)^{2}}\right]\left(\frac{\gamma_e}{10^{4}}\right)^{-1}.   (5)
For B = 2.5 µG and z = 0.225, for example, trad ≈ 8.2 × 10^7 yr (γe/10^4)^-1. In order to explore the effects of stochastic acceleration by turbulence, TA, we include the momentum diffusion term and implement a Crank-Nicolson scheme for it in momentum space within the existing CRASH numerical hydrodynamics code (Kang & Jones 2006). Our simulations all assume a gas adiabatic index γg = 5/3.
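As a sanity check on the normalization of Equation (5) (an illustrative sketch of ours), the expression reproduces the quoted trad ≈ 8.2 × 10^7 yr for γe = 10^4, B = 2.5 µG, and z = 0.225, as well as the ∼10 Myr lifetime of γe ∼ 8 × 10^4 electrons and the multi-Gyr lifetime of γe ≲ 300 fossil electrons invoked earlier.

def t_rad_yr(gamma_e, B_muG, z):
    """Synchrotron + inverse-Compton cooling time, Eq. (5)."""
    Brad = 3.24 * (1.0 + z)**2
    return 9.8e7 * (5.0**2 / (B_muG**2 + Brad**2)) * (1.0e4 / gamma_e)

print(t_rad_yr(1.0e4, 2.5, 0.225))   # ~8.2e7 yr, as quoted in the text
print(t_rad_yr(8.0e4, 1.0, 0.225))   # ~1e7 yr: high-cutoff preexisting CRe fade quickly
print(t_rad_yr(3.0e2, 1.0, 0.225))   # a few Gyr: low-energy fossil CRe survive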
Any nonthermal pressures from CRe and magnetic \ufb01elds are dynamically insigni\ufb01cant in our models (see below), so they are neglected. The physical nature of the CRe momentum di\ufb00usion coe\ufb03cient Dpp is discussed in the following section. 2.2. Momentum Di\ufb00usion due to Turbulent Acceleration We pointed out in the Introduction that the recent observations of van Weeren et al. (2016) showed that (1) the transverse FWHMs of the B1 Toothbrush component are 140 kpc at 150 MHz and 110 kpc at 610 MHz, and (2) the spectral index between the two frequencies increases from \u03b1610 150 \u22480.8 at the northern edge to 1.9 at approximately 200 kpc to the south, toward the cluster center. While the systematic spectral steepening suggests postshock electron cooling, these widths are much broader than the cooling length given in Equation (1). In e\ufb00ect, the spectral steepening due to radiative cooling in the postshock region is inconsistent with the observed pro\ufb01les of radio \ufb02uxes and spectral index in this region, unless the e\ufb00ect of cooling is somehow substantially reduced (see Section 3). In response, we explore a scenario in which the postshock electrons gain energy from turbulent waves via Fermi II acceleration, TA, thus mitigating spectral steepening downstream. Turbulence accelerates particles stochastically; that is, if the characteristic momentum shift in a collision is \u2206p and the characteristic scattering time interval is \u2206t, then the resulting momentum di\ufb00usion coe\ufb03cient is Dpp \u223c(\u2206p)2/\u2206t. Since scattering events in turbulence typically lead to \u2206p \u221dp, a convenient general form is Dpp = p2 4 \u03c4acc , (6) where \u03c4acc \u223c(1/4)\u27e8(p/\u2206p)2\u2206t\u27e9is an e\ufb00ective acceleration time scale. If \u03c4acc is independent of momentum, this form with the factor 4 inserted into Equation (3) leads to \u03c4acc = \u27e8p\u27e9/(\u2202\u27e8p\u27e9/\u2202t), where \u27e8p\u27e9is the mean momentum of the distribution, fe(r, p, t). Generally speaking, TA in an ICM context can include nonresonant scattering o\ufb00compressive hydrodynamical (acoustic) turbulence (e.g., Ptuskin 1988), as well as gyro-resonant scattering o\ufb00Alfvenic turbulence (e.g., Fujita et al. 2015) and Landau (also known as Cerenkov or \u201ctransit time damping\u201d, TTD) resonance o\ufb00compressive MHD turbulence (with accompanying micro-instabilities to maintain particle isotropy; (e.g., Brunetti & Lazarian 2007, 2011; Lynn et al. 2014)). Resonant acceleration will most often be faster than nonresonant acceleration (e.g., Brunetti & Lazarian 2007; Miniati 2015). Alfvenic gyro-resonance involves turbulent wavelengths comparable to particle Larmour radii, which in ICM conditions for the CRe energies of interest will be sub-astronomical unit scale. While solenoidal turbulence may very well dominate the turbulence of interest (e.g., Porter et al. 2015) and, in the form of Alfven waves, probably cascades to su\ufb03ciently small scales (e.g., Kowal & Lazarian 2010), it should become highly anisotropic on small scales in ICM settings and thus very ine\ufb03cient in resonant scattering of CRe (e.g., Yan & Lazarian 2002)1. Fast-mode, compressive MHD turbulence should remain isotropic to dissipative scales, however. So, even though the magnetic energy in the waves of this mode will be relatively less, they can be much more e\ufb00ective accelerators. 
On these grounds, we adopt for our exploratory calculations a simple TA model based on TTD resonance with compressive, isotropic fast-mode MHD turbulence. Assuming that in the medium \u03b2p = P/PB \u226b1, where P is the plasma thermal pressure and PB = B2/(8\u03c0) is the magnetic pressure, we can then roughly express the acceleration time, \u03c4acc, as \u03c4acc \u223c \u0010 c a \u00112 1 \u27e8k\u27e9c P Wf . (7) Here, a is the acoustic wave speed, Wf is the total energy density in fast-mode turbulence (mostly contained in compressive \u201cpotential energy,\u201d but also including transverse magnetic \ufb01elds essential for resonant scattering). The term \u27e8k\u27e9measures the power-spectrumweighted mean wavenumber of the fast-mode turbulence (e.g., Brunetti & Lazarian 2007). For a power spectrum, Pf(k) \u221dk\u2212\u03b1, over the range 2\u03c0/L0 \u2264k \u2264 2\u03c0/\u2113d, with 3/2 \u2264\u03b1 \u22642 (e.g., Brunetti & Lazarian 2011) and \u27e81/k\u27e9= (L0/2\u03c0)H(\u03b1) with p \u2113d/L0 \u2264H \u2264 1/ ln L0/\u2113d. We can roughly estimate an outer scale, L0 \u223c100 kpc behind the shock of interest. The fast-mode dissipation scale, \u2113d, is uncertain and dependent on plasma collisionality, but it is likely to be 1 We mention for completeness a proposed alternate scenario in which solenoidal turbulence leads to fast magnetic reconnection and produces a hybrid, \ufb01rst-second order reacceleration process (Brunetii & Lazarian 2016). \f6 Kang, Ryu, & Jones less than \u223c1 kpc (e.g., Schekochihin & Crowley 2006; Brunetti & Lazarian 2007, 2011). Putting these together, we can estimate 1/(\u27e8k\u27e9c) \u223c104 yrs. With an acoustic speed, a \u223c103 km s\u22121, and an estimate Wf \u223c(1/10)P for shock-enhanced fast-mode turbulence in the immediate postshock \ufb02ow, we obtain a rough estimate of \u03c4acc,0 \u223c100 Myr near the shock. As a simple model allowing for decay of this turbulence behind the shock, we apply the form Wf \u221dexp [\u2212(rs \u2212r)/rdec] with rs > r, where rs is the radius of the spherical shock. So the TA time scale increases behind the shock as \u03c4acc = \u03c4acc,0 \u00b7 exp \u0014(rs \u2212r) rdec \u0015 (8) with, in most of our simulations, rdec \u2248100 kpc. 2.3. DSA Solutions at the Shock Since the time scale for DSA at the shock is much shorter than the cooling time scale for radio-emitting electrons (\u223c100 Myr), we assume that electrons are accelerated almost instantaneously to the maximum energy at the shock front. On the other hand, the minimum di\ufb00usion length scale to obtain converged solutions in simulations for Equation (3) is much smaller than the typical downstream cooling length of \u223c100 kpc. Taking advantage of such disparate scales, we adopt analytic solutions for the electron spectrum at the shock location as f(rs, p) = finj(p) or freacc(p), while Equation (3) is solved outside the shock. Here, finj(p) represents the electrons injected in situ and accelerated at the shock, while freacc(p) represents the reaccelerated electrons preexisting in the preshock region. So, basically we follow the energy losses and TA of electrons behind the shock, while the DSA analytic solutions are applied to the zone containing the shock. Note that shocks in CRASH are true discontinuities and tracked on sub-grid scales (Kang & Jones 2006). 
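For concreteness, the TA prescription of Equations (6) and (8) can be summarized in a few lines; the sketch below (ours) uses the fiducial τacc,0 = 100 Myr and rdec = 100 kpc quoted above and simply illustrates how quickly the acceleration time lengthens behind the shock.

import numpy as np

MYR = 1.0e6 * 3.156e7          # seconds per Myr

def tau_acc(dist_behind_shock_kpc, tau0_Myr=100.0, r_dec_kpc=100.0):
    """Turbulent-acceleration time, Eq. (8): tau = tau0 * exp[(r_s - r)/r_dec]."""
    return tau0_Myr * MYR * np.exp(dist_behind_shock_kpc / r_dec_kpc)

def D_pp(p, dist_behind_shock_kpc, **kwargs):
    """Momentum-diffusion coefficient, Eq. (6): D_pp = p^2 / (4 tau_acc);
    p may be in any consistent momentum unit."""
    return p**2 / (4.0 * tau_acc(dist_behind_shock_kpc, **kwargs))

# tau_acc lengthens from ~100 Myr at the shock to ~270 Myr at 100 kpc behind it,
# so TA mainly offsets radiative cooling within the first ~100 kpc.
for d in (0.0, 50.0, 100.0, 200.0):
    print(d, tau_acc(d) / MYR)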
Since we do not need to resolve the di\ufb00usive shock precursor or follow the DSA process in detail, this scheme allows us to use a much coarser grid, reducing dramatically the required computation time. The electron population injected in situ from the background plasma and accelerated by DSA at the shock is modeled as finj(rs, p) = fN \u0012 p pinj \u0013\u2212q exp \" \u2212 \u0012 p peq \u00132# , (9) where fN, q, pinj, and peq are the normalization factor, the standard test-particle DSA power-law slope, the injection momentum, and the cuto\ufb00momentum, respectively. The injection momentum roughly identi\ufb01es particles with gyro-radii large enough to allow a signi\ufb01cant fraction of them to recross the physical shock from downstream rather than being advected downstream (e.g., Gieseler et al. 2000; Kang et al. 2002; Caprioli et al. 2015). So particles with p > pinj are assumed to participate in the Fermi I acceleration process. According to the hybrid simulations by Caprioli & Spitkovsky (2014), pinj \u2248 (3 \u22123.5)pth,p for protons at quasi-parallel shocks, where pth,p = p 2mpkT2 is the proton thermal momentum and k is the Boltzmann constant. The electron injection to the DSA Fermi I process from the thermal pool is thought to be very ine\ufb03cient, since the momentum of thermal electrons (pth,e = \u221a2mekT2) is much smaller than pinj. Recent particlein-cell (PIC) simulations of quasi-perpendicular shocks by Guo et al. (2014), however, showed that some of the incoming electrons are specularly re\ufb02ected at the shock ramp and accelerated via multiple cycles of shock drift acceleration (SDA), resulting in a suprathermal, powerlaw-like tail. Those suprathermal electrons are expected to be injected to the full Fermi I acceleration and eventually accelerated to highly relativistic energies. Such a hybrid process combining specular re\ufb02ection with SDA and DSA between the shock ramp and upstream waves is found to be e\ufb00ective at both quasi-perpendicular and quasi-parallel collisionless shocks (Park et al. 2015; Sunberg et al. 2016). However, the injection momentum for electrons is not well constrained, since the development of the full DSA power-law spectrum extending to p/mec \u226b1 has not been established in the simulations due to severe computational requirements for these PIC plasma simulations. Here, we adopt a simple model in which the electron injection depends on the shock strength as pinj \u2248(6.4/\u03c3)mpus, in e\ufb00ect resulting in pinj \u223c150pth,e. For a smaller compression ratio, the ratio, pinj/mpus, is larger, so the injection becomes less e\ufb03cient. The factor fN in Equation (9) depends on the suprathermal electron population with p \u223cpinj in the background plasma. We assume that the background electrons are energized via kinetic plasma processes at the shock and form a suprathermal tail represented by a \u03ba distribution of \u03ba = 1.6\u22122.5, rather than a Maxwellian distribution. The \u03ba distribution is well motivated in collisionless plasmas such as those in ICMs, where nonequilibrium interactions can easily dominate for the distribution of suprathermal particles (Pierrard & Lazar 2010). It has a power-law-like high-energy tail, which asymptotes to the Maxwellian distribution for large \u03ba. The relatively large population of suprathermal particles enhances the injection fraction compared to the Maxwellian form (Kang et al. 2014). 
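As a sanity check on the adopted injection scale (a sketch of ours; the shock speed and compression ratio, us ∼ 3 × 10^3 km s^-1 and σ ∼ 3, are representative values), pinj ≈ (6.4/σ) mp us indeed corresponds to p/mec of a few tens.

M_P = 1.6726e-24      # proton mass [g]
M_E = 9.1094e-28      # electron mass [g]
C   = 2.9979e10       # speed of light [cm/s]

def p_inj_over_mec(u_s_kms, sigma):
    """Injection momentum p_inj ~ (6.4/sigma) m_p u_s, expressed in units of m_e c."""
    p_inj = (6.4 / sigma) * M_P * u_s_kms * 1.0e5
    return p_inj / (M_E * C)

# e.g., u_s ~ 3000 km/s and compression sigma ~ 3 (an M_s ~ 3 shock):
print(p_inj_over_mec(3.0e3, 3.0))   # ~40, i.e., p_inj/(m_e c) of a few tens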
This enhancement \fShock Acceleration Model for Radio Relics 7 is larger for smaller \u03ba. The injection e\ufb03ciency at the shock is also less sensitive to the shock Mach number, compared to that from the Maxwellian distribution. Note, however, that the suprathermal electron population and the injection rate do not a\ufb00ect signi\ufb01cantly the shapes of the radio-emitting electron energy spectrum and the ensuing radio synchrotron spectrum, so the adopted models for pinj and the \u03ba distribution do not in\ufb02uence the main conclusions of this study. The cuto\ufb00momentum in Equation (9) can be estimated from the condition that the DSA acceleration rate is equal to the synchrotron/iC loss rate: peq = \u03b3eqmec = m2 ec2us p 4e3q/27 B1 B2 e,1 + B2 e,2 !1/2 k\u22121 Bohm, (10) where us is the shock speed and B2 e = B2 + Brad(z)2 represents the e\ufb00ective magnetic \ufb01eld strength that accounts for both synchrotron and iC losses (Kang 2011). For typical parameters with us \u223c3 \u00d7 103 km s\u22121, B1 \u223c1 \u00b5G, and kBohm \u223c1, the cuto\ufb00momentum becomes peq/mec \u223c108, but the exact value is not important, as long as peq/mec \u226b104. If there is a preexisting, upstream electron population, fpre(p), the accelerated population at the shock is given by freacc(rs, p) = q \u00b7 p\u2212q Z p pinj p\u2032q\u22121fpre(p\u2032)dp\u2032 (11) (Drury 1983). In previous studies, the DSA of preexisting CR particles is commonly referred to as \u201creacceleration\u201d (e.g., Kang et al. 2012; Pinzke et al. 2013), so we label freacc as the \u201cDSA reaccelerated\u201d component. In contrast, finj in Equation (9) represents the DSA of the background suprathermal particles injected in situ at the shock, so we label it as the \u201cDSA injected\u201d component. We emphasize that our DSA reacceleration models involve irreversible acceleration of preexisting CRe, in contrast to the adiabatic compression models of En\u00dflin et al. (1998). In our simulations, the preshock electron population is assumed to have a power-law spectrum with exponential cuto\ufb00as follows: fpre(p) = fo \u00b7 p\u2212s exp \" \u2212 \u0012 p pe,c \u00132# , (12) where the slope s is chosen to match the observed radio spectral index. As mentioned in the Introduction, we adopt a large cuto\ufb00Lorentz factor, \u03b3e,c = pe,c/mec = 104 \u2212105, in the weak-shock models, while \u03b3e,c = 300 in the strong-shock models (see also Table 1). The normalization factor, fo, is arbitrary in the simulations, since the CR pressure is dynamically insigni\ufb01cant (that is, in the test-particle limit). Yet, it would be useful to parameterize it with the ratio of the CRe to the gas pressure in the preshock region, N \u2261PCRe,1/P1 \u221dfo for a given set of s and pe,c. In the models considered here, typically N \u223c(0.05 \u22120.5)% matches the amplitude of observed radio \ufb02ux in the Toothbrush relic. In our DSA simulations, pe,c is assumed for simplicity to stay constant in the preshock region for the duration of the simulations (\u223c200 Myr). This is probably unrealistic for high-energy electrons with \u03b3e > 104 (see Equation (5)), unless preexisting electrons are accelerated continuously in the preshock region, for instance by turbulence. 2.4. Model Parameters 2.4.1. Observed Properties of the Toothbrush Relic Before outlining our simulation model parameters, we brie\ufb02y review our target, the Toothbrush radio relic. 
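A quick numerical check (ours) of the cutoff momentum in Equation (10): with the representative values quoted above, us ∼ 3 × 10^3 km s^-1, B1 ∼ 1 µG, B2 ∼ 2.5 µG, kBohm ∼ 1, and q ≈ 4.5, the sketch below reproduces peq/mec ∼ 10^8.

import numpy as np

M_E  = 9.1094e-28    # electron mass [g]
C    = 2.9979e10     # speed of light [cm/s]
E_CH = 4.8032e-10    # elementary charge [esu]

def p_eq_over_mec(u_s_kms, B1_muG, B2_muG, z, q=4.5, k_bohm=1.0):
    """Cutoff momentum from Eq. (10), balancing DSA gains against synchrotron/iC
    losses; B_e^2 = B^2 + B_rad^2 with B_rad = 3.24 muG (1+z)^2."""
    Brad = 3.24e-6 * (1.0 + z)**2                       # Gauss
    Be1_sq = (B1_muG * 1.0e-6)**2 + Brad**2
    Be2_sq = (B2_muG * 1.0e-6)**2 + Brad**2
    u_s = u_s_kms * 1.0e5                               # cm/s
    p_eq = (M_E**2 * C**2 * u_s / np.sqrt(4.0 * E_CH**3 * q / 27.0)) \
           * np.sqrt(B1_muG * 1.0e-6 / (Be1_sq + Be2_sq)) / k_bohm
    return p_eq / (M_E * C)

print(f"p_eq/m_e c ~ {p_eq_over_mec(3.0e3, 1.0, 2.5, 0.225):.1e}")   # ~1e8, as quoted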
The relic has a linear morphology aligned roughly eastwest with multiple components that, together, resemble the head and handle of a toothbrush (van Weeren et al. 2012) on respectively the west and east ends. Our focus is on the head component (labeled as B1 in Figure 4 of van Weeren et al. (2012)), whose \u201cbristles\u201d point southward and whose northern edge seems to coincide with the shock location detected in X-ray observations. van Weeren et al. (2016) estimated rather similar preshock and postshock temperatures, kT1 = 8.3+3.2 \u22122.4 keV and kT2 = 8.2+0.7 \u22120.9 keV, respectively, indicating that kT1 is more uncertain from their data. From the slope change in the X-ray surface brightness across the putative shock in the component B1, they estimated a low shock Mach number, MX \u223c1.2. On the other hand, Mrad \u22482.8 is required to explain the radio spectral index (\u03b1s \u22480.8) at the northern edge of B1 as a consequence of the DSA of CRe electrons injected locally from the thermal plasma. 2.4.2. Shock Dynamics We assume for simplicity, but one step beyond a planar shock model, that the shock dynamics can be approximated initially by a self-similar blast wave that propagates through an isothermal ICM with the density pro\ufb01le of nth = 10\u22124 cm\u22123(r/0.8Mpc)\u22122. Then, the shock radius and velocity evolve roughly as rs \u221dt2/3 and us \u221dt\u22121/3, respectively, where t is the time since the nominal point explosion for the spherical blast wave (e.g., Ryu & Vishniac 1991). The shock Mach number decreases in time as the spherical shock expands in the simulations. For this self-similar shock, the downstream \ufb02ow speed in the upstream rest frame decreases toward \f8 Kang, Ryu, & Jones Table 1. Model Parameters Model kT1 Ms,i [Ms,o]a [kT2,o]b B1 [B2,o]c s \u03b3e,c \u03c4acc,0 Remarks Name (keV) (keV) ( \u00b5G) ( \u00b5G) (Myr) W1.7a 5.2 1.7 1.64 8.56 1.5 2.7 4.4 105 100 no injection W1.7b 5.2 1.7 1.64 8.56 1.5 2.7 4.4 4 \u00d7 104 100 no injection W1.7c 5.2 1.7 1.64 8.56 1.5 2.7 4.4 104 100 no injection W1.7aN 5.2 1.7 1.64 8.56 1.5 2.7 4.4 8 \u00d7 104 no injection W2.0a 4.3 2.0 1.87 8.23 1.5 2.5 4.4 8 \u00d7 104 100 no injection W2.0b 4.3 2.0 1.87 8.23 1.5 2.5 4.4 4 \u00d7 104 100 no injection W2.0c 4.3 2.0 1.87 8.23 1.5 2.5 4.4 104 100 no injection W2.0d 4.3 2.0 1.87 8.23 1.5 2.5 4.4 8 \u00d7 104 50 no injection W2.0aN 4.3 2.0 1.87 8.23 1.5 2.5 4.4 8 \u00d7 104 no injection S3.6a 3.0 3.6 3.03 11.2 1 2.5 4.6 3 \u00d7 102 100 \u03ba = 1.6 S3.6b 3.0 3.6 3.03 11.2 1 2.5 4.6 3 \u00d7 102 100 seed CRe S3.6c 3.0 3.6 3.03 11.2 1 2.5 4.6 3 \u00d7 102 100 no decay (rdec \u2192\u221e) S3.6aN 3.0 3.6 3.03 11.2 1 2.5 4.6 3 \u00d7 102 \u03ba = 1.6 S3.6bN 3.0 3.6 3.03 11.2 1 2.5 4.6 3 \u00d7 102 seed CRe aShock sonic Mach number at the time of observation. b Postshock temperature at the time of observation. c Postshock magnetic \ufb01eld strength at the time of observation. the cluster center as u(r) \u221d(r/rs), so the postshock \ufb02ow speed with respect to the shock front increases downstream away from the shock. We acknowledge that the actual shock dynamics in the simulations deviate slightly from such behaviors, since the model shocks are not strong, although this should not in\ufb02uence our conclusions. Table 1 summarizes model parameters for the DSA simulations considered in this study. Considering the observed ranges for both kT1 and kT2, we vary the preshock temperature as kT1 = 3.0\u22125.2 keV. 
At the onset of the simulations, the shock is speci\ufb01ed by the initial Mach number, Ms,i = 1.7 \u22123.6, which sets the initial shock speed as us,i = Ms,i \u00b7 150 km s\u22121p T1/106K, and is located at rs,i \u22480.8 Mpc from the cluster center. This can be regarded as the time when the relic-generating shock encounters the preshock region containing preexisting electrons, that is, the birth of the radio relic. We de\ufb01ne the \u201cshock age,\u201d tage \u2261t \u2212tonset, as the time since the onset of our simulations. We \ufb01nd that the downstream radio \ufb02ux pro\ufb01les and the integrated spectrum become compatible with the observations at the \u201ctime of observation,\u201d tage \u223c140 \u2212150 Myr, typically when the shock is located at rs \u22481.1 \u22121.2 Mpc. The fourth and \ufb01fth columns of Table 1 show the shock Mach number, Ms,o, and the postshock temperature, kT2,o, at the time of observation. In this study, we examine if the various proposed DSA-based models can explain the observed radio \ufb02ux pro\ufb01les reported by van Weeren et al. (2016), which, as we pointed out, depend strongly on the electron cooling length behind the shock. Therefore the magnetic \ufb01eld strength, which impacts electron cooling, is another key parameter. The sixth column of Table 1 shows the preshock magnetic \ufb01eld strength, B1 = 1 \u22121.5 \u00b5G, which is assumed to be uniform in the upstream region. The postshock magnetic \ufb01eld strength is modeled as B2(t) = B1 p 1/3 + 2\u03c3(t)2/3 \u22482.5 \u22122.7 \u00b5G, which decreases slightly as the shock compression ratio, \u03c3(t), decreases in time in response to shock evolution. For the downstream region (r < rs), we assume a simple model in which the magnetic \ufb01eld strength scales with the gas pressure as Bdn(r, t) = B2(t) \u00b7 [P(r, t)/P2(t)]1/2, where P2(t) is the gas pressure immediately behind the shock. 2.4.3. DSA Model Parameters As mentioned in the Introduction, the discrepancy between the observationally inferred values of MX and Mrad could be resolved if we adopt a preexisting electron population with the \u201cright values\u201d of s and pe,c. Alternatively, we can explain the observed radio spec\fShock Acceleration Model for Radio Relics 9 Figure 1. Electron distribution at the shock position, ge(rs, p) = p4fe(rs, p) (upper panels), and volume-integrated electron distribution, Ge(p) = R ge(r, p)dV (lower panels). See Table 1 for model parameters. In the upper panels, the red and black dotted lines show the distribution function for preexisting electrons, p4fpre, while the black solid and red dashed lines show either p4freacc for the W1.7, W2.0, and S3.6b models or p4finj for the S3.6a model. In the upper right panel, the \u03ba distributions with \u03ba = 1.6 (black dot-dashed line) and \u03ba = 2.5 (blue dot-dashed line) for suprathermal electrons are also shown for p < pinj \u224830mec. In the lower panels, results are shown at tage = 142 Myr for W1.7a (black solid lines), W1.7b (red dashed), W1.7c (blue dot-dashed) and W1.7aN (green long-dashed); at tage = 148 Myr for W2.0a (black solid), W2.0b (red dashed), W2.0c (blue dot-dashed) and W2.0aN (green long-dashed); and at tage = 144 Myr for S3.6a (black solid), S3.6b (red dashed) and S3.6aN (green long-dashed). tral index with a shock with Mrad, assuming that MX and Mrad may represent di\ufb00erent parts of nonuniform shock surfaces. 
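(As a quick check of the shock set-up just described; the sketch and unit conversions are ours.) The initial shock speed follows from the preshock sound speed, cs ≈ 150 km s^-1 (T1/10^6 K)^{1/2}, and the adopted postshock field B2 = B1 (1/3 + 2σ^2/3)^{1/2} reproduces the B2,o value listed in Table 1 for the S3.6 models.

import numpy as np

def shock_speed_kms(M_s, kT1_keV):
    """u_s = M_s * c_s with c_s ~ 150 km/s * sqrt(T1/1e6 K); 1 keV ~ 1.16e7 K."""
    T1_K = kT1_keV * 1.16e7
    return M_s * 150.0 * np.sqrt(T1_K / 1.0e6)

def B2_muG(B1_muG, M_s):
    """Postshock field for a tangled upstream field: B2 = B1 sqrt(1/3 + 2 sigma^2 / 3)."""
    sigma = 4.0 * M_s**2 / (M_s**2 + 3.0)
    return B1_muG * np.sqrt(1.0/3.0 + 2.0 * sigma**2 / 3.0)

print(shock_speed_kms(3.6, 3.0))   # ~3.2e3 km/s initial shock speed for the S3.6 models
print(shock_speed_kms(1.7, 5.2))   # ~2.0e3 km/s for the W1.7 models at onset
print(B2_muG(1.0, 3.03))           # ~2.5 muG, matching B_2,o for the S3.6 models in Table 1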
Our study considers both of these possible scenarios: (1) in the weak-shock models a shock with Ms \u22722 encounters a preshock region of a \ufb02at preexisting CRe population with \u03b3e,c > 104, and (2) in the strongshock models a shock with Ms \u22483.0 accelerates lowenergy seed electrons (\u03b3e \u223c30), either shock-generated suprathermal electrons or preexisting fossil CRe. In general, we \ufb01nd in these experiments that the models in which postshock electrons cool without turbulent reacceleration cannot explain the broad widths of the observed radio \ufb02ux pro\ufb01les, independent of the assumed shock strength and CRe sources, as shown in the next section. Consequently, we also explore models that include postshock TA with the characteristic acceleration time scale of \u03c4acc \u223c100 Myr, which, as argued in Section 2.2, is justi\ufb01able in this context and also is comparable to expected postshock electron cooling times. To facilitate the analyses below, we comment brie\ufb02y on the model naming convention in Table 1. The \ufb01rst character, W or S, refers to weak-shock or strong-shock models, respectively, while the number after the \ufb01rst letter corresponds to the initial Mach number, Ms,i. This is followed by a sequence label (a, b, c, d) as the preexisting CRe cuto\ufb00, \u03b3e,c, or TA time, \u03c4acc, parameters vary. If there is no postshock TA, the letter \u201cN\u201d is appended at the end. In the weak-shock models, we adopt the initial shock Mach number, Ms,i = 1.7 \u22122.0, and set s = 4.4 as the power-law slope for preexisting CRe. In order to see the dependence of emissions on the cuto\ufb00energy in the preexisting electron spectrum, we consider a wide range of \u03b3e,c = 104 \u2212105 in the W1.7a, b, c and W2.0a, b, c models (column 9 of Table 1). In model W2.0d, an en\f10 Kang, Ryu, & Jones Figure 2. Synchrotron emissivity at 150 MHz, j150(r) (upper panels, in arbitrary units), and associated spectral index between 150 and 610 MHz, \u03b1610 150 (lower panels), as a function of the radial distance from the cluster center at four di\ufb00erent tage. See Table 1 for model parameters. Thick (thin) lines are used for the models with (without) turbulent acceleration. hanced, postshock turbulent reacceleration with shorter \u03c4acc,0 is considered. For all W1.7 and W2.0 models, the in situ injection from the background plasma is turned o\ufb00in order to focus on the \u201cDSA reacceleration\u201d of preexisting CRe. In the case of the strong-shock scenario, the S3.6a model includes only the \u201cDSA injection\u201d from a suprathermal \u03ba distribution of \u03ba = 1.6, while the S3.6b model incorporates only the \u201cDSA reacceleration\u201d of the preexisting CRe population with s = 4.6 and \u03b3e,c = 300. For the S3.6b model, the simulation results remain similar for di\ufb00erent values of cuto\ufb00energy, \u03b3e,c, as long as \u03b3e,c > pinj/mec \u224830. In the S3.6c model, the decay of turbulence is turned o\ufb00(rdec \u2192\u221e), so the momentum di\ufb00usion coe\ufb03cient is assumed to be uniform behind the shock; that is, Dpp = p2/(4\u03c4acc,0). The upper panels of Figure 1 show the preexisting electron spectrum, fpre (red and black dotted lines), and the analytic solutions for the shock spectra, freacc and finj, given in Equations (11) and (9), respectively. Here, the normalization for fpre corresponds to N \u22430.01 for W1.7a and W2.0a and N \u22430.001 for S3.6b. 
For the W1.7 and W2.0 models, at the shock ge(rs, p) = p4freacc(p) is used, since the in situ injection from the background plasma is suppressed. For these models, the slope of freacc(p) at the shock position is the preshock, s, for p < pe,c, while it becomes the DSA value, q, for p > pe,c. In the upper right panel of Figure 1, the black dotdashed line illustrates the \u03ba distribution of \u03ba = 1.6 for p < pinj, while the black solid and red long-dashed lines show finj(p) and freacc(p), respectively, for p \u2265pinj. As shown here, the normalization factor fN for finj(p) is speci\ufb01ed by the \u03ba distribution. In all S3.6 models, the DSA slope q is \ufb02atter than s, so both finj and freacc have power-law spectra with the slope q, extending to peq/mec \u223c108, independent of \u03b3e,c. As a result, preexisting low-energy CRe just provide seeds to the DSA process and enhance the injection, but do not a\ufb00ect the shape of the postshock electron spectrum for p \u226bpinj (i.e., the black solid line for S3.6a versus the red dashed line for S3.6b in the upper right panel). Regarding the shock-generated suprathermal electron population and its posited non-Maxwellian, \u03ba\fShock Acceleration Model for Radio Relics 11 distribution form, the \u03ba index is not universal, since it depends on a local balance of nonequilibrium processes. If we adopt a steeper \u03ba distribution, with, for example, \u03ba = 2.5 (blue dot-dashed line in Figure 1), then the amplitude of the injected electron \ufb02ux at pinj will be smaller, and so the ensuing radio \ufb02ux will be reduced from the models shown here (S3.6a and S3.6aN). 3. RESULTS OF DSA SIMULATIONS 3.1. Radial Pro\ufb01les of Radio Emissivity Figure 2 shows the evolution of the synchrotron volume emissivity at 150 MHz, j150(r), and the associated spectral index between 150 and 610 MHz, \u03b1610 150(r), determined from j150(r) and j610(r). The shock is located at rs,i \u22480.8 Mpc at the start of the simulations, tage = 0. In the case of the W1.7 and W2.0 models, this can be regarded as the moment when the shock begins to accelerate preexisting electrons and become radio-bright. The \ufb01gure shows that in the models with postshock TA (thick lines) the spectral steepening is signi\ufb01cantly delayed relative to the models without TA (thin lines). Only the models with TA seem to produce \u03b1610 150 pro\ufb01les broad enough to be compatible with the observed pro\ufb01le, which increases from \u03b1610 150 \u22480.8 to \u03b1610 150 \u22482.0 over \u223c200 kpc across the relic width. For the W1.7a and W2.0a models, the emissivity increases by an order of magnitude (a factor 8 \u221212) from upstream to downstream across the shock. Note that the subsequent, postshock emissivity decreases faster with time in the S3.6a model with only DSA injection from the background plasma, compared to the W1.7 and W2.0 models with the DSA reacceleration of the preexisting CRe. This is because for the particular injection model adopted here, the injection rate depends on us and Ms, both of which decrease in time as the shock propagates. 3.2. 
Radio Surface Brightness Pro\ufb01les The radio surface brightness, I\u03bd, is calculated by adopting the spherical wedge volume of radio-emitting electrons, speci\ufb01ed with the two extension angles relative to the sky plane, \u03c81 and \u03c82, as shown in Figure 2 of Kang (2016a): I\u03bd(R) = Z h1,max 0 j\u03bd(r)dh1 + Z h2,max 0 j\u03bd(r)dh2 , (13) where R is the distance behind the projected shock edge in the plane of the sky (measured from the shock toward the cluster center), r is the radial distance outward from the cluster center, and h1 = r sin \u03c81 and h2 = r sin \u03c82 are the path lengths along line of sight beyond and in front of the sky plane, respectively. (See Figure 1 of Kang (2015) for the geometrical meaning of R.) Figure 3 shows the pro\ufb01les of I150(R) and \u03b1610 150(R), now calculated from I150(R) and I610(R), at the shock age of tage = 142 \u2212148 Myr. The adopted values of \u03c81 and \u03c82 are given in the lower panels. In the weak-shock models with Ms,o \u22481.6 \u22121.9, a high-cuto\ufb00Lorentz factor, \u03b3e,c \u22734 \u00d7 104, is required to match \u03b1610 150 \u22480.8 at the shock position. From the geometric consideration only (that is, the line-of-sight length through the model relic), the \ufb01rst in\ufb02ection point in the I(R) pro\ufb01le occurs at rs(1 \u2212cos \u03c81) \u224838 kpc for the shock radius rs \u2248 1.1 Mpc and \u03c81 = 15\u25e6, and the second in\ufb02ection point occurs at rs(1 \u2212cos \u03c82) \u224887 kpc for \u03c82 = 23\u25e6. The third in\ufb02ection point at d \u2248150 \u2212160 kpc occurs at the postshock advection length, \u223cu2tage, which corresponds to the width of the postshock spherical shell. Note that the normalization factor for I150 is arbitrary, but it is the same for all three models with Ms,i = 1.7 (upper left panel) and for the three models with Ms,i = 2.0 (upper middle panel). But note that for the S2.0d model I150 is reduced by a factor of 0.6, compared to the other three models. So, for example, the relative ratio of I150 between W1.7aN (without TA) and W1.7a (with TA) is meaningful. In the case of the S3.6 models (upper right panel), on the other hand, the normalization factor is the same for S3.6aN, S3.6a, and S3.6c (with only DSA injection of shock-generated suprathermal electrons), but a di\ufb00erent factor is used for S3.6b (with preexisting, seed CRe) in order to plot the four models together in the same panel. The e\ufb00ects of postshock TA can be seen clearly in the spectral steepening of \u03b1610 150 in the lower panels. As shown in Figure 4 below, for instance, the S3.6aN model (black) produces a \u201ctoo-steep\u201d spectral pro\ufb01le compared to observations, while the S3.6c model (green) without turbulence decay (rdec \u2192\u221e) produces a \u201ctoo-\ufb02at\u201d spectral pro\ufb01le. To compare to the observed radio \ufb02ux density distribution, S\u03bd, the intensity, I\u03bd, should be convolved with telescope beams. In Figure 4, a Gaussian smoothing with 23.5 kpc width (equivalent to 6.\u2032\u20325 at the distance of the Toothbrush relic) is applied to calculate S\u03bd(R), while the spectral index \u03b1610 150 is then calculated from S150(R) and S610(R). The observational data of van Weeren et al. (2016) are shown with magenta dots. The observed \ufb02ux density at 150 MHz covering the region of 6.\u2032\u20325 \u00d7 70\u2032\u2032 at R \u224850 kpc behind the shock is S150 \u22480.20 Jy. 
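The quoted inflection points follow from the simple geometry of the spherical wedge used in Equation (13); below is a short check of ours, with rs ≈ 1.1 Mpc, ψ1 = 15°, ψ2 = 23°, and u2 ≈ 10^3 km s^-1 as quoted in the text.

import numpy as np

def projection_offset_kpc(r_s_kpc, psi_deg):
    """Projected depth r_s (1 - cos psi) at which a line of sight leaves the
    spherical shell subtending extension angle psi about the sky plane."""
    return r_s_kpc * (1.0 - np.cos(np.deg2rad(psi_deg)))

r_s = 1100.0   # shock radius ~1.1 Mpc at the time of observation
print(projection_offset_kpc(r_s, 15.0))   # ~38 kpc (first inflection, psi_1)
print(projection_offset_kpc(r_s, 23.0))   # ~87 kpc (second inflection, psi_2)

# Third inflection ~ postshock advection length u_2 * t_age:
u2_kms, t_age_Myr = 1.0e3, 145.0
print(u2_kms * 1.0e5 * t_age_Myr * 3.156e13 / 3.086e21)   # ~150 kpc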
The required amount of preexisting CRe to match this \ufb02ux level corresponds to N \u22480.4\u22120.5% for the W1.7a,b and W2.0a,b models, and N \u22480.05% for the S3.6b model. In the S3.6a model (without preexisting CRe), the corresponding \ufb02ux density is S150 \u22480.004 Jy, \ufb01ve times smaller than the observed value. Consider\f12 Kang, Ryu, & Jones Figure 3. Surface brightness pro\ufb01le at 150 MHz, I150 (upper panels, in arbitrary units), and the spectral index between 150 and 610 MHz with I (lower panels), as a function of the projected distance behind the shock, R (kpc). See Table 1 for model parameters. Results are shown at tage = 142 Myr for W1.7aN (black solid lines), W1.7a (red dashed), and W1.7b (blue dot-dashed); at tage = 148 Myr for W2.0aN (black solid), W2.0a (red dashed), W2.0b (blue dot-dashed), and W2.0d (green long-dashed); and at tage = 144 Myr for S3.6aN (black solid), S3.6a (red dashed), S3.6b (blue dot-dashed), and S3.6c (green long-dashed). The extension angles are assumed to be \u03c81 = 15\u25e6and \u03c82 = 23\u25e6for the W1.7 and W2.0 models, while \u03c81 = 12\u25e6 and \u03c82 = 20\u25e6for the S3.6 models. The I150 of the W2.0d model (faster TA) is reduced by a factor of 0.6, compared to those of other W2.0 models. ing that the \u03ba = 1.6 distribution is already quite \ufb02at and so \u03ba index cannot be reduced further, it could be di\ufb03cult to increase signi\ufb01cantly the \ufb02ux density S150 in the S3.6a model. In that regard, the S3.6b model with preexisting CRe is favored over the S3.6a model. Note that the synchrotron intensity scales with I150 \u221dB(s\u22121)/2 2 , while the downstream magnetic \ufb01eld strength in these models is chosen to be B2,o \u22482.5 \u22122.7 \u00b5G (see Table 1) in order to maximize the downstream cooling length given in Equation (1). In the upper panels of Figure 4, di\ufb00erent normalization factors are adopted for each model to obtain the best match with the observed \ufb02ux level of S150 roughly at the peak values near 30 \u221250 kpc. The same relative normalization factors are scaled for the higher frequency and applied to S610 in the middle panels. The observed pro\ufb01le of S150 indicates that the region of the Toothbrush relic beyond R > 150 kpc might be contaminated by a contribution from the radio halo. We \ufb01nd that for the W1.7 and W2.0 models, a preexisting electron population with s = 4.4 and \u03b3e,c \u22734\u00d7104 is necessary to reproduce the observed spectral steepening pro\ufb01le across the relic width. Moreover, the results demonstrate that the six models with TA (W1.7a,b, W2.0a,b, and S3.6a,b) can reproduce the observed pro\ufb01les of S\u03bd(R) and \u03b1610 150(R) reasonably well, while, as noted previously, none of the models without TA (black solid lines) can reproduce the pro\ufb01le of \u03b1610 150. However, it is also important to realize that the models should not produce \u201cexcess\u201d TA. In particular, also as noted previously, the W2.0d model (green) with \u03c4acc,0 = 50 Myr and the S3.6c model (green) without turbulence decay produce \u201ctoo-\ufb02at\u201d pro\ufb01les of \u03b1610 150. At the time of observation, Ms,o \u22483.03, in the S3.6 models, so \u03b1s \u22480.74, which is slightly \ufb02atter than the \fShock Acceleration Model for Radio Relics 13 Figure 4. 
Radio \ufb02ux density, S\u03bd, within a synthesized telescope beam at 150 MHz (top panels) and at 610 MHz (middle panels) in arbitrary units, and the spectral index, \u03b1610 150, between the two frequencies (bottom panels), plotted as a function of the projected distance behind the shock, R (kpc). See Table 1 for model parameters. The surface brightness pro\ufb01les shown in Figure 3 are smoothed by a Gaussian beam with 6.5\u2032\u2032 resolution (\u224823.5 kpc). The same line types as in Figure 3 are used. S150 and S610 of the W2.0d model (faster TA) are lowered by a factor of 0.6, compared to those of the other W2.0 models, as in Figure 3. The magenta dots are the observational data of van Weeren et al. (2016). observed index of 0.8 at the leading edge of the Toothbrush relic. This, we argue, is still consistent, because the observed radio \ufb02ux pro\ufb01les are blended by a \ufb01nite telescope beam. We also considered a model (not shown) with Ms,i = 3.3 with Ms,o \u22482.85, so at the time of observation, q \u22484.6 (\u03b1s \u22480.78). That model, however, produces a spectral index pro\ufb01le across the relic a bit too steep to be compatible with the observed pro\ufb01le. 3.3. Volume-Integrated CRe and Radio Spectra In the case of pure in situ injection without TA, the postshock momentum distribution function is basically the same as the DSA power-law spectrum given in Equation (9) except for the increasingly lower exponential cuto\ufb00due to postshock radiative cooling. So the volume-integrated CRe energy spectrum, Fe(p) = R fe(r, p)dV , is expected to have a broken power law form, whose slope increases from q to q + 1 at the break momentum, pbr/mec \u2248 104(tage/100Myr)\u22121(5 \u00b5G)2/(B2 2 + B2 rad). In the lower right panel of Figure 1, for instance, we can see that the volume-integrated electron spectrum, Ge(p) = p4Fe(p), steepens gradually near p/mec \u223c3 \u00d7 103 in the S3.6aN model (without TA, green long-dashed line). Of course such a simple picture for the steepening does not apply to the W1.7 and W2.0 models with the DSA reacceleration of preexisting electrons, since the spectrum at the shock, freacc, is a broken power-law that steepens from p\u2212s to p\u2212q above pe,c. In these models, Ge(p) depends on the assumed value of \u03b3e,c (see the \f14 Kang, Ryu, & Jones Figure 5. Time evolution of the volume-integrated synchrotron spectrum, \u03bdJ\u03bd, for the W1.7a, W2.0a, S3.6a, and S3.6b models. See Table 1 for model parameters. The spectra at three di\ufb00erent shock ages are shown with black solid, red dashed, and blue dot-dashed lines. The green long-dashed line shows \u03bdJ\u03bd at the \ufb01rst epoch for models without TA. Note that the normalization factors for the green lines are 1.6 times higher than for other models with TA. The open magenta squares and solid black \ufb01lled circles are for the B1 component of the Toothbrush relic. The squares at low frequencies are the observational data given in Table A1 of Stroe et al. (2016). The two squares at 4.85 and 8.35 GHz are \ufb02uxes in Table 5 of Kierdorf et al. (2016), multiplied by a factor of 0.71. The error bars are given in the same tables. The solid black circles at 16 and 30 GHz are the data points, multiplied by factors of 1.1 and 1.8, respectively, which could represent the SZ-corrected \ufb02uxes (Basu et al. 2016). black, red, and blue lines in the lower left and lower middle panels of Figure 1) as well as \u03c4acc. 
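For readers reproducing Figure 4-style profiles, the sketch below shows the beam convolution and the two-frequency spectral index described above. The input brightness profiles are placeholders, and treating the quoted 23.5 kpc beam width as a Gaussian FWHM is an assumption.

```python
import numpy as np

def gaussian_smooth(profile, dx_kpc, width_kpc=23.5):
    """Convolve a 1D brightness profile with a Gaussian beam.
    The 23.5 kpc width is treated here as the FWHM (an assumption)."""
    sigma = width_kpc / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dx_kpc   # in grid cells
    half = int(np.ceil(4 * sigma))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

def spectral_index(S_low, S_high, nu_low=150.0, nu_high=610.0):
    """alpha such that S_nu ~ nu^(-alpha), computed between two frequencies."""
    return -np.log(S_high / S_low) / np.log(nu_high / nu_low)

# Example with toy profiles (placeholders, not the simulation output):
R = np.linspace(0.0, 300.0, 301)                        # kpc behind the shock, 1 kpc sampling
I150 = np.exp(-R / 120.0)                               # slower decline at 150 MHz
I610 = (150.0 / 610.0) ** 0.75 * np.exp(-R / 60.0)      # faster decline at 610 MHz; alpha ~ 0.75 at the edge
S150 = gaussian_smooth(I150, dx_kpc=1.0)
S610 = gaussian_smooth(I610, dx_kpc=1.0)
alpha = spectral_index(S150, S610)
print("alpha_150^610 at R = 0, 50, 150 kpc:", alpha[[0, 50, 150]])   # steepens downstream
```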
The models without TA are also shown as green long-dashed lines for comparison. In the S3.6a model in the lower right panel of Figure 1, the suprathermal \u03ba-like population for p \u2273pinj \u224830 mec provides seed electrons for the in situ injection into DSA and subsequent TA in the postshock \ufb02ow. In fact, this results in an excess, low-energy CRe population in the range 30 \u2272p/mec \u2272300 for the models, compared to the S3.6aN model, as shown in the \ufb01gure. This low-energy component depends on the details of kinetic plasma processes operating near the shock, which are not yet fully understood, and would not contribute signi\ufb01cantly to the observed radio emission in the range of 0.15 \u221210 GHz. For the postshock magnetic \ufb01eld strength, B2 \u22482.5 \u00b5G, electrons with 6.9 \u00d7 103 \u2264p/mec \u22645.6 \u00d7 104 make the peak contribution in this observation frequency range. From the spectral shape of Ge(p), we expect that the ensuing volume-integrated radio spectrum, J\u03bd = R j\u03bd(r)dV , should steepen gradually toward high frequencies. Moreover, the form depends on pe,c and \u03c4acc in the W1.7 and W2.0 models and on pbr and \u03c4acc in the S3.6 models. Figure 5 shows the volume-integrated radio spectrum, \u03bdJ\u03bd, for the W1.7a, W2.0a, S3.6a, and S3.6b models at three di\ufb00erent shock ages to demonstrate how the spectrum evolves in time. For the models without TA (W1.7aN, W2.0aN, S3.6aN, and S3.6bN), the spectrum is shown only at the \ufb01rst epoch (the green long-dashed lines). In each panel, the normalization factor for the vertical scale is chosen so that the simulated curves match the observation data around 2 GHz. For the models without TA, the normalization factor is 1.6 times \fShock Acceleration Model for Radio Relics 15 Figure 6. Spectral index between 150 and 610 MHz, \u03b1610 150(R), (top panels) and volume-integrated synchrotron spectrum, \u03bdJ\u03bd, (bottom panels) for the weak-shock models. The models with di\ufb00erent values of \u03b3e,c are compared (W1.7a,b,c and W2.0a,b,c). In the W2.0d model with \u03c4acc = 5 \u00d7 10 Myr (green long-dashed lines), turbulent acceleration is faster than in the W2.0a model. The magenta dots in the upper panels are the same as those in Figure 4. The open magenta squares and solid black \ufb01lled circles in the lower panels are the same as those in Figure 5. larger than for the corresponding models with TA. Note that the open squares (except at 4.85 and 8.35 GHz) are data for the B1 component of the Toothbrush relic in Table A1 of Stroe et al. (2016). Kierdorf et al. (2016) presented the sum of B1 + B2 + B3 \ufb02ux at 4.8 and 8.35 GHz in their Table 5. Considering that the average ratio of the B1/(B1 + B2 + B3) \ufb02uxes near 2 GHz is about 0.71 according to Tables 3 and A1 of Stroe et al. (2016), we lower the \ufb02uxes at 4.85 and 8.35 GHz in the Kierdorf\u2019s data by the same factor. Basu et al. (2016) showed that the Sunyaev-Zeldovich (SZ) decrement in the observed radio \ufb02ux can be signi\ufb01cant above 10 GHz for radio relics. We adopt their estimates for the SZ contamination factor for the Toothbrush relic given in their Table 1. Then the SZ correction factors, F, for the \ufb02uxes at 16 and 30 GHz are about 1.1 and 1.8, respectively. Two solid black \ufb01lled circles correspond to the \ufb02ux levels so-corrected at the two highest frequencies. 
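The break momentum quoted above for the volume-integrated electron spectrum is straightforward to evaluate; the sketch below does so and builds the corresponding toy broken power law G_e(p), whose slope steepens from q to q+1 at the break. The redshift z = 0.225 adopted for the CMB-equivalent (inverse-Compton) field and the slope q = 4.5 are assumed illustrative values, not taken from the paper.

```python
import numpy as np

def B_rad_muG(z):
    """CMB-equivalent field accounting for inverse-Compton losses, in microgauss."""
    return 3.24 * (1.0 + z) ** 2

def p_break_over_mec(t_age_Myr, B2_muG, z=0.225):
    """Break momentum quoted in the text:
    p_br/m_e c ~ 1e4 (t_age/100 Myr)^-1 (5 muG)^2 / (B2^2 + B_rad^2).
    z = 0.225 is an assumed value for the Toothbrush cluster."""
    return 1.0e4 * (100.0 / t_age_Myr) * 25.0 / (B2_muG ** 2 + B_rad_muG(z) ** 2)

def G_e(p_over_mec, q, p_br):
    """Toy volume-integrated spectrum G_e = p^4 F_e for pure in situ injection:
    F_e ~ p^-q below the break and p^-(q+1) above it (arbitrary normalisation)."""
    p = np.asarray(p_over_mec, dtype=float)
    G = p ** (4.0 - q)
    above = p > p_br
    G[above] = p_br ** (4.0 - q) * (p[above] / p_br) ** (3.0 - q)
    return G

p_br = p_break_over_mec(144.0, 2.5)        # S3.6-like parameters: t_age ~ 144 Myr, B2 ~ 2.5 muG
print("p_br / m_e c ~", p_br)               # order-of-magnitude estimate of where the integrated spectrum steepens
print(G_e(np.logspace(2, 5, 4), q=4.5, p_br=p_br))
```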
Although the models without TA do not reproduce the observed pro\ufb01le of \u03b1610 150(R), as shown in Figure 4, the W1.7aN and W2.0aN models seem to \ufb01t the observed J\u03bd better than the W1.7a and W2.0a models. So this exercise teaches us that it is important to test any model against several di\ufb00erent observed properties. Among the strong-shock models, S3.6a and S3.6b with TA seem to produce better \ufb01ts to SZ-uncorrected J\u03bd, while S3.6aN and S3.6bN without TA give the spectra more consistent with SZ-corrected J\u03bd. In all models considered here, however, it seems challenging to explain the observed \ufb02ux at 8.35 GHz. In conclusion, adjustments of basic parameters can allow both of the weak-shock and strong-shock models to explain the observational data for the Toothbrush B1 component reasonably well. In the weak-shock scenario, as we argued in the Introduction, however, it would be challenging to ful\ufb01ll the requirement for a homogeneous, \f16 Kang, Ryu, & Jones \ufb02at-spectrum preexisting electron population over a region 400 kpc in length and 300 kpc in width, which is needed to explain the observed uniformity in the spectral index along the length of the relic. If the preexisting electrons cool by radiative and collisional losses non-uniformly, or if the preshock CRe have a span in \u201cages\u201d, both the cuto\ufb00energy and thus the spectral index at the relic edge would be expected to vary along the relic length. To explore such e\ufb00ects, we compare in Figure 6 the weak-shock models allowing di\ufb00erent cuto\ufb00energies, 104 \u2264\u03b3e,c \u2264105. In order to reproduce the observed pro\ufb01les of both \u03b1610 150 and \u03bdJ\u03bd, \u03b3e,c \u22738 \u00d7 104 is required for the W1.7 and W2.0 models. Considering that the cooling times for electrons with \u03b3e,c = 8 \u00d7 104 in microgauss \ufb01elds are only \u223c13 Myr, it would be very challenging to explain a constant \u03b3e,c within the required preshock region. In the right-hand panels of Figure 6, the W2.0d model (green long-dashed lines) shows that the \u201cenhanced\u201d TA with \u03c4acc,0 = 50 Myr would be too e\ufb03cient to explain the observed pro\ufb01le of \u03b1610 150(R). The model produces too many low-energy electrons with \u03b3e < 104, compared to high-energy electrons with \u03b3e \u2273104. This implies that the path to a model consistent with the observations cannot involve the adoption of smaller \u03b3e,c combined with more rapid TA (smaller \u03c4acc). Our results indicate that the strong-shock model with Ms \u22483 is favored. That could mean that the observed X-ray and radio Mach numbers represent di\ufb00erent parts of a nonuniform shock surface (see the discussion in the Introduction). However, we should point out that the predicted J\u03bd values for the S3.6a and S3.6b models deviate from the observed curvature at 8.35 GHz (Figure 5). Finally, as noted earlier, in order to explain the rareness of detected radio relics in merging clusters, radio relics might be generated preferentially when shocks encounter regions of preexisting low-energy CRe (i.e., the S3.6b model). 4. SUMMARY In this study, we reexamine the merger-driven shock model for radio relics, in which relativistic electrons are accelerated via DSA at the periphery of galaxy clusters. To that end, we perform time-dependent DSA simulations of one-dimensional, spherical shocks, and we compare the results with observed features of the Toothbrush relic reported by Stroe et al. 
(2016) and van Weeren et al. (2016). In addition to DSA, energy losses by Coulomb scattering, synchrotron emission, and iC scattering o\ufb00the CMB radiation, and, signi\ufb01cantly, TA by compressive MHD/plasma mode downstream of the shock are included in the simulations. Considering apparently incompatible shock Mach numbers from X-ray (MX \u2248 1.2 \u22121.5) and radio (Mrad \u22482.8) observations of the Toothbrush relic, two possible scenarios are considered (see Table 1 for details): (1) weak-shock models in which a preexisting \ufb02at-spectrum electron population with high cuto\ufb00energy is accelerated by a weak shock with Ms \u22481.6\u22121.9, and (2) strong-shock models in which low-energy seed CRe, either shock-generated suprathermal electrons or preexisting soft-spectrum electrons, are accelerated by a strong shock with Ms \u22483.0. The main results are summarized as follows: 1. In order to reproduce the broad pro\ufb01le of the spectral index behind the head (component B1) of the Toothbrush relic, TA with \u03c4acc \u2248100 Myr should be included to delay the spectral aging in the postshock region. This level of TA is strong but plausible in ICM postshock \ufb02ows. 2. The strong-shock models with Ms \u22483.0, either with a \u03ba-like distribution of suprathermal electrons (the S3.6a model) or with low-energy preexisting CRe with p/mec \u2272300 (the S3.6b model), are more feasible than the weak-shock models. These models could explain the observed uniform spectral index pro\ufb01le along the relic edge over 400 kpc in relic length (component B1). Further, the S3.6b model may be preferred because (1) it can reproduce the observed \ufb02ux density with a small fraction (N \u22480.05%) of preexisting CRe, and (2) it can explain the low occurrence (\u227210%) of giant radio relics among merging clusters, where otherwise \u201csuitable\u201d shocks are expected to be common. These lowenergy fossil electrons could represent the leftovers either previously accelerated within the ICM by shock or turbulence or ejected from AGNs into the ICM, since their cooling times are long, trad > 3.5 Gyr with B \u223c1 \u00b5G for \u03b3e < 300. The the S3.6a model, in which a \u03ba = 1.6 suprathermal distribution is adopted, the predicted \ufb02ux density is about \ufb01ve times smaller than the observed level. 3. For the weak-shock models with Ms \u22481.6 \u22121.9, a \ufb02at (s \u22484.4) preexisting electron population with seemingly unrealistically high-energy cuto\ufb00(\u03b3e,c \u22738\u00d7104) is required to reproduce the observational data (the W1.7a and W2.0a models). It would be challenging to generate and maintain such a \ufb02at-spectrum preexisting population with a uniform value of \u03b3e,c over the upstream region of 400 kpc in length and 300 kpc in width, since the cooling time is short, \u03c4rad \u223c10 Myr for electrons with \u03b3e \u223c105 in a 1 \u00b5G level magnetic \ufb01eld. \fShock Acceleration Model for Radio Relics 17 H.K. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2014R1A1A2057940). D.R. was supported by the National Research Foundation of Korea through grants 2014M1A7A1A03029872 and 2016R1A5A1013277. T.W.J. was supported by the US National Science Foundation through grant AST1211595. The authors thank R. J. van Weeren for providing the radio \ufb02ux data for the Toothbrush relic published in van Weeren et al. (2016). 
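The ~13 Myr cooling time for gamma_e,c = 8x10^4 and the multi-Gyr lifetime of gamma_e < 300 fossil electrons quoted in the summary can be checked with the standard synchrotron plus inverse-Compton cooling time, t_rad ~ 9.8x10^7 yr (B_e/5 muG)^-2 (gamma_e/10^4)^-1 with B_e^2 = B^2 + B_rad^2. The redshift used below for B_rad is an assumed illustrative value.

```python
def cooling_time_Myr(gamma_e, B_muG, z=0.225):
    """Synchrotron + inverse-Compton cooling time in Myr.
    B_rad = 3.24 (1+z)^2 muG is the CMB-equivalent field; z = 0.225 is assumed here."""
    B_e_sq = B_muG ** 2 + (3.24 * (1.0 + z) ** 2) ** 2
    return 98.0 * (25.0 / B_e_sq) * (1.0e4 / gamma_e)

print(cooling_time_Myr(8.0e4, 1.0))   # ~13 Myr for gamma_e = 8e4 in a ~1 muG field
print(cooling_time_Myr(300.0, 1.0))   # several Gyr: long-lived low-energy fossil electrons
```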
Software: CRASH (Kang & Jones 2006)" + }, + { + "url": "http://arxiv.org/abs/1602.03278v2", + "title": "Re-acceleration Model for Radio Relics with Spectral Curvature", + "abstract": "Most of the observed features of radio {\\it gischt} relics such as spectral\nsteepening across the relic width and power-law-like integrated spectrum can be\nadequately explained by diffusive shock acceleration (DSA) model, in which\nrelativistic electrons are (re-)accelerated at shock waves induced in the\nintracluster medium. However, the steep spectral curvature in the integrated\nspectrum above $\\sim 2$ GHz detected in some radio relics such as the Sausage\nrelic in cluster CIZA J2242.8+5301 may not be interpreted by simple radiative\ncooling of postshock electrons. In order to understand such steepening, we here\nconsider a model in which a spherical shock sweeps through and then exits out\nof a finite-size cloud with fossil relativistic electrons. The ensuing\nintegrated radio spectrum is expected to steepen much more than predicted for\naging postshock electrons, since the re-acceleration stops after the\ncloud-crossing time. Using DSA simulations that are intended to reproduce radio\nobservations of the Sausage relic, we show that both the integrated radio\nspectrum and the surface brightness profile can be fitted reasonably well, if a\nshock of speed, $u_s \\sim 2.5-2.8\\times 10^3 \\kms$, and sonic Mach number, $M_s\n\\sim 2.7-3.0$, traverses a fossil cloud for $\\sim 45$ Myr and the postshock\nelectrons cool further for another $\\sim 10$ Myr. This attempt illustrates that\nsteep curved spectra of some radio gischt relics could be modeled by adjusting\nthe shape of the fossil electron spectrum and adopting the specific\nconfiguration of the fossil cloud.", + "authors": "Hyesung Kang, Dongsu Ryu", + "published": "2016-02-10", + "updated": "2016-05-07", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Radio relics are di\ufb00use radio sources that contain relativistic electrons with the Lorentz factor of \u03b3e \u223c104, radiating synchrotron emission in the magnetic \ufb01elds of order of \u00b5G in galaxy clusters (see, e.g., Feretti et al. 2012; Brunetti & Jones 2014, for reviews). After the classi\ufb01cation scheme of Kempner et al. (2004), they are often divided into two main groups according to their origins and observed properties: radio gischt and AGN relic/radio phoenix. Radio gischt relics are thought to be produced by merger-driven shocks. They are characterized by elongated shape, radio spectrum steepening behind the hypothesized shock, power-law-like integrated radio spectrum, and high polarization fraction (e.g. En\u00dflin et al. 1998; Br\u00a8 uggen et al. 2012). They are found mainly in the periphery of merging clusters. Giant radio relics such as the Sausage relic in CIZA J2242.8+5301 and the Toothbrush relic in 1RXS J0603.3 are typical examples of radio gischt (van Weeren et al. 2010, 2012). On the other hand, AGN relics are radio-emitting relativistic plasmas ejected from radio-loud AGNs, and they turn into radio ghosts (undetectable in radio) quickly due to fast electron cooling when their source AGNs are extinct (En\u00dflin 1999). Radio ghosts can be reborn later as radio phoenixes, if cooled electron plasmas are compressed and re-energized by structure-formation shocks (En\u00dflin & Gopal-Krishna 2001). 
Radio phoenixes have roundish or ring-like shape and steep-curved integrated radio spectrum of aged electron populations, and they are found near their source galaxies in the cluster center region (e.g. Slee et al. 2001; van Weeren et al. 2011b). The relics in A2443 and A1033 are radio phoenixes (Clarke et al. 2013; de Gasperin et al. 2015). In the di\ufb00usive shock acceleration (DSA) model for radio gischt relics, relativistic electrons are accelerated or re-accelerated at cluster shocks that are driven by supersonic motions associated with mergers of sub-structures or infall of the warm-hot gas along \ufb01laments into the hot intracluster medium (ICM) (e.g. En\u00dflin et al. 1998; Hoeft et al. 2008; Vazza et al. 2012; Hong et al. 2014). Although the injection of seed electrons into Fermi \ufb01rst order process at weak cluster shocks has not been fully understood (e.g. Kang et al. 2014; Guo et al. 2014), it is now expected that the in situ injection/acceleration from the ICM thermal plasma and/or the re-acceleration of fossil relativistic electrons may explain the radio \ufb02ux level of observed radio gischt relics (e.g. Kang et al. 2012; Vazza et al. 2015). In particular, the presence of AGN relics and radio phoenixes implies that the ICM may host clouds of aged relativistic electrons with \u03b3c \u2272300. Re-acceleration of those electrons by cluster shocks as the origin of radio relics has been explored by several authors (e.g. Kang & Ryu 2011; Kang et al. 2012; Pinzke et al. 2013). In the re-acceleration model for radio gischt relic, it is assumed that the shock propagates in the ICM thermal plasma that contains an additional population of suprathermal electrons \f\u2013 3 \u2013 with dynamically insigni\ufb01cant nonthermal pressure (e.g. En\u00dflin et al. 1998; Kang et al. 2012; Pinzke et al. 2013). The role of those fossil electrons is to provide seed electrons to the DSA process. In the compression model for radio phoenix, on the other hand, the fossil radio plasma does not mix with the background gas and has high buoyancy and high sound speed (En\u00dflin & Gopal-Krishna 2001). So the radio plasma is compressed only adiabatically, when a shock wave sweeps through the radio bubble. The shock passage through such hot radio plasma is expected to result in a \ufb01lamentary or toroidal structure (En\u00dflin & Br\u00a8 uggen 2002; Pfrommer & Jones 2011), which is consistent with the observed morphology of radio phoenixes (Slee et al. 2001). According to cosmological hydrodynamical simulations for large-scale structure formation, shocks are ubiquitous in the ICM with the mean separation of \u223c1 Mpc between shock surfaces and with the mean life time of tdyn \u223c1 Gyr (e.g., Ryu et al. 2003; Pfrommer et al. 2006; Skillman et al. 2008; Vazza et al. 2009). So it is natural to expect that actively merging clusters would contain at least several shocks and associated radio relics. However, the fraction of X-ray luminous merging clusters hosting radio relics is observed to be order of \u223c10 % (Feretti et al. 2012). In order to reconcile such rarity of observed radio gischt relics with the frequency of shocks estimated by those structure formation simulations, Kang & Ryu (2015) (Paper I) proposed a scenario in which shocks in the ICM may light up as radio relics only when they encounter clouds of fossil relativistic electrons left over from either radio jets from AGNs or previous episodes of shock/turbulence acceleration. 
In the basic DSA model of steady planar shock with constant postshock magnetic \ufb01eld, the electron distribution function at the shock location becomes a power-law of fe(p, rs) \u221dp\u2212q with slope q = 4M2 s /(M2 s \u22121), while the volume-integrated electron spectrum behind the shock becomes Fe(p) \u221dp\u2212(q+1) (Drury 1983). Then, the synchrotron spectrum at the shock becomes a power-law of j\u03bd(rs) \u221d\u03bd\u2212\u03b1sh with the shock index, \u03b1sh = (M2 s + 3)/2(M2 s \u22121), while the volume-integrated radio spectrum becomes J\u03bd \u221d\u03bd\u2212A\u03bd with the integrated index, A\u03bd = \u03b1sh + 0.5, above the break frequency \u03bdbr (e.g. En\u00dflin et al. 1998; Kang 2011). Here, Ms is the sonic Mach number of the shock. The simple picture of DSA needs to be modi\ufb01ed in real situations. In the re-acceleration model, for instance, if the fossil electrons dominate over the electrons injected from ICM plasma, the ensuing electron spectrum must depend on the shape of the fossil electron spectrum and may not be a single power-law. In addition, if the shock acceleration duration is less than \u223c100 Myr, the integrated radio spectrum cannot be a simple power-law, but instead it steepens gradually over the frequency range of 0.1\u221210 GHz, since the synchrotron break frequency falls to \u03bdbr \u223c1 GHz or so (Kang 2015a). Kang (2015b) demonstrated that, even with a pure in situ injection model, both the electron spectrum and the ensuing \f\u2013 4 \u2013 radio spectrum could depart from simple power-law forms in the case of spherically expanding shocks with varying speeds and/or nonuniform magnetic \ufb01eld pro\ufb01les. In Paper I, we showed that the re-acceleration of fossil electrons at spherical shocks expanding through cluster outskirts could result in the curved integrated spectrum. In some radio relics, the integrated radio spectra exhibit the steepening above \u223c2 GHz, but much stronger than predicted from simple radiative cooling of shock-accelerated electrons in the postshock region. For instance, Trasatti et al. (2015) suggested that the integrated spectrum of the relic in A2256 could be \ufb01tted by a broken power-law with A\u03bd \u22480.85 between 0.35 GHz and 1.37 GHz and A\u03bd \u22481.0 between 1.37 GHz and 10.45 GHz. Recently, Stroe et al. (2016) showed that the integrated spectral index of the Sausage relic increases from A\u03bd \u22480.9 below 2.5 GHz to A\u03bd \u22481.8 above 2.0 GHz, while that of the Toothbrush relic increases from A\u03bd \u22481.0 below 2.5 GHz to A\u03bd \u22481.4 above 2.0 GHz. Note that A\u03bd \u22480.9 \u22121.0 for the low frequency spectrum of the Sausage and Toothbrush relics is larger than the inferred shock index, \u03b1sh \u22480.6 \u22120.7, while A\u03bd \u22481.4 \u22121.8 for the high frequency part is also larger than \u03b1sh + 0.5 \u22481.1 \u22121.2 (van Weeren et al. 2012; Stroe et al. 2014). In particular, in the case of the Sausage relic, the steepening of J\u03bd is much stronger than expected for aging postshock electrons. This demonstrates that the simple relation of A\u03bd = \u03b1sh + 0.5 should be applied only with caution in interpreting observed radio spectra. 
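A minimal sketch of the test-particle DSA relations quoted above, namely the momentum slope q, the shock spectral index alpha_sh, and the integrated index A_nu = alpha_sh + 0.5, as functions of the sonic Mach number:

```python
def dsa_slope(M_s):
    """Test-particle DSA momentum slope, q = 4 M_s^2 / (M_s^2 - 1)."""
    return 4.0 * M_s**2 / (M_s**2 - 1.0)

def alpha_shock(M_s):
    """Synchrotron spectral index at the shock, alpha_sh = (M_s^2 + 3) / (2 (M_s^2 - 1))."""
    return (M_s**2 + 3.0) / (2.0 * (M_s**2 - 1.0))

for M in (2.0, 3.0, 4.6):
    a = alpha_shock(M)
    print(f"M_s = {M}: q = {dsa_slope(M):.2f}, alpha_sh = {a:.2f}, A_nu = {a + 0.5:.2f}")
```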
The picture of DSA just based on shock compression and radiative cooling should be too simple to be applied to real situations, which could be complicated by additional elements such as the presence of pre-exiting electron population and the variations of shock dynamics and magnetic \ufb01eld ampli\ufb01cation. It was pointed that the Sunyaev-Zeldovich (SZ) e\ufb00ect can induce a steepening at high frequencies. Basu et al. (2015) argued that the e\ufb00ect may reduce the radio \ufb02ux by a factor of two or so at \u03bd \u223c30 GHz for the case of of the Sausage relic. On the other hand, the observations require a reduction of a factor of several. So although the detailed modeling still has to be worked out, the SZ e\ufb00ect alone would not be enough to explain the observed steepening. In an attempt to reproduce the observed spectrum of the Sausage relic, Paper I showed that the integrated spectrum estimated from the DSA simulations of spherical shocks expanding in the cluster outskirts with the \u2018acceleration\u2019 age, tage \u227260 \u221280 Myr, steepens only gradually over 0.1 \u221210 GHz. But the abrupt increase of A\u03bd above \u223c2 GHz detected in the Sausage relic could not be explained, implying that additional physical processes would operate for electrons with \u03b3e \u2273104. In this study, we propose a simple but natural reacceleration scenario, in which the shock passes through a \ufb01nite-size cloud of fossil electrons and runs ahead of the postshock volume of radio-emitting electrons. Since the supply of seed electrons is stopped outside the cloud, the shock no longer e\ufb03ciently accelerate electrons. Then, the integrated radio spectrum of the relic steepens beyond radiative cooling alone, \f\u2013 5 \u2013 and the shock front and the radio relic do not coincide spatially with each other. Another observational feature that supports the re-acceleration scenario is nearly the uniform surface brightness along thin elongated structures observed in some relics, such as the Sausage relic and the Toothbrush relic (van Weeren et al. 2010, 2012). Using numerical simulations of merging cluster, van Weeren et al. (2011a) demonstrated that a merger-driven bow shock would generate the surface brightness pro\ufb01le that peaks in the middle and decreases along the length of the relic away from the middle, if the electrons are injected/accelerated at the shock and occupy a portion of spherical shell. So their study implies that the pure in situ injection picture may not explain the uniform surface brightness pro\ufb01le. On the other hand, van Weeren et al. (2010) and Kang et al. (2012) showed that the radio \ufb02ux density pro\ufb01le with the observed width of \u223c55 kpc for the Sausage relic can be \ufb01tted by a patch of cylindrical shell with radius \u223c1.5 Mpc, which is de\ufb01ned by a length of \u223c2 Mpc and an \u2018extension angle\u2019 of \u03c8 \u224810\u25e6. Although rather arbitrary and peculiar, such a con\ufb01guration could yield uniform radio \ufb02ux density along the length of the radio relic. In Paper I, we proposed that such a geometrical structure can be produced, if a spherical shock propagates into a long cylindrical volume of fossil electrons and the re-accelerated population of fossil electrons dominates over the injected electron population (see also Figure 1 of Kang 2015b). 
Here, we adopt the same geometrical structure for the radio-emitting volume in the calculation of the surface brightness pro\ufb01le, but assuming that the shock has existed and runs ahead of the volume. In the next section, the numerical simulations and the shock models, designed to reproduced the Sausage relic, are described. In Section 3, our results are compared with the observations of the Sausage relic. A brief summary is followed in Section 4. 2. NUMERICAL CALCULATIONS The numerical setup for DSA simulations, the basic features of DSA and synchrotron/inverseCompton (iC) cooling, and the properties of shocks and magnetic \ufb01elds in the ICM were explained in details in Paper I. Here only brief descriptions are given. \f\u2013 6 \u2013 2.1. DSA Simulations for 1D Spherical Shocks The di\ufb00usion-convection equation for the relativistic electron population is solved in the one-dimensional (1D) spherical geometry: \u2202ge \u2202t + u\u2202ge \u2202r = 1 3r2 \u2202(r2u) \u2202r \u0012\u2202ge \u2202y \u22124ge \u0013 + 1 r2 \u2202 \u2202r \u0014 r2D(r, p)\u2202ge \u2202r \u0015 + p \u2202 \u2202y \u0012 b p2ge \u0013 , (1) where ge(r, p, t) = fe(r, p, t)p4 is the pitch-angle-averaged phase space distribution function of electrons and y \u2261ln(p/mec) with the electron mass me and the speed of light c (Skilling 1975). The background \ufb02ow velocity, u(r, t), is obtained by solving the usual gasdynamic conservation equations in the test-particle limit where the nonthermal pressure is assumed to be negligible. The spatial di\ufb00usion coe\ufb03cient for relativistic electrons is assumed to have the following Bohm-like form, D(r, p) = 1.7 \u00d7 1019cm2s\u22121 \u0012 B(r) 1 \u00b5G \u0013\u22121 \u0012 p mec \u0013 . (2) Then, the cuto\ufb00Lorentz factor in the shock-accelerated electron spectrum is given by \u03b3e,eq \u2248109 \u0010 us 3000 km s\u22121 \u0011 \u0012 B1 1 \u00b5G \u00131/2 \u0012 (1 \u00b5G)2 B2 e,1 + B2 e,2 \u00131/2 , (3) where B2 e \u2261B2 + B2 rad is the \u2018e\ufb00ective\u2019 magnetic \ufb01eld strength which accounts for both synchrotron and iC losses (Kang 2015a). Hereafter, the subscripts \u201c1\u201d and \u201c2\u201d are used to indicate the preshock and postshock quantities, respectively. Note that for electrons with \u03b3e,eq \u223c108, the di\ufb00usion length is D(\u03b3e)/us \u223c2 pc and the di\ufb00usion time is D(\u03b3e)/u2 s \u223c 600 yr, if B \u223c1 \u00b5G and us \u223c3 \u00d7 103 km s\u22121. So in e\ufb00ect electrons are accelerated almost instantaneously to the cuto\ufb00energy at the shock front. The radiative cooling coe\ufb03cient, b(p), is calculated from the cooling time scale as trad(\u03b3e) = p b(p) = 9.8 \u00d7 107 yr \u0012 Be 5 \u00b5G \u0013\u22122 \u0010 \u03b3e 104 \u0011\u22121 . (4) The cooling time scale for the radio-emitting electrons with \u03b3e = 103 \u2212104 is about 108 \u2212109 yr for the magnetic \ufb01eld strength of a few \u00b5G. 2.2. Shock Parameters For the initial setup, we adopt a Sedov blast wave propagating into a uniform static medium, which can be speci\ufb01ed by two parameters, typically, the explosion energy and the \f\u2013 7 \u2013 background density (e.g., Ryu & Vishniac 1991). Here, we instead choose the initial shock radius and speed as rs,i = 1.3 Mpc and us,i = Ms,i\u00b7cs,1 (see Table 1). 
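To make the scales in Equations (2)-(3) above concrete, the sketch below evaluates the Bohm-like diffusion coefficient, the corresponding diffusion length and time behind a 3000 km/s shock, and the cutoff Lorentz factor. The field strengths (2.5 muG preshock, ~6 muG postshock) are the values adopted for the models in this paper, while using B_rad = 3.24(1+z)^2 muG with z = 0.192 for the effective fields is an assumption made here for illustration.

```python
import numpy as np

PC_CM = 3.086e18      # cm per parsec
YR_S  = 3.156e7       # seconds per year

def D_bohm(gamma_e, B_muG):
    """Equation (2): Bohm-like diffusion coefficient in cm^2/s (p/m_e c ~ gamma_e)."""
    return 1.7e19 * gamma_e / B_muG

def gamma_eq(u_s_kms, B1_muG, Be1_muG, Be2_muG):
    """Equation (3): cutoff Lorentz factor set by the balance of DSA gains and radiative losses."""
    return 1.0e9 * (u_s_kms / 3000.0) * np.sqrt(B1_muG) * np.sqrt(1.0 / (Be1_muG**2 + Be2_muG**2))

u_s = 3000.0e5                          # 3000 km/s in cm/s
D   = D_bohm(1.0e8, 1.0)                # electrons near the cutoff, gamma_e ~ 1e8, B ~ 1 muG
print("diffusion length D/u_s   ~", D / u_s / PC_CM, "pc")     # ~2 pc, as stated in the text
print("diffusion time   D/u_s^2 ~", D / u_s**2 / YR_S, "yr")   # ~600 yr

# Effective fields B_e = sqrt(B^2 + B_rad^2), with B_rad = 3.24(1+z)^2 muG at z = 0.192 (assumed):
B_rad = 3.24 * 1.192**2
print("gamma_e,eq ~", gamma_eq(3000.0, 2.5, np.hypot(2.5, B_rad), np.hypot(6.0, B_rad)))  # ~1e8
```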
As in Paper I, the shock parameters are chosen to emulate the shock associated with the Sausage relic; the preshock temperature is set to be kT1 = 3.35 keV, corresponding to the preshock sound speed of cs,1 = 923 km s\u22121 (Ogrean et al. 2014). The background gas is assumed to be isothermal, since the shock typically travels only \u223c200 kpc, which is su\ufb03ciently small compared to the size of galaxy clusters, for the duration of our simulations \u227270 Myr. The density of the background gas in cluster outskirts is assumed to decrease as \u03c1up = \u03c10(r/rs,i)\u22122. This corresponds to the so-called beta model for isothamal ICMs, \u03c1(r) \u221d [1 + (r/rc)2]\u22123\u03b2/2 with \u03b2 \u223c2/3, which is consistent with typical X-ray brightness pro\ufb01les of observed clusters (Sarazin 1986). Since we neglect the in situ injection at the shock front (see Section 2.4) and we do not concern about the absolute radio \ufb02ux level in this study (see Section 3), \u03c10 needs not to be speci\ufb01ed. 2.3. Models for Magnetic Fields Although the synchrotron cooling and emission of relativistic electrons in radio relics are determined mainly by postshock magnetic \ufb01elds, little has yet been constrained by observations. Thus, we consider a rather simple model for postshock magnetic \ufb01elds as in Paper I. (1) The magnetic \ufb01eld strength across the shock transition is assumed to increase due to the compression of two perpendicular components, B2(t) = B1 p 1/3 + 2\u03c3(t)2/3, (5) where B1 and B2 are the preshock and postshock magnetic \ufb01eld strengths, respectively, and \u03c3(t) = \u03c12/\u03c11 is the density compression ratio across the shock. (2) For the downstream region (r < rs), the magnetic \ufb01eld strength is assumed to scale with the gas pressure as Bdn(r, t) = B2(t) \u00b7 [Pg(r, t)/Pg,2(t)]1/2, (6) where Pg,2(t) is the gas pressure immediately behind the shock. In e\ufb00ect, the ratio of the magnetic to thermal pressure, that is, the plasma beta, is assumed to be constant downstream of the shock. 2.4. Fossil Electron Cloud In Paper I, we explored a scenario in which a shock in the ICM lights up as a radio relic when it encounters a cloud that contains fossil relativistic electrons, as described in the \f\u2013 8 \u2013 Introduction. In this study, we consider a slightly modi\ufb01ed scenario in which a spherical shock passes across a fossil electron cloud with width Lcloud \u223c100 kpc. Then, the shock separates from and runs ahead of the downstream radio-emitting electrons after the crossing time, tcross \u223cLcloud/us \u224832.6 Myr\u00b7(Lcloud/100 kpc)(us/3000 km s\u22121)\u22121. We assume that the downstream volume with relativistic electrons has the same geometric structure as illustrated in Figure 1 of Kang (2015b), except that the shock is detached from and moves ahead the radio-emitting volume. In other words, the re-acceleration of seed electrons operates only during tcross, that is, between the time of entry into and the time of exit out of the fossil electron cloud, and then, the downstream electrons merely cool radiatively up to tage > tcross, leading to the steepening of the integrated radio spectrum. Hereafter, the \u2018age\u2019, tage, is de\ufb01ned as the time since the shock enters into the cloud. 
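The shock compression ratio, the postshock field of Equation (5), and the cloud-crossing time quoted above follow directly from the model parameters; a minimal sketch:

```python
KPC_CM = 3.086e21     # cm per kpc
MYR_S  = 3.156e13     # seconds per Myr

def compression_ratio(M_s, gamma_g=5.0/3.0):
    """Rankine-Hugoniot density compression ratio for an adiabatic gas."""
    return (gamma_g + 1.0) * M_s**2 / ((gamma_g - 1.0) * M_s**2 + 2.0)

def B2_postshock(B1_muG, sigma):
    """Equation (5): compression of the two perpendicular field components."""
    return B1_muG * (1.0/3.0 + 2.0 * sigma**2 / 3.0) ** 0.5

def t_cross_Myr(L_cloud_kpc, u_s_kms):
    """Cloud-crossing time t_cross = L_cloud / u_s."""
    return (L_cloud_kpc * KPC_CM) / (u_s_kms * 1.0e5) / MYR_S

sigma = compression_ratio(3.0)                         # = 3 for M_s = 3
print("B2 =", B2_postshock(2.5, sigma), "muG")         # ~6.3 muG, as quoted for the fiducial model
print("t_cross =", t_cross_Myr(131.0, 2800.0), "Myr")  # ~46 Myr; the text quotes ~45 Myr for M3.0C1
```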
The fossil electrons are assumed to have a power-law spectrum with exponential cuto\ufb00, ffossil(p) = f0 \u00b7 \u0012 p pinj \u0013\u2212s exp \" \u2212 \u0012 \u03b3e \u03b3e,c \u00132# , (7) with s = 4.0\u22124.2 and \u03b3e,c = 103\u2212104 for the slope and the cuto\ufb00Lorentz factor, respectively. Again the normalization factor, f0 \u00b7ps inj, is arbitrary in our calculations. This could represent fossil electrons that have cooled down to \u223c\u03b3e,c from much higher energies for (0.1\u22121)\u00d7tdyn \u2248 0.1 \u22121 Gyr. As described in Paper I (also in the Introduction), one can think of several possible origins for such fossil electrons in the ICM: (1) old remnants (radio ghosts) of radio jets from AGNs, (2) electron populations that were accelerated by previous shocks and have cooled down \u03b3e,c, and (3) electron populations that were accelerated by turbulence during merger activities. In order to focus on the consequences of the fossil electrons, the in situ injection is suppressed; that is, we assume that the in situ injected and accelerated population is negligible compared to the re-accelerated population of the fossil electrons. We also assume that the nonthermal pressure of the fossil electrons is dynamically insigni\ufb01cant, thus, the sole purpose of adopting the fossil electrons is to supply seed electrons into the DSA process at the shock. Finally, we assume that the background gas has \u03b3g = 5/3. 3. RESULTS OF DSA SIMULATIONS 3.1. Model Parameters We consider several models whose parameters are summarized in Table 1. In all the models, the preshock temperature is \ufb01xed at kT = 3.35 keV, and the preshock magnetic \ufb01eld \f\u2013 9 \u2013 strength at B1 = 2.5 \u00b5G with the immediate postshock magnetic \ufb01eld strength, B2(t) \u2248 6.0 \u22126.3 \u00b5G during 60 Myr. For the \ufb01ducial model, M3.0C1, the initial shock speed is us,i = 2.8 \u00d7 103 km s\u22121, corresponding to the initial shock Mach number Ms,i = 3.0, at the onset of simulation, the width of the fossil electron cloud is Lcloud = 131 kpc, and the fossil electron population is speci\ufb01ed with the power-law slope s = 4.2 and the cuto\ufb00Lorentz factor \u03b3e,c = 104. The shock speed and Mach number decrease by \u223c10 % or so during 60 Myr in this model as well as in other models in the table (see Paper I). In the M3.0C1g and M3.0C1s models, di\ufb00erent populations of fossil electrons are considered with \u03b3e,c = 103 and s = 4.0, respectively. The Mach number dependence is explored with three additional models, M2.5C1, M3.3C1, and M4.5C1. The e\ufb00ects of the cloud size are examined in the M3.0C2, M3.0C3, and M3.0C4 models with Lcloud = 105, 155, and 263 kpc, respectively. In the M3.0C4 model, the shock stays inside the fossil electron cloud until 92 Myr, so the spectral steepening comes from radiative cooling only. Lastly, the SC1pex1 model is the same one considered in Paper I, in which the initial shock Mach number, Ms,i = 2.4, was chosen to match the steep spectral curvature above 1.5 GHz in the observed spectrum of the Sausage relic (Stroe et al. 2014). Note that in this model the shock Mach number decreases to Ms \u22482.1 at 60 My, which is lower than MX \u22482.54\u22123.15 derived from X-ray observations (Akamatsu & Kawahara 2013; Ogrean et al. 
2014) Figure 1 compares the evolution of the synchrotron emissivity, j\u03bd(r, \u03bd), in the M3.0C1, M3.0C1g, and M3.0C3 models (from top to bottom panels) at age tage = 18, 45, 55, and 66 Myr (from left to right panels). In the left-most panels at 18 Myr, the shock at r = rs has penetrated 52 kpc into the cloud in the three models. The shock is about to exit out of the cloud at 45 Myr in the M3.0C1 and M3.0C1g models and at 55 Myr in the M3.0C3 model. In the right-most panels, the edge of the radio emitting region is located behind the shock front (r/rs = 1). The comparison of M3.0C1 and M3.0C1g models indicates that the postshock radio emission does not sensitively depend on the cuto\ufb00in the fossil electron spectrum for the range, \u03b3e,c \u223c103 \u2212104, considered here. The distributions of j\u03bd(r, \u03bd) of the M3.0C1 and M3.0C1g models at 55 Myr look similar to that of M3.0C3 at 66 Myr except that the downstream volume is slightly larger in the M3.0C3 model. Note that the fossil electrons in the cloud start to cool from the onset of the simulations. 3.2. Electron and Radio Spectra Figure 2 compares the evolutions of the electron spectrum and the radio spectrum in the M3.0C1 (upper four panels) and M3.0C1g (lower four panels) models at the same tage as \f\u2013 10 \u2013 in Figure 1. The electron spectrum at the shock, ge(rs, p), and the volume-integrated electron spectrum, Ge(p) = p4Fe(p) = p4 R 4\u03c0r2fe(r, p)dr, the particle slopes, q = \u2212d ln fe(rs, p)/d lnp and Q = \u2212d ln Fe(p)/d ln p, are shown in the left panels. The local synchrotron spectrum at the shock, j\u03bd(rs), and the integrated spectrum, J\u03bd = R j\u03bd(r)dV , the synchrotron spectral indices, \u03b1sh = \u2212d ln j\u03bd(rs)/d ln \u03bd and A\u03bd = \u2212d ln J\u03bd/d ln \u03bd, are shown in the right panels. Here, the plotted quantities, ge, Ge, j\u03bd, and J\u03bd, are in arbitrary units. The shock quantities, ge, q, j\u03bd, and \u03b1sh, are not shown at tage = 55 and 66 Myr, since the shock has exited out of the fossil electron cloud. Note that here the in-situ injection/acceleration was suppressed in order to focus on the re-acceleration of fossil electrons (see Section 2.4). The shape of Ge(p) is signi\ufb01cantly di\ufb00erent in the two models in the momentum range of \u03b3e \u223c103 \u2212104 due to di\ufb00erent exponential cuto\ufb00s in the initial seed populations. Once the re-acceleration of seed electrons has stopped at tcross \u223c45 Myr, the gradual steepening of the integrated electron spectrum progresses, as can be seen in the magenta dashed and dotted lines for both Ge and its slope Q. The behavior of \u03b1sh is related with the electron slope as \u03b1sh \u2248(q \u22123)/2, which is exact only in the case of a single power-law electron spectrum. Before the shock exits the cloud, the integrated spectral index increases from A\u03bd \u2248(s \u22123)/2 \u22480.6 to A\u03bd \u2248\u03b1sh + 0.5 \u22481.3 over a broad range of frequency, \u223c(0.01 \u221230) GHz (see the magenta solid and long-dashed lines). But after tcross, the integrated index A\u03bd becomes much steeper at high frequencies (see the magenta dashed and dotted lines) due to the combined e\ufb00ects of radiative cooling and the lack of re-acceleration at the shock front. This suggests that the shock breaking out of a \ufb01nite-size cloud of fossil electrons may explain the steepening of the integrated spectrum well beyond the radiative cooling alone. 
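A short sketch of the fossil seed spectrum of Equation (7), quoted above, which serves as the initial condition in these runs; the normalization and injection momentum are arbitrary, as noted in the text.

```python
import numpy as np

def f_fossil(p_over_mec, s=4.2, gamma_c=1.0e4, p_inj_over_mec=1.0, f0=1.0):
    """Equation (7): power-law fossil electron spectrum with an exponential cutoff.
    f0 and p_inj are arbitrary; for relativistic electrons gamma_e ~ p/(m_e c)."""
    gamma_e = p_over_mec
    return f0 * (p_over_mec / p_inj_over_mec) ** (-s) * np.exp(-(gamma_e / gamma_c) ** 2)

p = np.logspace(0, 5, 6)       # p/(m_e c) from 1 to 1e5
print(f_fossil(p))              # note the sharp suppression above gamma_e,c = 1e4
```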
The comparison of J\u03bd (or A\u03bd) of the two models shown in the \ufb01gure indicates there is only a marginal di\ufb00erence at low frequencies below 0.5 GHz, although they have di\ufb00erent exponential cuto\ufb00s in the seed populations. 3.3. Surface Brightness Pro\ufb01le As in Paper I, the radio intensity or surface brightness, I\u03bd, is calculated by adopting the geometric volume of radio-emitting electrons in Figure 1 of Kang (2015b): I\u03bd(R) = 2 Z h 0 j\u03bd(r)dh, (8) where R is the distance behind the projected shock edge in the plane of the sky, r is the radial distance from the cluster center, and h is the path length along line of sights. Here, we set the extension angle \u03c8 = 12\u25e6, slightly larger than \u03c8 = 10\u25e6adopted in Paper I, in order \f\u2013 11 \u2013 to reproduce the observed width of the Sausage relic. Note that the radio \ufb02ux density, S\u03bd, can be obtained by convolving I\u03bd with a telescope beam as S\u03bd(R) \u2248I\u03bd(R)\u03c0\u03b81\u03b82(1 + z)\u22123, if the brightness distribution is broad compared to the beam size of \u03b81\u03b82. In Figures 3 and 4, the spatial pro\ufb01les of I\u03bd(R) at the radio frequency of 0.6 GHz and the radio spectral index, \u03b11.4 0.6, estimated between 0.6 GHz and 1.4 GHz, are shown for all the models listed in Table 1. The radio intensity I\u03bd(R) is plotted at three di\ufb00erent tage to illustrate the time evolution, but \u03b11.4 0.6 is plotted only at the middle tage for the clarity of the \ufb01gure. The pro\ufb01le of I\u03bd(R) in the SC1exp1 model at 62 Myr is shown in the green dotted line for comparison. Figure 3 compares the models with di\ufb00erent cloud sizes, Lcloud = 105 \u2212263 kpc, and the models with di\ufb00erent fossil electron populations. Note that tcross = 37, 45, 54, and 92 Myr for M3.0C2, M3.0C1, M3.0C3, and M3.0C4 models, respectively. In the M3.0C4 model, the shock still remains inside the cloud at the last epoch of tage = 66 Myr. So the edge of the relic coincides with the projected shock position (R = 0) and the FWHM of I\u03bd(R) at 0.6 GHz, \u2206lSB, continues to increase with time in this model. In the other models, the shock breaks out of the cloud around tcross and the relic edge lags behind the shock. As a result, \u2206lSB does not increase with time after tcross. The value of \u2206lSB ranges 43 \u221244 kpc in M3.0C2, 45 \u221250 kpc in M3.0C1, 46 \u221257 kpc in M3.0C3, and 46 \u221257 kpc in M3.0C4. In the comparison model SC1exp1, \u2206lSB \u224851 kpc. In most of the models with Lcloud \u2265131 kpc, \u2206lSB is in a rough agreement with the observed width of the Sausage relic at 0.6 GHz, reported in van Weeren et al. (2010). The distance between the shock front and the relic edge after tcross is lshift \u224810 kpc \u0010 u2 103 km s\u22121 \u0011 \u0012tage \u2212tcross 10 Myr \u0013 , (9) where u2 = cs,1Ms,i/\u03c3 is the postshock advection speed. In the models with Lcloud = 131 kpc, lshift \u224810 \u221220 kpc for tage = 55 \u221266 Myr. So it may be di\ufb03cult to detect such spatial shift between the shock location in X-ray observations and the relic edge in radio observations with currently available facilities. The shift is, for instance, much smaller than the misalignment of \u223c200 kpc between the X-ray shock location and the radio relic edge detected in the Toothbrush relic (Ogrean et al. 2013). We note that van Weeren et al. 
(2016) pointed that the slope of the X-ray brightness pro\ufb01le changes at the expected location of the shock associated with the Toothbrush relic. This should imply that the evidence for a spatial o\ufb00set between the shock and this relic is not compelling anyway. At tage = 55 Myr, the shock of initially Ms,i = 3.0 weakens to Ms \u22482.7, so the DSA radio index is expected to be \u03b1shock \u22480.8 at the shock location. In Figure 3, we see that \u03b11.4 0.6 \f\u2013 12 \u2013 at tage = 55 Myr (magenta lines) has \u223c0.75\u22120.95 at the edge of the relic and \u223c1.5\u22121.8 at about 60 kpc downstream of the relic edge, increasing behind the shock due to the electron cooling. Especially in the models with Lcloud = 103 and 131 kpc, \u03b11.4 0.6 at the relic edge is larger than \u03b1shock, since the shock is ahead of the relic edge. This suggests the possibility that the shock Mach number estimated by X-ray observations could be slightly larger than that estimated from the radio spectral index, which is opposite to the tendency of observational data. In the case of the Toothbrush relic, for instance, a \u2018radio Mach number\u2019 was estimated to be Mradio \u22483.3\u22124.6 from \u03b1sh \u22480.6\u22120.7 (van Weeren et al. 2012), while an \u2018X-ray Mach number\u2019, MX \u22722, was derived from the density/temperature jump (Ogrean et al. 2013). The discrepancy in this relic might be understood by the projection e\ufb00ect of multiple shock surfaces, as suggested in Hong et al. (2015). Figure 4 compares the models with di\ufb00erent Ms,i at three di\ufb00erent tage speci\ufb01ed in each panel. The value of \u2206lSB ranges 44 \u221253 kpc in M2.5C1, 45 \u221250 kpc in M3.0C1, 40 \u221247 kpc in M3.3C1, and \u223c44 kpc in M4.5C1. Since the preshock sound speed and the magnetic \ufb01eld strength are the same in all the models considered here, the postshock advection speed, u2 = cs,1Ms,i/\u03c3, and the postshock magnetic \ufb01eld strength, B2 = B1 p 1/3 + 2\u03c32/3, are dependent on Ms,i and \u03c3. The widths are slightly smaller in the M3.3C1 and M4.5C1 models with higher Mach numbers. 3.4. Volume-Integrated Spectra As discussed in Paper I (see the Introduction), the volume-integrated synchrotron spectrum, J\u03bd, is expected to have a power-law form, only for a steady planar shock in uniform background medium, and only if tage is much longer than \u223c100 Myr. Otherwise, the spectrum steepens gradually around the break frequency of J\u03bd given by \u03bdbr \u22480.63GHz \u0012 tage 100Myr \u0013\u22122 \u0012 (5 \u00b5G)2 B2 2 + B2 rad \u00132 \u0012 B2 5 \u00b5G \u0013 . (10) The shock younger than 100 Myr has \u03bdbr \u22730.6 GHz, with the spectral curvature changing over \u223c(0.1 \u221210)\u03bdbr, the typical frequency range of radio observations (Paper I). Thus, the integrated radio spectra of observed radio relics are likely to be curved instead of being single power-law. Figures 5 and 6 show J\u03bd for the same models at the same tage as in Figures 3 and 4, respectively. Again, J\u03bd is in arbitrary units. The \ufb01lled magenta circles represent the data points for the integrated \ufb02ux of the Sausage relic, which were taken from Table 3 of Stroe et al. (2016) and re-scaled to \ufb01t by eye the spectra of the long-dashed lines, except \f\u2013 13 \u2013 in the M3.0C3 and M3.0C4 models. For the M3.0C3 model with Lcl = 155 kpc, the data points were \ufb01tted to the spectrum of the dashed line at 66 Myr. 
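The shock-relic offset of Equation (9) and the break frequency of Equation (10) can be evaluated directly; in the sketch below, using z = 0.192 for CIZA J2242.8+5301 (quoted elsewhere in this document) in the CMB-equivalent field B_rad is an assumption, and the t_age and B_2 values are representative of the fiducial model.

```python
def postshock_advection_speed(c_s1_kms, M_si, sigma):
    """u_2 = c_s,1 * M_s,i / sigma, as used in Equation (9)."""
    return c_s1_kms * M_si / sigma

def l_shift_kpc(u2_kms, dt_Myr):
    """Equation (9): offset between the shock front and the relic edge, dt = t_age - t_cross."""
    return 10.0 * (u2_kms / 1.0e3) * (dt_Myr / 10.0)

def nu_break_GHz(t_age_Myr, B2_muG, z=0.192):
    """Equation (10): break frequency of the volume-integrated spectrum.
    B_rad = 3.24 (1+z)^2 muG; z = 0.192 is adopted here as an assumption."""
    B_rad = 3.24 * (1.0 + z) ** 2
    return (0.63 * (t_age_Myr / 100.0) ** -2 *
            (25.0 / (B2_muG ** 2 + B_rad ** 2)) ** 2 * (B2_muG / 5.0))

u2 = postshock_advection_speed(923.0, 3.0, 3.0)   # ~923 km/s for the M3.0C1 parameters
print(l_shift_kpc(u2, 10.0))                       # ~9 kpc offset 10 Myr after break-out
print(nu_break_GHz(60.0, 6.0))                     # ~0.4 GHz; the curvature spans roughly (0.1-10) nu_br
```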
On the other hand, in the M3.0C4 model with Lcl = 263 kpc, the shock is still inside the fossil electron cloud even at 66 Myr, so the curvature of J\u03bd at high frequencies, which is induced only by the radiative losses, is not steep enough to \ufb01t the observed \ufb02ux above 16 GHz. Among the models shown in Figure 5, the \ufb01ducial model M3.0C1 as well as the M3.0C1s model with slightly \ufb02atter fossil electrons best reproduce the overall spectrum as well as the steep curvature at high frequencies. Other models do not reproduce the observed spectrum as good as the two models. In the comparison model SC1pex1 of Paper I (green dotted line), the shock stays inside the fossil electron cloud and so the spectral curvature is due to only radiative cooling. Although the initial Mach number, Ms,i = 2.4, was chosen for this model to explain the steep spectrum above 1.5 GHz, yet the abrupt increase of the curvature near 1 GHz could not be reproduced. Figure 6 shows that the observed spectrum of the Sausage relic can be \ufb01tted reasonable well by all four models with Lcl = 131 kpc and Ms,i = 2.5 \u22124.5 at about 10 Myr after the shock has exited out of the fossil cloud (long-dashed lines). They generate much better \ufb01t compared to the SC1exp1 model. In M3.3C1 model, for instance, the shock Mach number decrease to Ms = 3.0 at tage \u224853 Myr and the integrated spectrum at that epoch is in a very good agreement with the observed spectrum. But considering that the width of I\u03bd(R) of the M3.0C1 model agrees a bit better with the observations (see Figure 4), we designate M3.0C1 as the \ufb01ducial model here. These simulation results demonstrate that the observed spectrum of the Sausage relic with steep curvature could be explained naturally by the shock passage over a \ufb01nite-size cloud of fossil electrons without invoking any additional physical processes other than the synchrotron/iC coolings. 4. SUMMARY In Kang & Ryu (2015) (Paper I), we proposed a model for radio gischt relics in which a spherical shock sweeps through an elongated cloud of the ICM thermal gas with an additional population of fossil relativistic electrons. At the shock, the fossil electrons are re-accelerated to radio-emitting energies (\u03b3e \u223c104) and beyond, producing a di\ufb00use radio source. We argued in Paper I that such a model may explain the following characteristics of giant radio relics: (1) the low occurrence of radio relics compared to the expected frequency of shocks in merging clusters, (2) the uniform surface brightness along the length of arc-like relics, and \f\u2013 14 \u2013 (3) the spectral curvature in the integrated radio spectrum that runs over \u223c(0.1 \u221210)\u03bdbr. But we were not able to reproduce the abrupt increase of the integrated spectral index, A\u03bd (J\u03bd \u221d\u03bd\u2212A\u03bd), above \u223cGHz, detected in the observed spectra of some relics including the Sausage relic (e.g. Stroe et al. 2014, 2016; Trasatti et al. 2015) In an e\ufb00ort to explain steep curved radio spectra, in this paper, we explore the possibility that the shock breaks out of a \ufb01nite-size cloud of fossil electrons, leading to the volumeintegrated electron spectrum much steeper than expected from the simple radiative aging alone. To that end, we performed time-dependent, DSA simulations of one-dimensional, spherical shocks with the parameters relevant for the Sausage relic, which sweep through fossil electron clouds of 105 \u2212263 kpc in width. 
In the \ufb01ducial model, M3.0C1, the shock has initially us,i \u22482.8 \u00d7 103 km s\u22121 (Ms,i \u22483.0), breaks out of the cloud of 131 kpc after the crossing time of tcross \u224845 Myr, and decelerates to us \u22482.5 \u00d7 103 km s\u22121 (Ms \u22482.7) at tage \u224855 Myr. As in Paper I, we assume that the fossil electron population has a powerlaw spectrum with exponential cuto\ufb00. We also consider various models with di\ufb00erent fossil electron populations or shock Mach numbers, as summarized in Table 1. We then calculate the radio surface brightness pro\ufb01le, I\u03bd(R), and the volume-integrated spectrum, J\u03bd, adopting the downstream volumes with the same geometrical structure assumed in Paper I. We \ufb01nd that a shock of Ms \u22482.7\u22123.0 and us \u22482.5\u22122.8\u00d7103 km s\u22121 (e.g., the M3.0C1 and M3.3C1 models), which has exited the fossil electron cloud of 131 kpc about 10 Myr ago, can leave radio-emitting electrons behind, which produce both I\u03bd(R) and J\u03bd consistent with the observations reported by van Weeren et al. (2010) and Stroe et al. (2016). Although the detailed shape of J\u03bd depends on the spectrum of fossil electrons (e.g., the slope, s, and the cuto\ufb00energy, \u03b3e,c) as well as the shock Mach number and the magnetic \ufb01eld strength, the ensuing radio spectrum may explain the steepening of J\u03bd above \u223c2 GHz, seen in the Sausage relic, by adjusting the time since the break-out, tage \u2212tcross. As emphasized in Paper I, the single power-law radio spectrum is valid only for a steady planar shock with age much longer than 100 Myr (see equation (10)). For a spherically expanding shock at younger age, the integrated radio spectrum should be curved in the range of \u223c(0.1\u221210) \u03bdbr. In this study, we further demonstrate that the spectral index could be much bigger than A\u03bd = \u03b1shock + 0.5 at high frequencies, if the shock sweeps out of the fossil electron cloud of \ufb01nite size. We conjecture that the typical value of tage \u2212tcross for observed giant radio relics would be of order of 10 Myr, because, if much longer than that, the radio \ufb02ux density decreases quickly due to fast cooling after the shock breaks out of the cloud (see Figures 3 and 4). In such cases, the spatial o\ufb00set between the projected shock front and the edge of the radio relic is of order of 10 kpc, which is too small to be resolved with currently available observation \f\u2013 15 \u2013 facilities. In addition, with the o\ufb00set, the shock Mach number, Mradio, derived from the local spectral index, \u03b1\u03bd, observed at the relic edge is expected to be slightly lower than the actual shock Mach number, for instance, the Mach number, MX, if the shock can be detected in X-ray observations. This is contrary to the observed trend that in some radio relics MX \u2272Mradio (e.g. Akamatsu & Kawahara 2013). Such observations of MX \u2272Mradio, hence, should be understood by other reasons, for instance, the projection e\ufb00ect of multiple shock surfaces along line of sights in X-ray and radio observations (e.g. Hong et al. 2015). We thank the referee for constructive suggestions. We also thank R. J. van Weeren for comments on the manuscript. HK was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2014R1A1A2057940). DR was supported by the National Research Foundation of Korea through grant NRF-2014M1A7A1A03029872." 
+ }, + { + "url": "http://arxiv.org/abs/1505.04256v2", + "title": "Curved Radio Spectra of Weak Cluster Shocks", + "abstract": "In order to understand certain observed features of arc-like giant radio\nrelics such as the rareness, uniform surface brightness, and curved integrated\nspectra, we explore a diffusive shock acceleration (DSA) model for radio relics\nin which a spherical shock impinges on a magnetized cloud containing fossil\nrelativistic electrons. Toward this end, we perform DSA simulations of\nspherical shocks with the parameters relevant for the Sausage radio relic in\ncluster CIZA J2242.8+5301, and calculate the ensuing radio synchrotron emission\nfrom re-accelerated electrons. Three types of fossil electron populations are\nconsidered: a delta-function like population with the shock injection momentum,\na power-law distribution, and a power-law with an exponential cutoff. The\nsurface brightness profile of radio-emitting postshock region and the\nvolume-integrated radio spectrum are calculated and compared with observations.\nWe find that the observed width of the Sausage relic can be explained\nreasonably well by shocks with speed $u_s \\sim 3\\times 10^3 \\kms$ and sonic\nMach number $M_s \\sim 3$. These shocks produce curved radio spectra that\nsteepen gradually over $(0.1-10) \\nu_{\\rm br}$ with break frequency $ \\nu_{\\rm\nbr}\\sim 1$ GHz, if the duration of electron acceleration is $\\sim 60 - 80$ Myr.\nHowever, the abrupt increase of spectral index above $\\sim 1.5$ GHz observed in\nthe Sausage relic seems to indicate that additional physical processes, other\nthan radiative losses, operate for electrons with $\\gamma_e \\gtrsim 10^4$.", + "authors": "Hyesung Kang, Dongsu Ryu", + "published": "2015-05-16", + "updated": "2015-07-24", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Radio relics are di\ufb00use radio sources found in the outskirts of galaxy clusters and they are thought to trace synchrotron-emitting cosmic-ray (CR) electrons accelerated via di\ufb00usive shock acceleration (DSA) at cluster shocks (e.g. Ensslin et al. 1998; Bagchi et al. 2006; van Weeren et al. 2010; Br\u00a8 uggen et al. 2012). So far several dozens of clusters have been observed to have radio relics with a variety of morphologies and most of them are considered to be associated with cluster merger activities (see for reviews, e.g., Feretti et al. 2012; Brunetti & Jones 2014). For instance, double radio relics, such as the ones in ZwCl0008.8+5215, are thought to reveal the bow shocks induced by a binary major merger (van Weeren et al. 2011b; de Gasperin et al. 2014). On the other hand, recently it was shown that shocks induced by the infall of the warm-hot intergalactic medium (WHIM) along adjacent \ufb01laments into the hot intracluster medium (ICM) can e\ufb03ciently accelerate CR electrons, and so they could be responsible for some radio relics in the cluster outskirts (see, e.g., Hong et al. 2014). The radio relic 1253+275 in Coma cluster observed in both radio (Brown & Rudnick 2011) and X-ray (Ogrean & Br\u00a8 uggen 2013) provides an example of such infall shocks. The so-called Sausage relic in CIZA J2242.8+5301 (z = 0.192) contains a thin arc-like structure of \u223c55 kpc width and \u223c2 Mpc length, which could be represented by a portion of spherical shell with radius \u223c1.5 Mpc (van Weeren et al. 2010). 
Unique features of this giant radio relic include the nearly uniform surface brightness along the length of the relic and the strong polarization of up to 50 \u221260% with magnetic \ufb01eld vectors aligned with the relic (van Weeren et al. 2010). A temperature jump across the relic that corresponds to a Ms \u22482.54 \u22123.15 shock has been detected in X-ray observations (Akamatsu & Kawahara 2013; Ogrean et al. 2014). This was smaller than Ms \u22484.6 estimated from the above radio observation. Several examples of Mpc-scale radio relics include the Toothbrush relic in 1RXS J0603.3 with a peculiar linear morphology (van Weeren et al. 2012) and the relics in A3667 (Rottgering et al. 1997) and A3376 (Bagchi et al. 2006). The shock Mach numbers of radio relics estimated based on X-ray observation are often lower than those inferred from the radio spectral index using the DSA model, for instance, in the Toothbrush relic (Ogrean et al. 2013) and in the radio relic in A2256 (Trasatti et al. 2015). Although such giant radio relics are quite rare, the fraction of X-ray luminous clusters hosting some radio relics is estimated to be \u223c10 % or so (Feretti et al. 2012). Through a number of studies using cosmological hydrodynamical simulations, it has been demonstrated that during the process of hierarchical structure formation, abundant shocks are produced in the large-scale structure of the universe, especially in clusters (e.g., Ryu et al. 2003; Pfrommer et al. 2006; Skillman et al. 2008; Hoeft et al. 2008; Vazza et al. 2009; Vazza et al 2011; Hong et al. 2014). Considering that the characteristic time-scale of \f\u2013 3 \u2013 cluster dynamics including mergers is tdyn \u223c1 Gyr, typical cluster shocks are expected to last for about the same period. Yet, the number of observed radio relics, which is thought to trace such shocks, is still limited. So it is plausible to conjecture that cluster shocks may \u2018turn on\u2019 to emit synchrotron radiation only for a fraction of their lifetime. One feasible scenario is that a cluster shock lights up in radio when it sweeps up a fossil cloud, i.e., a magnetized ICM gas with fossil relativistic electrons left over from either a radio jet from AGN or a previous episode of shock/turbulence acceleration (see the discussions in Section 2.5 and Figure 1). Pre-exiting seed electrons and/or enhanced magnetic \ufb01elds are the requisite conditions for possible lighting-up of relic shocks. In particular, the elongated shape with uniform surface brightness and high polarization fraction of radio emission in the Sausage relic, may be explained, if a Mpc-scale thermal gas cloud, containing fossil relativistic electrons and permeated with regular magnetic \ufb01eld of a few to several \u00b5G, is adopted. A more detailed description will be given later in Section 2.5. In this picture, fossil electrons are expected to be re-accelerated for less than cloud-crossing time (< Rcloud/us \u223c100 Myr), which is much shorter than the cluster dynamical time-scale. In addition, only occasional encounters with fossil clouds combined with the short acceleration duration could alleviate the strong constraints on the DSA theory based on non-detection of \u03b3-ray emission from clusters by Fermi-LAT (Ackermann et al. 2014; Vazza et al. 2015). A similar idea has been brought up by Shimwell et al. (2015), who reported the discovery of a Mpc-scale, elongated relic in the Bullet cluster 1E 0657-55.8. 
They also proposed that the arc-like shape of uniform surface brightness in some radio relics may trace the underlying regions of pre-existing, seed electrons remaining from old radio lobes. On the other hand, Ensslin & Gopal-Krishna (2001) suggested that radio relics could be explained by revival of fossil radio plasma by compression due to a passage of a shock, rather than DSA. In a follow-up study, Ensslin & Br\u00a8 uggen (2002) showed using MHD simulations that a cocoon of hot radio plasma swept by a shock turns into a \ufb01lamentary or toroidal structure. Although this scenario remains to be a viable explanation for some radio relics, it may not account for the uniform arc-like morphology of the Sausage relic. It is now well established, through observations of radio halos/relics and Faraday rotation measures of background radio sources, that the ICM is permeated with \u00b5G-level magnetic \ufb01elds (e.g. Bonafede et al. 2011; Feretti et al. 2012). The observed radial pro\ufb01le of magnetic \ufb01eld strength tends to peak at the center with a few \u00b5G and decrease outward to \u223c0.1\u00b5G in the cluster outskirts (Bonafede et al. 2010). A variety of physical processes that could generate and amplify magnetic \ufb01elds in the ICM have been suggested: primordial processes, plasma processes at the recombination epoch, and Biermann battery mechanism, combined with turbulence dynamo, in addition to galactic winds and AGN jets (e.g. \f\u2013 4 \u2013 Ryu et al. 2008; Dolag et al. 2008; Br\u00a8 uggen et al. 2012; Ryu et al. 2012; Cho 2014). Given the fact that \u223c5\u00b5G \ufb01elds are required to explain the amplitude and width of the observed radio \ufb02ux pro\ufb01le of the Sausage relic (e.g. van Weeren et al. 2010; Kang et al. 2012), the presence of a cloud with enhanced magnetic \ufb01elds of several \u00b5G might be preferred to the background \ufb01elds of \u223c0.1\u00b5G in the cluster periphery. Alternatively, Iapichino & Br\u00a8 uggen (2012) showed that the postshock magnetic \ufb01elds can be ampli\ufb01ed to \u223c5 \u22127\u00b5G level leading to high degrees of polarization, if there exists dynamically signi\ufb01cant turbulence in the upstream region of a curved shock. Although it is well accepted that magnetic \ufb01elds can be ampli\ufb01ed via various plasma instabilities at collisionless shocks, the dependence on the shock parameters such as the shock sonic and Alfv\u00b4 enic Mach numbers, and the obliquity of background magnetic \ufb01elds remains to be further investigated (see Schure et al. 2012). For example, the acceleration of protons and ensuing magnetic \ufb01eld ampli\ufb01cation via resonant and non-resonant streaming instabilities are found to be ine\ufb00ective at perpendicular shocks (Bell 1978, 2004; Caprioli & Spitkovsky 2014). In several studies using cosmological hydrodynamical simulations, synthetic radio maps of simulated clusters were constructed by identifying shocks and adopting models for DSA of electrons and magnetic \ufb01eld ampli\ufb01cation (Nuza et al. 2012; Vazza et al. 2012a; Skillman et al. 2013; Hong et al. 2015). In particular, Vazza et al. (2012b) demonstrated, by generating mock radio maps of simulated cluster samples, that radio emission tends to increase toward the cluster periphery and peak around 0.2 \u22120.5Rvir (where Rvir is the virial radius), mainly because the kinetic energy dissipated at shocks peaks around 0.2Rvir. As a result, radio relics are rarely found in the cluster central regions. 
Re-acceleration of fossil relativistic electrons by cosmological shocks during the large scale structure formation has been explored by Pinzke et al. (2013). The radio emitting shocks in these studies look like segments of spherical shocks, moving from the cluster core region into the periphery. We presume that they are generated mostly as a consequence of major mergers or energetic infalls of the WHIM along adjacent \ufb01laments. So it seems necessary to study spherical shocks propagating through the cluster periphery, rather than interpreting the radio spectra by DSA at steady planar shocks, in order to better understand the nature of radio relics (Kang 2015a,b). According to the DSA theory, in the case of a steady planar shock with constant postshock magnetic \ufb01eld, the electron distribution function at the shock location becomes a power-law of fe(p, rs) \u221dp\u2212q, and so the synchrotron emissivity from those electrons becomes a power-law of j\u03bd(rs) \u221d\u03bd\u2212\u03b1inj. The power-low slopes depend only on the shock sonic Mach number, Ms, and are given as q = 4M2 s /(M2 s \u22121) and \u03b1inj = (M2 s + 3)/2(M2 s \u22121) for the gasdynamic shock with the adiabatic index \u03b3g = 5/3 (Drury 1983; Blandford & Eichler 1987; Ensslin et al. 1998). Here we refer \u03b1inj as the injection spectral index for a steady planar shock with constant postshock magnetic \ufb01eld. Then, the volume-integrated synchrotron \f\u2013 5 \u2013 spectrum downstream of the shock also becomes a simple power-law of J\u03bd = R j\u03bd(r)dV \u221d \u03bd\u2212A\u03bd with the spectral index A\u03bd = \u03b1inj + 0.5 above the break frequency, \u03bdbr, since electrons cool via synchrotron and inverse-Compton (IC) losses behind the shock (e.g., Ensslin et al. 1998; Kang 2011).1 Such predictions of the DSA theory have been applied to explain the observed properties of radio relics, e.g., the relation between the injection spectral index and the volume-integrated spectral index, and the gradual steepening of spatially resolved spectrum downstream of the shock. Kang et al. (2012) performed time-dependent, DSA simulations of CR electrons for steady planar shocks with Ms = 2\u22124.5 and constant postshock magnetic \ufb01elds. Several models with thermal leakage injection or pre-existing electrons were considered in order to reproduce the surface brightness and spectral aging pro\ufb01les of radio relics in CIZA J2242.8+5301 and ZwCl0008.8+5215. Adopting the same geometrical structure of radio-emitting volume as described in Section 2.5, they showed that the synchrotron emission from shock accelerated electrons could explain the observed pro\ufb01les of the radio \ufb02ux, S\u03bd(R), of the Sausage relic, and the observed pro\ufb01les of both S\u03bd(R) and \u03b1\u03bd(R) of the relic in ZwCl0008.8+5215. Here R is the distance behind the projected shock edge in the plane of the sky. In the case of spherically expanding shocks with varying speeds and/or nonuniform magnetic \ufb01eld pro\ufb01les, on the other hand, the electron spectrum and the ensuing radio spectrum could deviate from those simple power-law forms, as shown in Kang (2015a,b). Then even the injection slope should vary with the frequency, i.e., \u03b1inj(\u03bd). Here we follow the evolution of a spherical shock expanding outward in the cluster outskirts with a decreasing density pro\ufb01le, which may lead to a curvature in both the injected spectrum and the volumeintegrated spectrum. 
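As a quick numerical check of the test-particle DSA relations quoted above, the short Python sketch below (ours, for illustration only) evaluates q, α_inj and A_ν = α_inj + 0.5 for Mach numbers that appear in this discussion:

```python
# Test-particle DSA slopes for a gas-dynamic shock (gamma_g = 5/3), as given
# in the text: q = 4*Ms^2/(Ms^2 - 1), alpha_inj = (Ms^2 + 3)/(2*(Ms^2 - 1)),
# and A_nu = alpha_inj + 0.5 for the integrated spectrum of a steady planar
# shock with radiative cooling.

def dsa_slopes(Ms):
    q = 4.0 * Ms**2 / (Ms**2 - 1.0)                      # f_e(p) ~ p^-q
    alpha_inj = (Ms**2 + 3.0) / (2.0 * (Ms**2 - 1.0))    # = (q - 3)/2
    return q, alpha_inj, alpha_inj + 0.5

for Ms in (2.7, 3.0, 4.6):
    q, a_inj, A_nu = dsa_slopes(Ms)
    print(f"Ms = {Ms}: q = {q:.2f}, alpha_inj = {a_inj:.2f}, A_nu = {A_nu:.2f}")

# Ms = 3.0 gives q ~ 4.5 and alpha_inj ~ 0.75 (A_nu ~ 1.25), while Ms = 4.6
# gives alpha_inj ~ 0.6, matching the numbers quoted for the Sausage relic.
```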
Moreover, if the shock is relatively young or the electron acceleration duration is short (\u2272100 Myr), then the break frequency falls in \u03bdbr \u223c1 GHz and the volume-integrated spectrum of a radio relic would steepen gradually with the spectral index from \u03b1inj to \u03b1inj + 0.5 over (0.1 \u221210)\u03bdbr (e.g. Kang 2015b). In the case of the Sausage relic, van Weeren et al. (2010) and Stroe et al. (2013) originally reported observations of \u03b1inj \u22480.6 and \u03b1 = A\u03bd \u22481.06, which imply a shock of Ms \u22484.6. Stroe et al. (2014b), however, found a spectral steepening of the volumeintegrated spectrum at 16 GHz, which would be inconsistent with the DSA model for a steady planar shock. Moreover, Stroe et al. (2014a), by performing a spatially-resolved spectral \ufb01tting, revised the injection index to a steeper value, \u03b1inj \u22480.77. Then, the corresponding 1Note that radio observers commonly use \u2018\u03b1\u2019 as the spectral index of the \ufb02ux density, S\u03bd \u221d\u03bd\u2212\u03b1 for unresolved sources, so in that case \u03b1 is the same as A\u03bd. Here \u03b1\u03bd(r) is de\ufb01ned as the spectral index of the local emissivity, j\u03bd(r). See Equations (9)-(10). \f\u2013 6 \u2013 shock Mach number is reduced to Ms \u22482.9. They also suggested that the spectral age, calculated under the assumption of freely-aging electrons downstream of a steady planar shock, might not be compatible with the shock speed estimated from X-ray and radio observations. Also Trasatti et al. (2015) reported that for the relic in A2256, the volume-integrated index steepens from A\u03bd \u22480.85 for \u03bd = 351 \u22121369 MHz to A\u03bd \u22481.0 for \u03bd = 1.37 \u221210.45 GHz, which was interpreted as a broken power-law. Discoveries of radio relic shocks with Ms \u223c2 \u22123 in recent years have brought up the need for more accurate understanding of injections of protons and electrons at weak collisionless shocks, especially at high plasma beta (\u03b2p \u223c50\u2212100) ICM plasmas (e.g. Kang et al. 2014). Here \u03b2p is the ratio of the gas to magnetic \ufb01eld pressure. Injection of electrons into the Fermi 1st-order process has been one of long-standing problems in the DSA theory for astrophysical shocks, because it involves complex plasma kinetic processes that can be studied only through full Particle-in-Cell (PIC) simulations (e.g. Amano & Hoshino 2009; Riquelme & Spitkovsky 2011). It is thought that electrons must be pre-accelerated from their thermal momentum to several times the postshock thermal proton momentum to take part in the DSA process, and electron injection is much less e\ufb03cient than proton injection due to smaller rigidity of electrons. Several recent studies using PIC simulations have shown that some of incoming protons and electrons gain energies via shock drift acceleration (SDA) while drifting along the shock surface, and then the particles are re\ufb02ected toward the upstream region. Those re\ufb02ected particles can be scattered back to the shock by plasma waves excited in the foreshock region, and then undergo multiple cycles of SDA, resulting in power-law suprathermal populations (e.g., Guo et al. 2014a,b; Park et al. 2015). Such \u2018self pre-acceleration\u2019 of thermal electrons in the foreshock region could be su\ufb03cient enough even at weak shocks in high beta ICM plasmas to explain the observed \ufb02ux level of radio relics. 
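The Mach-number revisions quoted above follow from inverting the test-particle relation for α_inj; a minimal sketch (assuming the simple steady planar-shock relation, which the text goes on to qualify for evolving spherical shocks):

```python
import math

def mach_from_alpha_inj(alpha_inj):
    """Invert alpha_inj = (Ms^2 + 3)/(2*(Ms^2 - 1)) for Ms (test-particle DSA)."""
    return math.sqrt((2.0 * alpha_inj + 3.0) / (2.0 * alpha_inj - 1.0))

for a in (0.6, 0.77):
    print(f"alpha_inj = {a}: Ms ~ {mach_from_alpha_inj(a):.1f}")
# alpha_inj = 0.6  -> Ms ~ 4.6 (original estimate for the Sausage relic)
# alpha_inj = 0.77 -> Ms ~ 2.9 (revised estimate of Stroe et al. 2014a)
```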
In these PIC simulations, however, subsequent acceleration of suprathermal electrons into full DSA regime has not been explored yet, because extreme computational resources are required to follow the simulations for a large dynamic range of particle energy. The main reasons that we implement the fossil electron distribution, instead of the shock injection only case, are (1) the relative scarcity of radio relics compared to the abundance of shocks expected to form in the ICM, (2) the peculiar uniformity of the surface brightness of the Sausage relic, and (3) curved integrated spectra often found in some radio relics, implying the acceleration duration \u2272100 Myr, much shorter than the cluster dynamical time. In this paper, we consider a DSA model for radio relics; a spherical shock moves into a magnetized gas cloud containing fossil relativistic electrons, while propagating through a density gradient in the cluster outskirts. Speci\ufb01cally, we perform time-dependent DSA simulations for several spherical shock models with the parameters relevant for the Sausage \f\u2013 7 \u2013 relic. We then calculate the surface brightness pro\ufb01le, I\u03bd, and the volume-integrated radio spectrum, J\u03bd, by adopting a speci\ufb01c geometrical structure of shock surface, and compare them with the observational data of the Sausage relic. In Section 2, the DSA simulations and the model parameters are described. The comparison of our results with observations is discussed in Section 3. A brief summary is given in Section 4. 2. DSA SIMULATIONS OF CR ELECTRONS 2.1. 1D Spherical CRASH Code We follow the evolution of the CR electron population by solving the following di\ufb00usionconvection equation in the one-dimensional (1D) spherical geometry: \u2202ge \u2202t + u\u2202ge \u2202r = 1 3r2 \u2202(r2u) \u2202r \u0012\u2202ge \u2202y \u22124ge \u0013 + 1 r2 \u2202 \u2202r \u0014 r2D(r, p)\u2202ge \u2202r \u0015 + p \u2202 \u2202y \u0012 b p2ge \u0013 , (1) where ge(r, p, t) = fe(r, p, t)p4 is the pitch-angle-averaged phase space distribution function of electrons, u(r, t) is the \ufb02ow velocity and y \u2261ln(p/mec) with the electron mass me and the speed of light c (Skilling 1975). The spatial di\ufb00usion coe\ufb03cient, D(r, p), is assumed to have a Bohm-like dependence on the rigidity, D(r, p) = 1.7 \u00d7 1019cm2s\u22121 \u0012 B(r) 1 \u00b5G \u0013\u22121 \u0012 p mec \u0013 . (2) The cooling coe\ufb03cient b(p) = \u2212dp/dt accounts for radiative cooling, and the cooling time scale is de\ufb01ned as trad(\u03b3e) = p b(p) = 9.8 \u00d7 107 yr \u0012 Be 5 \u00b5G \u0013\u22122 \u0010 \u03b3e 104 \u0011\u22121 , (3) where \u03b3e is the Lorentz factor of electrons. Here the \u2018e\ufb00ective\u2019 magnetic \ufb01eld strength, B2 e \u2261B2 + B2 rad with Brad = 3.24 \u00b5G(1 + z)2, takes account for the IC loss due to the cosmic background radiation as well as synchrotron loss. The redshift of the cluster CIZA J2242.8+5301 is z = 0.192. Assuming that the test-particle limit is applied at weak cluster shocks with Ms \u2272 several (see Table 1), the usual gasdynamic conservation equations are solved to follow the background \ufb02ow speed, u(r, t), using the 1D spherical version of the CRASH (Cosmic-Ray Amr SHock) code (Kang & Jones 2006). The structure and evolution of u(r, t) are fed into \f\u2013 8 \u2013 Equation (1), while the gas pressure, Pg(r, t), is used in modeling the postshock magnetic \ufb01eld pro\ufb01le (see Section 2.3). 
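For reference, equations (2) and (3) above can be written out and evaluated directly; the field strength and Lorentz factors chosen below are illustrative (an SA1-like postshock field), not values prescribed by the paper:

```python
# Equations (2)-(3) of the text in executable form: Bohm-like diffusion
# coefficient and the synchrotron + inverse-Compton cooling time.

def diffusion_coeff(p_over_mec, B_muG):
    """Bohm-like D = 1.7e19 cm^2 s^-1 * (B/1 muG)^-1 * (p/m_e c), Eq. (2)."""
    return 1.7e19 / B_muG * p_over_mec                 # [cm^2/s]

def cooling_time_yr(gamma_e, B_muG, z=0.192):
    """t_rad = 9.8e7 yr * (B_e/5 muG)^-2 * (gamma_e/1e4)^-1, Eq. (3),
    with B_e^2 = B^2 + B_rad^2 and B_rad = 3.24*(1+z)^2 muG."""
    B_rad = 3.24 * (1.0 + z)**2
    Be2 = B_muG**2 + B_rad**2
    return 9.8e7 * (25.0 / Be2) * (1.0e4 / gamma_e)

B2 = 6.3    # postshock field [muG]; an SA1-like value, chosen for illustration
for gamma_e in (1e3, 1e4, 3e4):
    t_rad = cooling_time_yr(gamma_e, B2) / 1e6         # Myr
    D = diffusion_coeff(gamma_e, B2)    # p/(m_e c) ~ gamma_e for relativistic e-
    print(f"gamma_e = {gamma_e:.0e}: t_rad ~ {t_rad:.0f} Myr, D ~ {D:.1e} cm^2/s")
# gamma_e ~ 1e4 electrons cool in ~40 Myr for B2 ~ 6.3 muG, i.e. within the
# acceleration ages of ~60-110 Myr considered later in the paper.
```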
We do not consider the acceleration of CR protons in this study, since the synchrotron emission from CR electrons is our main interest and the dynamical feedback from the CR proton pressure can be ignored at weak shocks (Ms \u22723) in the testparticle regime (Kang & Ryu 2013). In order to optimize the shock tracking of the CRASH code, a comoving frame that expands with the instantaneous shock speed is adopted. Further details of DSA simulations can be found in Kang (2015a). 2.2. Shock Parameters To set up the initial shock structure, we adopt a Sedov self-similar blast wave propagating into a uniform static medium, which can be speci\ufb01ed by two parameters, typically, the explosion energy, E0, and the background density, \u03c10 (Ostriker & McKee 1988; Ryu & Vishniac 1991). For our DSA simulations, we choose the initial shock radius and speed, rs,i and us,i, respectively, and adopt the self-similar pro\ufb01les of the gas density \u03c1(r), the gas pressure Pg(r), and the \ufb02ow speed u(r) behind the shock in the upstream rest-frame. For the \ufb01ducial case (SA1 model in Table 1), for example, the initial shock parameters are rs,i = 1.3 Mpc, and us,i = 3.3 \u00d7 103 km s\u22121. For the model parameters for the shock and upstream conditions, refer to Table 1 and Section 3.1. We suppose that at the onset of the DSA simulations, this initial shock propagates through the ICM with the gas density gradient described by a power law of r. Typical X-ray brightness pro\ufb01les of observed clusters can be represented approximately by the so-call beta model for isothermal ICMs, \u03c1(r) \u221d[1+(r/rc)2]\u22123\u03b2/2 with \u03b2 \u223c2/3 (Sarazin 1986). In the outskirts of clusters, well outside of the core radius (r \u226brc), it asymptotes as \u03c1(r) \u221dr\u22123\u03b2. We take the upstream gas density of \u03c1up \u221dr\u22122 as the \ufb01ducial case (SA1), but also consider \u03c1up \u221dr\u22124 (SA3) and \u03c1up = constant (SA4) for comparison. The shock speed and Mach number decrease in time as the spherical shock expands, depending on the upstream density pro\ufb01le. Kang (2015b) demonstrated that the shock decelerates approximately as us \u221dt\u22123/5 for \u03c1up = constant and as us \u221dt\u22121/3 for \u03c1up \u221dr\u22122, while the shock speed is almost constant in the case of \u03c1up \u221dr\u22124. As a result, nonlinear deviations from the DSA predictions for steady planar shocks are expected to become the strongest in SA4 model, while the weakest in SA3 model. In the \ufb01ducial SA1 model, which is the most realistic among the three models, the e\ufb00ects of the evolving spherical shock are expected to be moderate. The ICM temperature is set as kT1 = 3.35keV, adopted from Ogrean et al. (2014), in most of the models. Hereafter, the subscripts \u201c1\u201d and \u201c2\u201d are used to indicate the quantities immediately upstream and downstream of the shock, respectively. Although the \f\u2013 9 \u2013 ICM temperature is known to decrease slightly in the cluster outskirt, it is assumed to be isothermal, since the shock typically travels only 0.2 \u22120.3 Mpc for the duration of our simulations \u2272100 Myr. 2.3. Models for Magnetic Fields Magnetic \ufb01elds in the downstream region of the shock are the key ingredient that governs the synchrotron cooling and emission of CR electrons in our models. We assume that the fossil cloud is magnetized to \u00b5G level. 
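The deceleration laws quoted above (u_s ∝ t^{-3/5} for a uniform medium and u_s ∝ t^{-1/3} for ρ_up ∝ r^{-2}) follow from energy conservation for a self-similar blast wave; the brief derivation below is the standard argument, reconstructed by us rather than taken from the paper:

```latex
% Energy-conserving blast wave in a medium with \rho_{\rm up} \propto r^{-k}:
% E \sim \rho_{\rm up}(r_s)\, r_s^3\, u_s^2 \propto r_s^{3-k}\, u_s^2 = {\rm const}.
\begin{aligned}
r_s^{3-k}\left(\frac{dr_s}{dt}\right)^2 = {\rm const}
\quad &\Rightarrow \quad
r_s \propto t^{2/(5-k)}, \qquad
u_s = \frac{dr_s}{dt} \propto t^{(k-3)/(5-k)}, \\
k = 0:\ u_s \propto t^{-3/5}, \qquad
&k = 2:\ u_s \propto t^{-1/3}.
\end{aligned}
```

For k = 4 the exponent changes sign, so the energy-conserving scaling no longer predicts deceleration, in line with the nearly constant shock speed noted above for the ρ_up ∝ r^{-4} case.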
As discussed in the Introduction, observations indicate that the magnetic \ufb01eld strength decreases from \u223c1\u221210 \u00b5G in the core region to \u223c0.1\u22121 \u00b5G in the periphery of clusters (e.g., Bonafede et al. 2010; Feretti et al. 2012). This corresponds to the plasma beta of \u03b2p \u223c50 \u2212100 in typical ICMs (e.g., Ryu et al. 2008, 2012). On the other hand, it is well established that magnetic \ufb01elds can be ampli\ufb01ed via resonant and non-resonant instabilities induced by CR protons streaming upstream of strong shocks (Bell 1978, 2004). In addition, magnetic \ufb01elds can be ampli\ufb01ed by turbulent motions behind shocks (Giacalone & Jokipii 2007). Recent hybrid plasma simulations have shown that the magnetic \ufb01eld ampli\ufb01cation factor due to streaming CR protons scales with the Alfv\u00b4 enic Mach number, MA, and the CR proton acceleration e\ufb03ciency as \u27e8\u03b4B/B\u27e92 \u223c3MA(Pcr,2/\u03c11u2 s) (Caprioli & Spitkovsky 2014). Here, \u03b4B is the turbulent magnetic \ufb01eld perpendicular to the mean background magnetic \ufb01eld, Pcr,2 is the downstream CR pressure, and \u03c11u2 s is the upstream ram pressure. For typical radio relic shocks, the sonic and Alfv\u00b4 enic Mach numbers are expected to range 2 \u2272Ms \u22725 and 10 \u2272MA \u227225, respectively (e.g., Hong et al. 2014). The magnetic \ufb01eld ampli\ufb01cation in both the upstream and downstream of weak shocks is not yet fully understood, especially in high beta ICM plasmas. So we consider simple models for the postshock magnetic \ufb01elds. For the \ufb01ducial case, we assume that the magnetic \ufb01eld strength across the shock transition is increased by compression of the two perpendicular components: B2(t) = B1 p 1/3 + 2\u03c3(t)2/3, (4) where B1 and B2 are the magnetic \ufb01eld strengths immediately upstream and downstream of the shock, respectively, and \u03c3(t) = \u03c12/\u03c11 is the time-varying compression ratio across the shock. For the downstream region (r < rs), the magnetic \ufb01eld strength is assumed to scale with the gas pressure: Bdn(r, t) = B2(t) \u00b7 [Pg(r, t)/Pg,2(t)]1/2, (5) where Pg,2(t) is the gas pressure immediately behind the shock. This assumes that the ratio of the magnetic to thermal energy density is constant downstream of the shock. Since Pg(r) decreases behind the spherical blast wave, Bdn(r) also decreases downstream as illustrated \f\u2013 10 \u2013 in Figure 2. This \ufb01ducial magnetic \ufb01eld model is adopted in most of the models described in Table 1, except SA2 and SA4 models. The range of B2(t) is shown for the acceleration duration of 0 \u2264tage \u226460 Myr in Table 1, re\ufb02ecting the decrease of shock compression ratio during the period, but the change is small. In the second, more simpli\ufb01ed (but somewhat unrealistic) model, it is assumed that B1 = 2 \u00b5G and B2 = 7 \u00b5G, and the downstream magnetic \ufb01eld strength is constant, i.e., , Bdn = B2 for r < rs. This model was adopted in Kang et al. (2012) and also for SA2 and SA4 models for comparison in this study. Kang (2015b) showed that the postshock synchrotron emission increases downstream away from the shock in the case of a decelerating shock, because the shock is stronger at earlier time. 
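Equation (4), combined with the Rankine-Hugoniot compression ratio, reproduces the quoted B_2(t) ≈ 6.7-6.3 µG range for B_1 = 2.5 µG; a minimal sketch (our evaluation, using the SA1 parameters as inputs):

```python
import math

# Equation (4): compression of the two perpendicular field components,
# B2 = B1 * sqrt(1/3 + 2*sigma^2/3), with sigma the shock compression ratio.
# B1 = 2.5 muG and Ms = 3.5 -> 3.1 are the fiducial (SA1) values in the text.

def compression_ratio(Ms, gamma_g=5.0 / 3.0):
    return (gamma_g + 1.0) * Ms**2 / ((gamma_g - 1.0) * Ms**2 + 2.0)

def B2_from_B1(B1_muG, Ms):
    sigma = compression_ratio(Ms)
    return B1_muG * math.sqrt(1.0 / 3.0 + 2.0 * sigma**2 / 3.0)

B1 = 2.5
for Ms in (3.5, 3.1):       # initial Mach number and value near t_age ~ 60 Myr
    print(f"Ms = {Ms}: sigma = {compression_ratio(Ms):.2f}, "
          f"B2 = {B2_from_B1(B1, Ms):.2f} muG")
# -> B2 ~ 6.7 muG initially, dropping to ~6.4 muG as the shock weakens,
#    consistent with the B2(t) ~ 6.7 - 6.3 muG range quoted for SA1.
```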
But such nonlinear signatures become less distinct in the model with decreasing downstream magnetic \ufb01elds, compared with the model with constant downstream magnetic \ufb01elds, because the contribution of synchrotron emission from further downstream region becomes weaker. 2.4. Injection Momentum As described in the Introduction, the new picture of particle injection emerged from recent PIC simulations is quite di\ufb00erent from the \u2018classical\u2019 thermal leakage injection model commonly employed in previous studies of DSA (e.g., Kang et al. 2002). However, the requirement of p \u22733pth,p for particles to take part in the full Fermi 1st-order process, scattering back and forth di\ufb00usively across the shock transition zone with thickness, \u2206lshock \u223crg(pth,p), seems to remain valid (Caprioli et al. 2015; Park et al. 2015). Here, pth,p = p 2mpkBT2 is the most probable momentum of thermal protons with postshock temperature T2 and rg is the gyroradius of particles. In other words, only suprathermal particles with the gyro-radius greater than the shock thickness are expected to cross the shock transition layer. Hence, we adopt the traditional phenomenological model in which only particles above the injection momentum, pinj \u22485.3 mpus/\u03c3, are allowed to get injected into the CR populations at the lowest momentum boundary (Kang et al. 2002). This can be translated to the electron Lorentz factor, \u03b3e,inj = pinj/mec \u223c30(us/3000 km s\u22121)(3.0/\u03c3). In the case of expanding shocks considered in this study, pinj(t) decreases as the shock slows down in time. \f\u2013 11 \u2013 2.5. Fossil Electrons As mentioned in the Introduction, one peculiar feature of the Sausage relic is the uniform surface brightness along the Mpc-scale arc-like shape, which requires a special geometrical distribution of shock-accelerated electrons (van Weeren et al. 2011a). Some of previous studies adopted the ribbon-like curved shock surface and the downstream swept-up volume, viewed edge-on with the viewing extension angle \u03c8 \u223c10\u25e6(e.g., van Weeren et al. 2010; Kang et al. 2012; Kang 2015b). We suggest a picture where a spherical shock of radius rs \u223c1.5 Mpc passes through an elongated cloud with width wcloud \u223c260 kpc and length lcloud \u223c2 Mpc, \ufb01lled with fossil electrons. Then the shock surface penetrated into the cloud becomes a ribbon-like patch, distributed on a sphere with radius rs \u223c1.5 Mpc with the angle \u03c8 = 360\u25e6\u00b7 wcloud/(2\u03c0rs) \u223c10\u25e6. The downstream volume of radio-emitting, reaccelerated electrons has the width \u2206l(\u03b3e) \u2248(us/\u03c3) \u00b7 min[tage, trad(\u03b3e)], as shown in Figure 1 of Kang (2015b). Hereafter the \u2018acceleration age\u2019, tage, is de\ufb01ned as the duration of electron acceleration since the shock encounters the cloud. This model is expected to produce a uniform surface brightness along the relic length. Moreover, if the acceleration age is tage \u2272100 Myr, the volume-integrated radio spectrum is expected to steepen gradually over 0.1-10 GHz (Kang 2015b). There are several possible origins for such clouds of relativistic electrons in the ICMs: (1) old remnants of radio jets from AGNs, (2) electron populations that were accelerated by previous shocks and have cooled down below \u03b3e < 104, and (3) electron populations that were accelerated by turbulence during merger activities. 
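Two of the numbers quoted above, γ_{e,inj} ∼ 30 and ψ ∼ 10°, follow directly from the expressions in Sections 2.4 and 2.5; a short illustrative check (ours):

```python
import math

# (i) Injection momentum p_inj ~ 5.3 * m_p * u_s / sigma expressed as an
#     electron Lorentz factor gamma_e,inj = p_inj / (m_e c), Section 2.4.
# (ii) Extension angle psi = 360 deg * w_cloud / (2*pi*r_s), Section 2.5.

MP_OVER_ME = 1836.15
C_KMS = 2.998e5

def gamma_e_inj(u_s_kms, sigma):
    return 5.3 * MP_OVER_ME * (u_s_kms / C_KMS) / sigma

def extension_angle_deg(w_cloud_kpc, r_s_kpc):
    return 360.0 * w_cloud_kpc / (2.0 * math.pi * r_s_kpc)

print(f"gamma_e,inj ~ {gamma_e_inj(3.0e3, 3.0):.0f}")         # ~30, as quoted
print(f"psi ~ {extension_angle_deg(260.0, 1500.0):.1f} deg")  # ~10 deg, as quoted
```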
Although an AGN jet would have a hollow, cocoon-like shape initially, it may turn into a \ufb01lled, cylindrical shape through di\ufb00usion, turbulent mixing, or contraction of relativistic plasmas, as the electrons cool radiatively. During such evolution relativistic electrons could be mixed with the ICM gas within the cloud. We assume that the cloud is composed of the thermal gas of \u03b3g = 5/3, whose properties (density and temperature) are the same as the surrounding ICM gas, and an additional population of fossil electrons with dynamically insigni\ufb01cant pressure. In that regard, our fossil electron cloud is di\ufb00erent from hot bubbles considered in previous studies of the interaction of a shock with a hot rare\ufb01ed bubble (e.g., Ensslin & Br\u00a8 uggen 2002). In the other two cases where electrons were accelerated either by shocks or by turbulence (see, e.g., Farnsworth et al. 2013), it is natural to assume that the cloud medium should contain both thermal gas and fossil electrons. Three di\ufb00erent spectra for fossil electron populations are considered. In the \ufb01rst \ufb01ducial case (e.g., SA1 model), nonthermal electrons have the momentum around pinj, which corresponds to \u03b3e,inj \u223c20 \u221230 for the model shock parameters considered here. So in this model, seed electrons with \u223c\u03b3e,inj are injected from the fossil population and re-accelerated into \f\u2013 12 \u2013 radio-emitting CR electrons with \u03b3e \u2273103. Since we compare the surface brightness pro\ufb01les in arbitrary units here, we do not concern about the normalization of the fossil population. In the second model, the fossil electrons are assumed to have a power-law distribution extending up to \u03b3e \u226b104, fe,up(p) = f0 \u00b7 \u0012 p pinj \u0013\u2212s . (6) For the modeling of the Sausage relic, the value of s is chosen as s = 2\u03b1inj + 3 = 4.2 with \u03b1inj = 0.6 (SA1p model). As mentioned in the Introduction, the volume-integrated radio spectrum of the Sausage relic seems to steepen at high frequencies, perhaps more strongly than expected from radiative cooling alone (Stroe et al. 2014b). So in the third model, we consider a power-law population with exponential cuto\ufb00as follows: fe,up(p) = f0 \u00b7 \u0012 p pinj \u0013\u2212s exp \" \u2212 \u0012 \u03b3e \u03b3e,cut \u00132# , (7) where \u03b3e,cut is the cuto\ufb00Lorentz factor. This may represent fossil electrons that have cooled down to \u223c\u03b3e,cut from the power-law distribution in Equation (6). The integrated spectrum of the Sausage relic shows a strong curvature above \u223c1.5 GHz (Stroe et al. 2014b), which corresponds to the characteristic frequency of synchrotron emission from electrons with \u03b3e \u223c 1.5 \u00d7 104 when the magnetic \ufb01eld strength ranges 5 \u22127 \u00b5G (see Equation (12) below). So \u03b3e,cut = 104 and 2 \u00d7 104 are chosen for SC1pex1 and SC1pex2 models, respectively. Figure 3 shows the electron distributions in the cloud medium: the thermal distribution for the ICM gas with kT = 3.35 keV and the fossil population. The characteristics of the di\ufb00erent models, SA1 (\ufb01ducial model), SA1p (power-law), and SC1pex1 (power-law with an exponential cuto\ufb00) will be described in Section 3.1. Note that there could be \u2018suprathermal\u2019 distribution between the thermal and CR populations. But it does not need to be speci\ufb01ed here, since only f(p) around pinj controls the spectrum of re-accelerated electrons. 3. RESULTS OF DSA SIMULATIONS 3.1. 
Models We consider several models whose characteristics are summarized in Table 1. For the \ufb01ducial model, SA1, the upstream density is assumed to decrease as \u03c1up = \u03c10(r/rs,i)\u22122, \f\u2013 13 \u2013 while the upstream temperature is taken to be kT1 = 3.35 keV. Since we do not concern about the absolute radio \ufb02ux level in this study, \u03c10 needs not to be speci\ufb01ed. The preshock magnetic \ufb01eld strength is B1 = 2.5 \u00b5G and the immediate postshock magnetic \ufb01eld strength is B2(t) \u22486.7 \u22126.3 \u00b5G during 60 Myr, while the downstream \ufb01eld, Bdn(r), is given as in equation (5). The initial shock speed is us,i = 3.3 \u00d7 103 km s\u22121, corresponding to the sonic Mach number of Ms,i = 3.5 at the onset of simulation. In all models with \u03c1up \u221dr\u22122, the shock slows down as us \u221dt\u22121/3, and at the electron acceleration age of 60 Myr, us \u22482.9\u00d7103 km s\u22121 with Ms \u22483.1 and \u03b1inj \u22480.73. The fossil seeds electrons are assumed to have a deltafunction-like distribution around \u03b3e,inj \u224830. In SA1b model, the upstream magnetic \ufb01eld strength is weaker with B1 = 0.25 \u00b5G and B2(t) \u22480.67 \u22120.63 \u00b5G during 60 Myr. Otherwise, it is the same as the \ufb01ducial model. So the character \u2018b\u2019 in the model name denotes \u2018weaker magnetic \ufb01eld\u2019, compared to the \ufb01ducial model. Comparison of SA1 and SA1b models will be discussed in Section 3.4. In SA1p model, the fossil electrons have a power-law population of fe,up \u221dp\u22124.2, while the rest of the parameters are the same as those of SA1 model. The character \u2018p\u2019 in the model name denotes a \u2018power-law\u2019 fossil population. In SA2 model, both the preshock and postshock magnetic \ufb01eld strengths are constant, i.e., B1 = 2 \u00b5G and B2 = 7 \u00b5G, otherwise it is the same as SA1 model. In SA3 model, the upstream gas density decreases as \u03c1up = \u03c10(r/rs,i)\u22124, so the shock decelerates more slowly, compared to SA1 model. Considering this, the initial shock speed is set to us,i = 3.0 \u00d7 103 km s\u22121 with Ms,i = 3.2. At the acceleration age of 60 Myr, the shock speed decreases to us \u22482.8 \u00d7 103 km s\u22121 corresponding to Ms \u22483.0. In SA4 model, the upstream density is constant, so the shock decelerates approximately as us \u221dt\u22123/5, more quickly, compared to SA1 model. The upstream and downstream magnetic \ufb01eld strengths are also constant as in SA2 model. Figure 2 compares the pro\ufb01les of the \ufb02ow speed, u(r, t), and the magnetic \ufb01eld strength, B(r, t), in SA1 and SA4 models. Note that although the shocks in SA1 and SA4 models are not very strong with the initial Mach number Ms,i \u22483.5, they decelerate approximately as us \u221dt\u22121/3 and us \u221dt\u22123/5, respectively, as in the self-similar solutions of blast waves (Ostriker & McKee 1988; Ryu & Vishniac 1991). The shock speed decreases by \u223c12 \u221215% during the electron acceleration age of 60 Myr in the two models. In SB1 model, the preshock temperature, T1, is lower by a factor of 1.52, and so the initial shock speed us,i = 3.3 \u00d7 103 km s\u22121 corresponds to Ms,i \u22485.3. The shock speed us \u22482.9 \u00d7 103 km s\u22121 at the age of 60 Myr corresponds to Ms \u22484.6, so the injection spectral index \u03b1inj \u22480.6. 
The \u2018SB\u2019 shock is di\ufb00erent from the \u2018SA\u2019 shock in terms of only \f\u2013 14 \u2013 the sonic Mach number. In SC1pex1 and SC1pex2 models, the fossil electrons have fe,up \u221dp\u22124.2\u00b7exp[\u2212(\u03b3e/\u03b3e,cut)2] with the cuto\ufb00at \u03b3e,cut = 104 and \u03b3e,cut = 2 \u00d7 104, respectively. The character \u2018pex\u2019 in the model name denotes a \u2018power-law with an exponential cuto\ufb00\u2019. A slower initial shock with us,i \u22482.2 \u00d7 103 km s\u22121 and Ms,i \u22482.4 is chosen, so at the acceleration age of 80 Myr the shock slows down to Ms \u22482.1 with \u03b1inj \u22481.1. The \u2018SC\u2019 shock di\ufb00ers from the \u2018SA\u2019 shock in terms of the shock speed and the sonic Mach number. The integrated spectral index at high frequencies would be steep with A\u03bd \u22481.6, while A\u03bd \u22480.7 at low frequencies due to the \ufb02at fossil electron population. They are intended to be toy models that could reproduce the integrated spectral indices, A\u03bd \u223c0.7 for \u03bd = 0.1\u22120.2 GHz and A\u03bd \u223c1.6 for \u03bd = 2.3\u221216 GHz, compatible with the observed curved spectrum of the Sausage relic (Stroe et al. 2014b). 3.2. Radio Spectra and Indices The local synchrotron emissivity, j\u03bd(r), is calculated, using the electron distribution function, fe(r, p, t), and the magnetic \ufb01eld pro\ufb01le, B(r, t). Then, the radio intensity or surface brightness, I\u03bd, is calculated by integrating j\u03bd along lines-of-sight (LoSs). I\u03bd(R) = 2 Z hmax 0 j\u03bd(r)dh. (8) R is the distance behind the projected shock edge in the plane of the sky, as de\ufb01ned in the Introduction, and h is the path length along LOSs; r, R, and h are related as r2 = (rs \u2212 R)2 +h2. The extension angle is \u03c8 = 10\u25e6(see Section 2.5). Note that the radio \ufb02ux density, S\u03bd, can be obtained by convolving I\u03bd with a telescope beam as S\u03bd(R) \u2248I\u03bd(R)\u03c0\u03b81\u03b82(1+z)\u22123, if the brightness distribution is broad compared to the beam size of \u03b81\u03b82. The volume-integrated synchrotron spectrum, J\u03bd = R j\u03bd(r)dV , is calculated by integrating j\u03bd over the entire downstream region with the assumed geometric structure described in Section 2.5. The spectral indices of the local emissivity, j\u03bd(r), and the integrated spectrum, J\u03bd, are de\ufb01ned as follows: \u03b1\u03bd(r) = \u2212d ln j\u03bd(r) d ln \u03bd , (9) A\u03bd = \u2212d ln J\u03bd d ln \u03bd . (10) As noted in the Introduction, unresolved radio observations usually report the spectral index, \u2018\u03b1\u2019 at various frequencies, which is equivalent to A\u03bd here. In the postshock region, the cuto\ufb00of the electron spectrum, fe(r, \u03b3e), decreases downstream from the shock due to radiative losses, so the volume-integrated electron spectrum \f\u2013 15 \u2013 steepens from \u03b3\u2212q e to \u03b3\u2212(q+1) e above the break Lorentz factor (see Figure 5). At the electron acceleration age tage, the break Lorentz factor can be estimated from the condition tage = trad (Kang et al. 2012): \u03b3e,br \u2248104 \u0012 tage 100Myr \u0013\u22121 \u0012 52 B2 2 + B2 rad \u0013 . (11) Hereafter, the postshock magnetic \ufb01eld strength, B2, and Brad are expressed in units of \u00b5G. 
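Equation (11) above follows from setting t_rad(γ_{e,br}) = t_age in equation (3); note that the coefficient printed above as "52" reads naturally as (5 µG)^2, which is what keeps equation (11) consistent with equation (3). The sketch below solves that condition directly for two illustrative field strengths (our choice, roughly the SA1 and SA1b values):

```python
# Break Lorentz factor from the condition t_rad(gamma_e,br) = t_age, using
# Eq. (3) with B_rad = 3.24*(1+z)^2 muG and z = 0.192. Field strengths below
# are illustrative (roughly the SA1 and SA1b postshock values).

def gamma_e_break(t_age_Myr, B2_muG, z=0.192):
    B_rad = 3.24 * (1.0 + z)**2
    Be2 = B2_muG**2 + B_rad**2
    return 1.0e4 * (98.0 / t_age_Myr) * (25.0 / Be2)   # 9.8e7 yr = 98 Myr

for B2 in (6.3, 0.63):
    print(f"B2 = {B2} muG: gamma_e,br ~ {gamma_e_break(60.0, B2):.1e} at 60 Myr")
# The two field strengths that give similar widths and break frequencies
# (Section 3.3) correspond to different gamma_e,br, as noted in Section 3.4.
```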
Since the synchrotron emission from mono-energetic electrons with \u03b3e peaks around \u03bdpeak \u22480.3 \u0012 3eB2 4\u03c0mec \u0013 \u03b32 e \u22480.63 GHz \u00b7 \u0012B2 5 \u0013 \u0010 \u03b3e 104 \u00112 , (12) the break frequency that corresponds to \u03b3e,br becomes \u03bdbr \u22480.63GHz \u0012 tage 100Myr \u0013\u22122 \u0012 52 B2 2 + B2 rad \u00132 \u0012B2 5 \u0013 . (13) So the volume-integrated synchrotron spectrum, J\u03bd, has a spectral break, or more precisely a gradual increase of the spectral index approximately from \u03b1inj to \u03b1inj + 0.5 around \u03bdbr. 3.3. Width of Radio Shocks For electrons with \u03b3e > 104, the radiative cooling time in Equation (3) becomes shorter than the acceleration age if tage \u2273100 Myr. Then, the width of the spatial distribution of those high-energy electrons downstream of the shock becomes \u2206lcool(\u03b3e) \u2248u2trad(\u03b3e) \u2248100 kpc \u00b7 \u0010 u2 103 km s\u22121 \u0011 \u0012 52 B2 2 + B2 rad \u0013 \u0010 \u03b3e 104 \u0011\u22121 , (14) where u2 is the downstream \ufb02ow speed. With the characteristic frequency \u03bdpeak of electrons with \u03b3e in Equation (12), the width of the synchrotron emitting region behind the shock at the observation frequency of \u03bdobs = \u03bdpeak/(1 + z) would be similar to \u2206lcool(\u03b3e): \u2206l\u03bdobs \u2248W\u00b7\u2206lcool(\u03b3e) \u2248W\u00b7100 kpc\u00b7 \u0010 u2 103 km s\u22121 \u0011 \u0012 52 B2 2 + B2 rad \u0013 \u0012B2 5 \u00131/2 \u0014\u03bdobs(1 + z) 0.63GHz \u0015\u22121/2 . (15) Here, W is a numerical factor of \u223c1.2 \u22121.3. This factor takes account for the fact that the spatial distribution of synchrotron emission at \u03bdpeak is somewhat broader than that of electrons with the corresponding \u03b3e, because more abundant, lower energy electrons also make contributions (Kang 2015a). One can estimate two possible values of the postshock \f\u2013 16 \u2013 magnetic \ufb01eld strength, B2, from this relation, if \u2206l\u03bdobs can be determined from the observed pro\ufb01le of surface brightness and u2 is known. For example, van Weeren et al. (2010) inferred two possible values of the postshock magnetic \ufb01eld strength of the Sausage relic, Blow \u2248 1.2 \u00b5G and Bhigh \u22485 \u00b5G, by assuming that the FWHM of surface brightness, \u2206lSB, is the same as \u2206l\u03bdobs \u2248\u2206lcool(\u03b3e) \u224855 kpc (i.e., W = 1). On the other hand, Kang et al. (2012) showed that the pro\ufb01le of surface brightness depends strongly on \u03c8 in the case of the assumed geometrical structure. Figure 4 illustrates the results of a planar shock with di\ufb00erent \u03c8\u2019s. The shock has us = 2.7\u00d7103 km s\u22121 and Ms = 4.5. The magnetic \ufb01eld strengths are assumed to be B1 = 2 \u00b5G and B2 = 7 \u00b5G, upstream and downstream of the shock, respectively. The results shown are at the acceleration age of 80 Myr. Note that the quantities, ge(d, \u03b3e), j\u03bd(d), and I\u03bd(R), are multiplied by arbitrary powers of \u03b3e and \u03bdobs, so they can be shown for di\ufb00erent values of \u03b3e and \u03bdobs with single linear scales in Figure 4. Here, the FWHM of ge(\u03b3e) is, for instance, \u2206lcool(\u03b3e) \u224828 kpc for \u03b3e = 8.47 \u00d7 103 (red dotted line in the top panel), while the FWHM of j\u03bd is \u2206l\u03bdobs \u224835 kpc for \u03bdobs = 0.594 GHz (red dotted line in the middle panel). 
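Equations (12), (3)/(14) and (15) can be combined into a single estimate of the emitting width at a given observing frequency. The sketch below does this for two postshock field strengths and illustrative downstream parameters (u_2 ≈ 950 km s^{-1}, t_age = 60 Myr, W ≈ 1.25; these particular inputs are our choices); both the high- and low-field solutions give widths of order the ~55 kpc FWHM quoted for the Sausage relic, before the projection effects discussed below are applied:

```python
import math

# Width of the synchrotron-emitting region behind the shock at nu_obs,
# Eq. (15), limited by either advection (t_age) or cooling (t_rad, Eq. 3),
# with gamma_e(nu_obs) from Eq. (12).

KPC_PER_KMS_MYR = 1.023e-3          # 1 km/s * 1 Myr in kpc

def B_rad(z):
    return 3.24 * (1.0 + z)**2      # [muG]

def gamma_from_nu(nu_obs_GHz, B2_muG, z):
    """Invert Eq. (12): nu_peak = 0.63 GHz (B2/5)(gamma_e/1e4)^2, nu_peak = nu_obs(1+z)."""
    nu_peak = nu_obs_GHz * (1.0 + z)
    return 1.0e4 * math.sqrt(nu_peak / (0.63 * B2_muG / 5.0))

def t_rad_Myr(gamma_e, B2_muG, z):
    Be2 = B2_muG**2 + B_rad(z)**2
    return 98.0 * (25.0 / Be2) * (1.0e4 / gamma_e)     # Eq. (3)

def width_kpc(nu_obs_GHz, B2_muG, u2_kms, t_age_Myr, z=0.192, W=1.25):
    ge = gamma_from_nu(nu_obs_GHz, B2_muG, z)
    t = min(t_age_Myr, t_rad_Myr(ge, B2_muG, z))       # advection- or cooling-limited
    return W * u2_kms * t * KPC_PER_KMS_MYR            # Eq. (15), W ~ 1.2-1.3

for B2 in (6.3, 0.63):
    print(f"B2 = {B2:4} muG: width at 600 MHz ~ "
          f"{width_kpc(0.6, B2, u2_kms=950.0, t_age_Myr=60.0):.0f} kpc")
# -> roughly 52 kpc and 46 kpc for the high- and low-field cases, respectively,
#    of the same order as the ~55 kpc FWHM quoted for the Sausage relic.
```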
Note that the values of \u03b3e and \u03bdpeak = \u03bdobs(1 + z) are chosen from the sets of discrete simulation bins, so they satisfy only approximately the Equation (12). The bottom panel demonstrates that the FWHM of I\u03bd, \u2206lSB, strongly depends on \u03c8. For instance, \u2206lSB \u224847 kpc for \u03bdobs = 0.594 GHz, if \u03c8 = 10\u25e6 (red dotted line). This implies that due to the projection e\ufb00ect, \u2206lSB could be substantially di\ufb00erent from \u2206lcool. So \u2206lSB of the observed radio \ufb02ux pro\ufb01le may not be used to infer possible values of B2 in radio relics, unless the geometrical structure of the shock is known and so the projection e\ufb00ect can be modeled accordingly. Note that the quantity Y (B2, z) \u2261[(B2 2 + B2 rad)/52]\u22121 \u00b7 (B2/5)1/2 in Equation (15) also appears as Y 2 in Equation (13). So the break frequency becomes identical for the two values of B2, Blow and Bhigh, that give the same value of Y . For example, SA1 model with B2(t) \u22486.7\u22126.3 \u00b5G and SA1b model with B2(t) \u22480.67\u22120.63 \u00b5G (see Table 1) produce not only the similar \u2206l\u03bdobs but also the similar spectral break in J\u03bd. But the corresponding values of \u03b3e,br in Equation (11) are di\ufb00erent for the two models with Blow and Bhigh. Moreover, the amplitude of the integrated spectrum would scale as J\u03bd(Bhigh)/J\u03bd(Blow) \u223c(Bhigh/Blow)2, if the electron spectrum ne(\u03b3e) for \u03b3e < \u03b3e,br is similar for the two models . We compare these two models in detail in the next section. 3.4. Comparison of the Two Models with Bhigh and Blow In Figure 5, we \ufb01rst compare the electron spectra and the synchrotron emission spectra in SA1 and SA1b models. Here, the plotted quantities, ge, Ge, j\u03bd and J\u03bd, are in arbitrary \f\u2013 17 \u2013 units. The postshock magnetic \ufb01eld strength decreases from B2 \u22486.7 \u00b5G to 6.2 \u00b5G in 110 Myr in SA1 model, while B2 \u22480.67 \u00b5G to 0.62 \u00b5G in SA1b model. In these two models, the values of Y (B2, z) are similar, so the spectral break in the integrated spectra should be similar as well. The left panels of Figure 5 show that the electron spectrum at the shock, ge(rs, p), steepens in time as the shock weakens. As expected, the volume-integrated electron spectrum, Ge(p) = p4Fe(p) = p4 R fe(r, p)dV , steepens by one power of p above the break momentum, pe,br, which decreases in time due to radiative losses. In both models, however, the slopes, q = \u2212d ln fe(rs, p)/d ln p and Q = \u2212d ln Fe(p)/d ln p, deviate from the simple steepening of Q = q to q + 1 above pe,br, because of the time-dependence of the shock properties. The right panels of Figure 5 show the synchrotron spectrum at the shock, j\u03bd(rs), the volume-integrated synchrotron spectrum, J\u03bd, and their spectral indices, \u03b1inj(\u03bd) and A\u03bd. Note that in both models, the transition of A\u03bd from \u03b1inj to \u03b1inj +0.5 is broader than the transition of Q from q to q + 1. This is because the synchrotron emission at a given frequency comes from electrons with a somewhat broad range of \u03b3e\u2019s. As in the case of Q, A\u03bd does not follow the simple steepening, but rather shows nonlinear features due to the evolving shock properties. This implies that the simple relation, A\u03bd \u2248\u03b1inj+0.5, which is commonly adopted in order to con\ufb01rm the DSA origin of synchrotron spectrum, should be applied with some caution in the case of evolving shocks. 
The highest momentum for ge(rs, p), peq \u221dus \u00b7 [B1/(B2 e,1 + B2 e,2)]1/2, is higher in SA1 model with stronger magnetic \ufb01eld, but the amplitude of ge(rs, p) near peq, say at p/mec \u223c 106 \u2212107.5, is greater in SA1b model with weaker magnetic \ufb01eld. As a consequence, the ratio of the synchrotron emission of the two models is somewhat less than the ratio of the magnetic energy density, (Bhigh/Blow)2 = 100. In SA1b model, for example, the amplitudes of j\u03bd(rs) and J\u03bd at 0.1 \u221210 GHz are reduced by a factor of \u223c60, compared to the respective values in SA1 model. Also the cuto\ufb00frequencies for both j\u03bd(rs) and J\u03bd are lower in SA1b model, compared to those in SA1 model. As pointed above, \u03bdbr is almost the same in the two models, although pe,br is di\ufb00erent. This con\ufb01rms that we would get two possible solutions for B2, if we attempt to estimate the magnetic \ufb01eld strength from the value of \u03bdbr in the integrated spectrum. 3.5. Surface Brightness Pro\ufb01le As noted in Section 3.3, the width of the downstream electron distribution behind the shock is determined by the advection length, \u2206ladv \u2248u2 \u00b7 tage, for low-energy electrons or by \f\u2013 18 \u2013 the cooling length, \u2206lcool(\u03b3e) \u2248u2 \u00b7 trad(\u03b3e), for high-energy electrons. As a result, at high frequencies the width of the synchrotron emission region, \u2206l\u03bdobs, varies with the downstream magnetic \ufb01eld strength for given u2 and \u03bdobs as in Equation (15). In addition, the surface brightness pro\ufb01le, I\u03bd(R), and its FWHM, \u2206lSB, also depend on the extension angle, \u03c8, as demonstrated in Figure 4. In Figures 6 and 7, the spatial pro\ufb01les of I\u03bd(R) are shown for eight models listed in Table 1. Here, we choose the observation frequencies \u03bdobs = 150, 600, 1400 MHz; the source frequencies are given as \u03bd = \u03bdobs(1 + z) with z = 0.192. While the downstream \ufb02ow speed ranges u2 \u22481, 000 \u2212900 km s\u22121 in most of the models, u2 \u2248850 \u2212750 km s\u22121 in SC1pex1 model. So the results are shown at tage = 30, 60, and 110 Myr, except in SC1pex1 model for which the results are shown at tage = 30, 80, and 126 Myr. Note that the quantity, \u03bdI\u03bd \u00d7 X, is shown in order to be plotted with one linear scale, where X is the numerical scale factor speci\ufb01ed in each panel. In SB1 model with a stronger shock, for example, the electron acceleration is more e\ufb03cient by a factor of about 10, compared to other models, so the pro\ufb01les shown are reduced by similar factors. For a \ufb01xed value of \u03c8, the surface brightness pro\ufb01le is determined by the distribution of j\u03bd(r) behind the shock along the path length h, as given in Equation (8). The intensity increases gradually from the shock position (R = 0) to the \ufb01rst in\ufb02ection point, Rinf,1(t) = rs(t)(1 \u2212cos \u03c8) \u224821 \u221224 kpc with \u03c8 = 10\u25e6, mainly due to the increase of the path length along LoSs. Since the path length starts to decrease beyond Rinf,1, if the emissivity, j\u03bd, is constant or decreases downstream of the shock, then I\u03bd should decrease for R > Rinf,1. In all the models considered here, however, at low frequencies, j\u03bd increases downstream of the shock, because the model spherical shock is faster and stronger at earlier times. 
As a result, I\u03bd at 150 MHz is almost constant or decrease very slowly beyond Rinf,1, or even increases downstream away from the shock in some cases (e.g., SA2 and SA4 models). So the downstream pro\ufb01le of the synchrotron radiation at low frequencies emitted by uncooled, low-energy electrons could reveal some information about the shock dynamics, providing that the downstream magnetic \ufb01eld strength is known. The second in\ufb02ection in I\u03bd(R) occurs roughly at Rinf,2 \u2248W \u00b7 \u2206ladv for low frequencies, and at Rinf,2 \u2248W \u00b7 \u2206lcool(\u03b3e) for high frequencies where trad < tage, with W \u22481.2 \u22121.3. Here, \u2206ladv \u2248100 kpc(u2/103 km s\u22121)(tage/100 Myr), and \u2206lcool(\u03b3e) is given in Equation (14). Figures 6 and 7 exhibit that at 30 Myr (tage < trad), the second in\ufb02ection appears at the same position (\u2206ladv \u224830 kpc) for the three frequencies shown. At later times, the position of the second in\ufb02ection depends on tage at low frequencies, while it varies with B2 and \u03bdobs in addition to u2. Thus, only if tage > trad(\u03b3e), \u2206lSB at high frequencies can be used to infer B2, providing that \u03c8 and u2 are known. \f\u2013 19 \u2013 The width of the Sausage relic, de\ufb01ned as the FWHM of the surface brightness, was estimated to be \u2206lSB \u223c55 kpc at 600 MHz (van Weeren et al. 2010). As shown in Figures 6 and 7, in all the models except SA4, the calculated FWHM of I\u03bd(R) at 600 MHz (red dotted lines) at the acceleration age of 60 \u2212110 Myr would be compatible with the observed value. In SA4 model, the intensity pro\ufb01le at 600 MHz seems a bit too broad to \ufb01t the observed pro\ufb01le. 3.6. Volume-Integrated and Postshock Spectra Figures 8 and 9 show the volume-integrated synchrotron spectrum, J\u03bd, and its slope, A\u03bd, for the same models at the same ages as in Figures 6 and 7. Here J\u03bd is integrated over the curved, ribbon-like downstream volume with \u03c8 = 10\u25e6, as described in Section 3.2. In the \ufb01gures, J\u03bd is plotted in arbitrary units. The \ufb01lled circles represent the data points for the integrated \ufb02ux of the Sausage relic, which were taken from Table 1 of Stroe et al. (2014b) and re-scaled roughly to \ufb01t by eye the spectrum of SA1 model at 60 Myr (the red dotted line in the top-left panel of Figure 8). Observational errors given by Stroe et al. (2014b) are about 10 %, so the error bars are in fact smaller than the size of the \ufb01lled circles in the \ufb01gures, except the one at 16 GHz with 25 %. At \ufb01rst it looks that in most of the models, the calculated J\u03bd reasonably \ufb01ts the observed data including the point at 16 GHz. But careful inspections indicate that our models fail to reproduce the rather abrupt increase in the spectral curvature at \u223c1.5 GHz. The calculated J\u03bd is either too steep at low frequencies \u22721.5 GHz, or too \ufb02at at high frequencies \u22731.5 GHz. For instance, J\u03bd of the \ufb01ducial model, SA1, is too steep at \u03bd \u22721.5 GHz. On the other hand, SA1p model with a \ufb02atter fossil population and SB1 model with a \ufb02atter injection spectrum (due to higher Ms) seem to have the downstream electron spectra too \ufb02at to explain the observed data for \u03bd \u22731.5 GHz. As mentioned before, the transition of the integrated spectral index from \u03b1inj to \u03b1inj+0.5 occurs gradually over a broad range of frequency, (0.1 \u221210)\u03bdbr. 
If we estimate \u03bdbr as the frequency at which the gradient, dA\u03bd/d\u03bd, has the largest value, it is \u223c4, 1, 0.3 GHz at tage = 30, 60, and 110 Myr, respectively, for all the models except SC1pex1 model (for which A\u03bd is shown at di\ufb00erent epochs). Since \u03bdbr is determined mainly by the magnetic \ufb01eld strength and the acceleration age, it does not sensitively depend on other details of the models. In SC1pex1 model, the power-law portion (\u221dp4.2) gives a \ufb02atter spectrum with A\u03bd \u22480.7 at low frequencies, while the newly injected population at the shock with Ms \u22482.1 results in a steeper spectrum with A\u03bd \u22481.6 at high frequencies. Note that the spectral index estimated with the observed \ufb02ux in Stroe et al. (2014b) between 2.3 and 16 GHz is \f\u2013 20 \u2013 A\u03bd \u22481.62, implying Ms \u22482.1. In Figure 10, J\u03bd\u2019s from all the models considered here are compared at tage = 60 Myr (SA and SB shocks) or 80 Myr (SC shocks). The observed data points of the Sausage relic are also shown. In most of the models (SA models), the shocks have Ms \u22483.0 \u22123.1 at 60 Myr, so the predicted J\u03bd between 2.3 and 16 GHz is a bit \ufb02atter with A\u03bd \u22481.25 than the observed spectrum with A\u03bd \u22481.62 at the same frequency range. As mentioned above, SA1p and SB1 models produce J\u03bd, which is signi\ufb01cantly \ufb02atter at high frequencies than the observed spectrum. The toy model, SC1pex1, seems to produce the best \ufb01t to the observed spectrum, as noted before. In short, the volume-integrated synchrotron spectra calculated here steepen gradually over (0.1 \u221210)\u03bdbr with the break frequency \u03bdbr \u223c1 GHz, if tage \u223c60 \u221280 Myr. However, all the models considered here seem to have some di\ufb03culties \ufb01tting the sharp curvature around \u223c1.5 GHz in the observed integrated spectrum. This implies that the shock dynamics and/or the downstream magnetic \ufb01eld pro\ufb01le could be di\ufb00erent from what we consider here. Perhaps some additional physics that can introduce a feature in the electron energy spectrum, stronger than the \u2018q + 1\u2019 steepening due to radiative cooling, might be necessary. In Figure 11, we present for the three models, SA1, SA1b, and SC1pex1, the mean intensity spectrum in the downstream regions of [Ri, Ri + 5kpc], \u27e8I\u03bd\u27e9= R Ri+5 Ri I\u03bd(R)dR, where Ri = 5 kpc \u00b7 (2i \u22121) and i runs from 1 to 6. This is designed to be compared with the \u2018spatially resolved\u2019 spectrum behind the shock in radio observations (e.g., Stroe et al. 2013). The \ufb01gure shows how the downstream spectrum cuts o\ufb00at progressively lower frequencies due to radiative cooling as the observed position moves further away from the shock. 4. SUMMARY We propose a model that may explain some characteristics of giant radio relics: the relative rareness, uniform surface brightness along the length of thin arc-like radio structure, and spectral curvature in the integrated radio spectrum over \u223c(0.1\u221210) GHz. In the model, a spherical shock encounters an elongated cloud of the ICM thermal gas that is permeated by enhanced magnetic \ufb01elds and an additional population of fossil relativistic electrons. As a result of the shock passage, the fossil electrons are re-accelerated to radio-emitting energies (\u03b3e \u2272104), resulting in a birth of a giant radio relic. 
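Two quick consistency checks of the numbers quoted above (ours, assuming the simple test-particle relations): the break frequencies read off dA_ν/dν follow the ν_br ∝ t_age^{-2} scaling of equation (13), and the high-frequency slope A_ν ≈ 1.62 maps to M_s ≈ 2.1 via α_inj = A_ν − 0.5:

```python
import math

# (i) nu_br scales as t_age^-2 at fixed B2 (Eq. 13): anchor on 4 GHz at 30 Myr.
for t_age in (30.0, 60.0, 110.0):
    print(f"t_age = {t_age:5.0f} Myr: nu_br ~ {4.0 * (30.0 / t_age)**2:.2f} GHz")
# -> 4, 1 and ~0.3 GHz, matching the values read off dA_nu/d(nu) in the text.

# (ii) Mach number implied by the steep high-frequency slope A_nu ~ 1.62,
#      assuming A_nu = alpha_inj + 0.5 and the test-particle DSA relation.
A_nu = 1.62
alpha_inj = A_nu - 0.5
Ms = math.sqrt((2.0 * alpha_inj + 3.0) / (2.0 * alpha_inj - 1.0))
print(f"A_nu = {A_nu} -> alpha_inj = {alpha_inj:.2f} -> Ms ~ {Ms:.1f}")   # ~2.1
```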
In order to explore this scenario, we have performed time-dependent, DSA simulations of spherical shocks with the parameters relevant for the Sausage radio relic in cluster CIZA J2242.8+5301. In the \ufb01ducial model, the shock decelerates from us \u22483.3 \u00d7 103 km s\u22121 \f\u2013 21 \u2013 (Ms \u22483.5) to us \u22482.9\u00d7103 km s\u22121 (Ms \u22483.1) at the acceleration age of 60 Myr. The seed, fossil electrons with \u03b3e,inj \u223c30 are assumed to be injected into the CR population, which is subsequently re-accelerated to higher energies. Such shocks are expected to produce the electron energy spectrum, fe(p) \u221dp\u22124.5, resulting in the synchrotron radiation spectrum with the injection index, \u03b1inj \u22480.75, and the integrated index, A\u03bd \u22481.25, at high frequencies (\u22731 GHz). We consider various models with a range of shock parameters, di\ufb00erent upstream gas density pro\ufb01les, di\ufb00erent downstream magnetic \ufb01eld pro\ufb01les, and three types of fossil electron populations, as summarized in Table 1. Adopting a ribbon-like curved shock surface and the associated downstream volume, which are constrained by the extension angle (or viewing depth) of \u03c8 = 10\u25e6as detailed in Section 2.5 (e.g van Weeren et al. 2010; Kang et al. 2012), the radio surface brightness pro\ufb01le, I\u03bd(R), and the volume-integrated spectrum, J\u03bd, are calculated. The main results are summarized as follows. 1) Two observables, the break frequency in the integrated synchrotron spectrum, \u03bdbr, and the width of the synchrotron emission region behind the shock, \u2206l\u03bdobs, can have identical values for two values of postshock magnetic \ufb01eld strength (see Equations [13] and [15]). 2) The observed width of the surface brightness projected onto the sky plane, \u2206lSB, strongly depends on the assumed value of \u03c8 (see Figure 4). So \u2206lSB may not be used to estimate the postshock magnetic \ufb01eld strength, unless the projection e\ufb00ects can be modeled properly. 3) The integrated synchrotron spectrum is expected to have a spectral curvature that runs over a broad range of frequency, typically for (0.1 \u221210)\u03bdbr. For a shock of Ms \u22483 with the postshock magnetic \ufb01eld strength, Blow \u223c0.62 \u00b5G or Bhigh \u223c6.2 \u00b5G, the integrated spectral index increases gradually from \u03b1inj \u22480.75 to \u03b1inj + 0.5 \u22481.25 over 0.1 \u221210 GHz, if the duration of the shock acceleration is \u223c60 Myr. 4) Assuming that the upstream sound speed is cs,1 \u2248920 km s\u22121 (kT1 \u22483.35 keV) as inferred from X-ray observation, a shock of Ms \u22483 and us \u22483 \u00d7 103 km s\u22121 (e.g., SA1 model) can reasonably explain the observed width, \u2206lSB \u223c55 kpc (van Weeren et al. 2010), and the curved integrated spectrum of the Sausage relic (Stroe et al. 2014b). SB1 model with a shock of Ms \u22484.5, however, produces the integrated spectrum that seems too \ufb02at to explain the observed spectrum above \u223c1 GHz. 5) We also consider two toy models with power-law electron populations with exponential cuto\ufb00s at \u03b3e \u223c104, fe,up(p) \u221dp\u22124.2 exp[\u2212(\u03b3e/\u03b3e,cut)2] (SC1pex1 and SC1pex2 models). They may represent the electron populations that were produced earlier and then have cooled down to \u03b3e \u223c104. SC1pex1 model with a weaker shock (Ms \u22482.1) reproduces better the \f\u2013 22 \u2013 characteristics of the observed integrated spectrum. 
But the steepening of the integrated spectrum due to radiative cooling alone may not explain the strong spectral curvature above 1.5 GHz toward 16 GHz. 6) This strong curvature at \u223c1.5 GHz may imply that the downstream electron energy spectrum is in\ufb02uenced by some additional physical processes other than radiative losses, because the integrated spectrum of radiatively cooled electrons steepens with the frequency only gradually. This conclusion is likely to remain unchanged even in the case where the observed spectrum consists of the synchrotron emission from multiple shocks with di\ufb00erent Mach numbers, as long as the postshock electrons experience only simple radiative cooling. Other models that may explain the curved spectrum will be further explored and presented elsewhere. The authors thank the anonymous referee for his/her thorough review and constructive suggestions that lead to a signi\ufb01cant improvement of the paper. HK was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2014R1A1A2057940). DR was supported by the National Research Foundation of Korea through grant NRF-2014M1A7A1A03029872 and NRF-2012K1A3A7A03049606." + }, + { + "url": "http://arxiv.org/abs/1405.0557v1", + "title": "Injection of $\u03ba$-like Suprathermal Particles into Diffusive Shock Acceleration", + "abstract": "We consider a phenomenological model for the thermal leakage injection in the\ndiffusive shock acceleration (DSA) process, in which suprathermal protons and\nelectrons near the shock transition zone are assumed to have the so-called\n$\\kappa$-distributions produced by interactions of background thermal particles\nwith pre-existing and/or self-excited plasma/MHD waves or turbulence. The\n$\\kappa$-distribution has a power-law tail, instead of an exponential cutoff,\nwell above the thermal peak momentum. So there are a larger number of potential\nseed particles with momentum, above that required for participation in the DSA\nprocess. As a result, the injection fraction for the $\\kappa$-distribution\ndepends on the shock Mach number much less severely compared to that for the\nMaxwellian distribution. Thus, the existence of $\\kappa$-like suprathermal\ntails at shocks would ease the problem of extremely low injection fractions,\nespecially for electrons and especially at weak shocks such as those found in\nthe intracluster medium. We suggest that the injection fraction for protons\nranges $10^{-4}-10^{-3}$ for a $\\kappa$-distribution with $10 < \\kappa_p < 30$\nat quasi-parallel shocks, while the injection fraction for electrons becomes\n$10^{-6}-10^{-5}$ for a $\\kappa$-distribution with $\\kappa_e < 2$ at\nquasi-perpendicular shocks. For such $\\kappa$ values the ratio of cosmic ray\nelectrons to protons naturally becomes $K_{e/p}\\sim 10^{-3}-10^{-2}$, which is\nrequired to explain the observed ratio for Galactic cosmic rays.", + "authors": "Hyesung Kang, Vahe Petrosian, Dongsu Ryu, T. W. Jones", + "published": "2014-05-03", + "updated": "2014-05-03", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Acceleration of nonthermal particles is ubiquitous at astrophysical collisionless shocks, such as interplanetary shocks in the solar wind, supernova remnant (SNR) shocks in the interstellar medium (ISM) and structure formation shocks in the intracluster medium (ICM) (Blandford & Eichler 1987; Jones & Ellison 1991; Ryu et al. 2003). 
Plasma physical processes operating at collisionless shocks, such as excitation of waves via plasma instabilities and ensuing wave-particle interactions, depend primarily on the shock magnetic \ufb01eld obliquity as well as on the sonic and Alfv\u00e9nic Mach numbers, Ms and MA, respectively. Collisionless shocks can be classi\ufb01ed into two categories by the obliquity angle, \u0398BN, the angle between the upstream mean magnetic \ufb01eld and the shock normal: quasi-parallel (\u0398BN \u227245\u25e6) and quasi-perpendicular (\u0398BN \u227345\u25e6). Diffusive shock acceleration (DSA) at strong SNR shocks with Ms \u223cMA \u223c10 \u2212100 is reasonably well understood, especially for the quasi-parallel regime, and it has been tested via radio-to-\u03b3-ray observations of nonthermal emissions from accelerated cosmic ray (CR) protons and electrons (see Drury 1983; Blandford & Eichler 1987; Hillas 2005; Reynolds et al. 2012, for reviews). On the contrary, DSA at weak shocks in the ICM (Ms \u223c2 \u22123, MA \u223c10) is rather poorly understood, although its signatures have apparently been observed in a number of radio relic shocks (e.g. van Weeren et al. 2010; Feretti et al. 2012; Kang et al. 2012; Brunetti & Jones 2014). At the same time, in situ measurements of Earth\u2019s bow shock, or traveling shocks in the interplanetary medium (IPM) with spacecrafts have provided crucial insights and tests for plasma physical processes related with DSA at shocks with moderate Mach numbers (Ms \u223cMA \u227210) (e.g. Shimada et al. 1999; Oka et al. 2006; Zank et al. 2007; Masters et al. 2013). Table 1 compares characteristic parameters for plasmas in the IPM, ISM (warm phase), and ICM to highlight their similarities and differences. Here the plasma beta \u03b2p (= Pg/PB \u221d nHT/B2 0) is the ratio of the thermal to magnetic pressures, so the magnetic \ufb01eld pressure is dynamically more important in lower beta plasmas. The plasma alpha is de\ufb01ned as the ratio of the electron plasma frequency to cyclotron frequency: \u03b1p = \u03c9pe \u2126ce = 2\u03c0rge \u03bbDe \u2248 p me/mp \u00b7c vA \u221d \u221ane B0 , (1) where rge is the electron gyroradius, and \u03bbDe is the electron Debye length, vA = B0/\u221a4\u03c0\u03c1 is the Alfv\u00e9n speed. Plasma wave-particle interactions and ensuing stochastic acceleration are more signi\ufb01cant in lower alpha plasmas (e.g. Pryadko & Petrosian 1997). Among the three kinds of plasmas in Table 1, the ICM with the highest \u03b2p has dynamically least signi\ufb01cant magnetic \ufb01elds, but, with the smallest \u03b1p, plasma interactions are expected to be most important there. The last three columns of Table 1 show typical shock speeds, sonic Mach numbers, and Alfv\u00e9nic Mach numbers for interplanetary shocks near 1 AU, SNR shocks and ICM shocks. This paper focuses on the injection of suprathermal particles into the DSA process at astrophysical shocks. Since the shock thickness is of the order of the gyroradius of postshock thermal protons, only suprathermal particles (both protons and electrons) with momentum p \u2273pinj \u2248(3 \u22124)pth,p can re-cross to the shock upstream and participate in the DSA process (e.g. Kang et al. 2002). Here, pth,p = p2mpkBT2 is the most probable momentum of thermal protons with postshock temperature T2 and kB is the Boltzmann constant. Hereafter, we use the subscripts \u20181\u2019 and \u20182\u2019 to denote the conditions upstream and downstream of shock, respectively. 
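Equation (1) and the plasma beta defined above are simple to evaluate for representative conditions. A short sketch; the ICM-like numbers used below (nH, T, B) are assumptions chosen only for illustration, since the actual Table 1 entries are not reproduced here:

```python
import numpy as np

k_B = 1.3807e-16   # erg/K
m_p = 1.6726e-24   # g
m_e = 9.1094e-28   # g
c   = 2.9979e10    # cm/s

def alfven_speed(B_gauss, n_cm3):
    """v_A = B / sqrt(4 pi rho), with rho ~ n m_p (cm/s)."""
    return B_gauss / np.sqrt(4.0 * np.pi * n_cm3 * m_p)

def plasma_beta(n_cm3, T_K, B_gauss):
    """beta_p = P_gas / P_B = n k_B T / (B^2 / 8 pi)."""
    return n_cm3 * k_B * T_K / (B_gauss**2 / (8.0 * np.pi))

def plasma_alpha(B_gauss, n_cm3):
    """Equation (1): alpha_p = omega_pe / Omega_ce ~ sqrt(m_e/m_p) * c / v_A."""
    return np.sqrt(m_e / m_p) * c / alfven_speed(B_gauss, n_cm3)

# Illustrative ICM-like values (assumed, not taken from Table 1):
n, T, B = 1.0e-3, 5.0e7, 1.0e-6
print(alfven_speed(B, n) / 1e5,        # v_A in km/s
      plasma_beta(n, T, B),
      plasma_alpha(B, n))
```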
At quasi-parallel shocks, in the so-called thermal injection \f2 Kang et al. leakage model, protons leaking out of the postshock thermal pool are assumed to interact with magnetic \ufb01eld \ufb02uctuations and become the CR population (e.g. Malkov & Drury 2001; Kang et al. 2002). In a somewhat different interpretation based on hybrid plasma simulations, protons re\ufb02ected off the shock transition layer are thought to form a beam of streaming particles, which in turn excite resonant waves that scatter particles into the DSA process (e.g. Quest 1988; Guo & Giacalone 2013). At quasi-perpendicular shocks, on the other hand, the self-excitation of waves is ineffective and the injection of suprathermal protons is suppressed signi\ufb01cantly (Caprioli & Spitkovsky 2013), unless there exists pre-existing MHD turbulence in the background plasma (Giacalone 2005; Zank et al. 2006). Assuming that downstream electrons and protons have the same kinetic temperature (Te \u2248Tp), for a Maxwellian distribution there will be fewer electrons than protons that will have momenta above the required injection momentum. Thus electrons must be pre-accelerated from the thermal momentum (pth,e = (me/mp)1/2pth,p) to the injection momentum (pinj \u2248(130 \u2212170)pth,e) in order to take part in the DSA process. Contrary to the case of protons, which are effectively injected at quasi-parallel shocks, according to in situ observations made by spacecrafts, electrons are known to be accelerated at Earth\u2019s bow shock and interplanetary shocks preferentially in the quasi-perpendicular con\ufb01guration (e.g. Gosling et al. 1989; Shimada et al. 1999; Simnett et al. 2005; Oka et al. 2006). However, in a recent observation of Saturn\u2019s bow shock by the Cassini spacecraft, the electron injection/acceleration has been detected also in the quasiparallel geometry at high-Mach, high-beta shocks (MA \u223c100 and \u03b2p \u223c10) (Masters et al. 2013). Riquelme & Spitkovsky (2011) suggested that electrons can be injected and accelerated also at quasi-parallel portion of strong shocks such as SNR shocks, because the turbulent magnetic \ufb01elds excited by the CR streaming instabilities upstream of the shock may have perpendicular components at the corrugated shock surface. So, locally transverse magnetic \ufb01elds near the shock surface seem essential for the ef\ufb01cient electron injection regardless of the obliquity of the large-scale, mean \ufb01eld. Non-Maxwellian tails of high energy particles have been widely observed in space and laboratory plasmas (e.g. Vasyliunas 1968; Hellberg et al. 2000). Such particle distributions can be described by the combination of a Maxwellianlike core and a suprathermal tail of power-law form, which is known as the \u03ba-distribution. There exists an extensive literature that explains the \u03ba-distribution from basic physical principles and processes relevant for collisionless, weakly coupled plasmas (e.g. Leubner 2004; Pierrard & Lazar 2010). The theoretical justi\ufb01cation for the \u03ba-distribution is beyond the scope of this paper, so readers are referred to those papers. Recently, the existence of \u03ba-distribution of electrons has been conjectured and examined in order to explain the discrepancies in the measurements of electron temperatures and metallicities in H II regions and planetary nebulae (Nicholls et al. 2012; Mendoza & Bautista 2014). 
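The factor pinj \u2248 (130\u2212170) pth,e quoted above is just Qinj = pinj/pth,p \u2248 3\u22124 rescaled by \u221a(mp/me) \u2248 43 when Te \u2248 Tp. A minimal check, with the postshock temperature chosen arbitrarily for illustration (the ratio is independent of it):

```python
import numpy as np

k_B = 1.3807e-16   # erg/K
m_p = 1.6726e-24   # g
m_e = 9.1094e-28   # g

def p_th(mass, T2):
    """Thermal peak momentum p_th = sqrt(2 m k_B T2) (cgs)."""
    return np.sqrt(2.0 * mass * k_B * T2)

T2 = 5.0e7                                  # K, illustrative postshock temperature
pth_p, pth_e = p_th(m_p, T2), p_th(m_e, T2)

for Q_inj in (3.0, 3.5, 4.0):
    p_inj = Q_inj * pth_p
    # electrons must be pre-accelerated by this factor above their thermal peak
    print(Q_inj, p_inj / pth_e)             # ~129, 150, 171 for Q_inj = 3, 3.5, 4
```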
The development of suprathermal tails of both proton and electron distributions are two outstanding problems in the theory of collisionless shocks, which involve complex waveparticle interactions such as the excitation of kinetic/MHD waves via plasma instabilities and the stochastic acceleration by plasma turbulence (see Petrosian 2012; Schure et al. 2012, for recent reviews). For example, stochastic acceleration of thermal electrons by electron-whistler interactions is known to be very ef\ufb01cient in low \u03b2p and low \u03b1p plasmas such as solar \ufb02ares (Hamilton & Petrosian 1992). Recently, the pre-heating of electrons and the injection of protons at non-relativistic collisonless shocks have been studied using Particle-in-Cell (PIC) and hybrid plasma simulations for a wide range of parameters (e.g. Amano & Hoshino 2009; Guo & Giacalone 2010, 2013; Riquelme & Spitkovsky 2011; Garat\u00e9 & Spitkovsky 2012; Caprioli & Spitkovsky 2013). In PIC simulations, the Maxwell\u2019s equations for electric and magnetic \ufb01elds are solved along with the equations of motion for ions and electrons, so full wave-particle interactions can be followed from \ufb01rst principles. In hybrid simulations, only ions are treated kinetically, while electrons are treated as a neutralizing, massless \ufb02uid. Using two and three-dimensional PIC simulations, Riquelme & Spitkovsky (2011) showed that for low Alfv\u00e9nic Mach numbers (MA \u227220), oblique whistler waves can be excited in the foot of quasi-perpendicular shocks (but not at perfectly perpendicular shocks with \u0398Bn = 90\u25e6). Electrons are then accelerated via wave-particle interactions with those whistlers, resulting in a power-law suprathermal tail. They found that the suprathermal tail can be represented by the energy spectrum ne(E) \u221dE\u2212a with the slope a = 3 \u22124, which is harder for smaller MA (i.e., larger vA or smaller \u03b1p). Nonrelativistic electrons streaming away from a shock can resonate only with high frequency whistler waves with right hand helicity, while protons and relativistic electrons (with Lorentz factor \u03b3 > mp/me) resonate with MHD (Alfv\u00e9n) waves. So the generation of oblique whistlers is thought to be one of the agents for pre-acceleration of electrons (Shimada et al. 1999). In fact, obliquely propagating whistler waves and high energy electrons are often observed together in the upstream region of quasi-perpendicular interplanetary shocks (e.g. Shimada et al. 1999; Wilson et al. 2009). Recently, Wilson et al. (2012) observed obliquely propagating whistler modes in the precursor of several quasiperpendicular interplanetary shocks with low Mach numbers (fast mode Mach number M f \u22482 \u22125), simultaneously with perpendicular ion heating and parallel electron acceleration. This observation implies that oblique whistlers could play an important role in the development of a suprathermal halo around the thermal core in the electron velocity distribution at quasi-perpendicular shocks with moderate MA. Using two-dimensional PIC simulations for perpendicular shocks with MA \u223c45, Matsumoto et al. (2013) found that several kinetic instabilities (e.g. Buneman, ion-acoustic, ion Weibel) are excited at the leading edge of the shock foot and that electrons can be energized to relativistic energies via the shock sur\ufb01ng mechanism. 
They suggested that the shock surfacing acceleration can provide the effective pre-heating of electrons at strong SNR shocks with high Alfv\u00e9nic Mach numbers (MA \u2273100). Because non-relativistic electrons and protons interact with different types of plasma waves and instabilities, they can have suprathermal tails with different properties that depend on plasma and shock parameters, such as \u0398Bn, \u03b1p, \u03b2p, Ms, and MA. So the power-law index of the \u03ba-distributions for electrons and protons, \u03bae and \u03bap, respectively, should depend on these parameters, and they could be signi\ufb01cantly different from each other. For example, the electron distributions measured in the IPM can be \ufb01tted with the \u03badistributions with \u03bae \u223c2 \u22125, while the proton distributions prefer a somewhat larger \u03bap (Pierrard & Lazar 2010). Us\fDiffusive Shock Acceleration 3 ing in situ spacecraft data, Neergaard-Parker & Zank (2012) suggested that the proton spectra observed downstream of quasi-parallel interplanetary shocks can be explained by the injection from the upstream (solar-wind) thermal Maxwellian or weak \u03ba-distribution with \u03bap \u227310. On the other hand, Neergaard-Parker et al. (2014) showed that the upstream suprathermal tail of the \u03bap = 4 distribution is the best to \ufb01t the proton spectra observed downstream of quasi-perpendicular interplanetary shocks1. They reasoned that the upstream proton distribution may form a relatively \ufb02at \u03ba-like suprathermal tail due to the particles re\ufb02ected at the magnetic foot of quasiperpendicular shocks, while at quasi-parallel shocks the upstream proton distribution remains more-or-less Maxwellian. In this paper, we consider a phenomenological model for the thermal leakage injection in the DSA process by taking the \u03ba-distributions as empirical forms for the suprathermal tails of the electron and proton distributions at collisionless shocks. The \u03ba-distribution is described in Section 2. The injection fraction is estimated in Section 3, followed by a brief summary in Section 4. 2. BASIC MODELS For the postshock nonrelativistic gas of kinetic temperature T2 and particle density n2, the Maxwellian momentum distribution is given as fM(p) = n2 \u03c01.5 p\u22123 th exp \" \u2212 \u0012 p pth \u00132# , (2) where pth = \u221a2mkBT2 is the thermal peak momentum and the mass of the particle is m = me for electrons and m = mp for protons. The distribution function is de\ufb01ned in general as R 4\u03c0p2 f(p)dp = n2. Here we assume that the electron and proton distributions have the same kinetic temperature, so that pth,e = p me/mp \u00b7 pth,p. The \u03ba-distribution can be described as f\u03ba(p) = n2 \u03c01.5 p\u22123 th \u0393(\u03ba+1) (\u03ba\u22123/2)3/2\u0393(\u03ba\u22121/2) \u0014 1 + p2 (\u03ba\u22123/2)p2 th \u0015\u2212(\u03ba+1) , (3) where \u0393(x) is the Gamma function (e.g. Pierrard & Lazar 2010). The \u03ba-distribution asymptotes to a power-law form, f\u03ba(p) \u221dp\u22122(\u03ba+1) for p \u226bpth, which translates into N(E) \u221d E\u22122\u03ba for relativistic energies, E \u2273mc2. For large \u03ba, it asymptotes to the Maxwellian distribution. For a smaller value of \u03ba, the \u03ba-distribution has a \ufb02atter, suprathermal, power-law tail, which may result from larger wave-particle interaction rates. 
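Equations (2) and (3) above can be coded directly and sanity-checked against the stated normalization, \u222b 4\u03c0p\u00b2 f(p) dp = n2. A sketch (working in units of pth to avoid extreme cgs numbers; scipy supplies the Gamma function):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def f_maxwell(p, pth=1.0, n2=1.0):
    """Equation (2)."""
    return n2 / (np.pi**1.5 * pth**3) * np.exp(-(p / pth)**2)

def f_kappa(p, kappa, pth=1.0, n2=1.0):
    """Equation (3); asymptotes to f ~ p^(-2(kappa+1)) for p >> pth."""
    norm = gamma(kappa + 1.0) / ((kappa - 1.5)**1.5 * gamma(kappa - 0.5))
    return (n2 / (np.pi**1.5 * pth**3) * norm
            * (1.0 + p**2 / ((kappa - 1.5) * pth**2))**(-(kappa + 1.0)))

# Both forms should integrate back to n2 = 1 (momenta in units of p_th).
for label, f in [("Maxwellian", f_maxwell),
                 ("kappa=10",   lambda p: f_kappa(p, 10.0)),
                 ("kappa=2",    lambda p: f_kappa(p, 2.0))]:
    n_check, _ = quad(lambda p: 4.0 * np.pi * p**2 * f(p), 0.0, np.inf)
    print(label, n_check)
```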
Note that for the \u03ba-distribution in equation (3), the mean energy per particle, m\u27e8v2\u27e9/2 = (2\u03c0m/n2) R v2 f\u03ba(p)p2dp, becomes (3/2)kBT2 and the gas pressure becomes P2 = n2kBT2, providing that particle speeds are nonrelativisitic. The top panel of Figure 1 compares fM and f\u03ba for electrons and protons when T2 = 5\u00d7107 K (corresponding to the shock speed of us \u22481.9\u00d7103 km s\u22121 in the large Ms limit.) Here, the momentum is expressed in units of mec for both electrons and protons, so the distribution function f(p) is plotted in units of n2/(mec)3. Note that the plotted quantity is p3 f(p)d ln p = p2 f(p)dp \u221dn(p)dp. For smaller values of \u03ba, the low energy portion of f\u03ba(p) also deviates more signi\ufb01cantly from fM(p). 1 Note that Neergaard-Parker & Zank (2012) and Neergaard-Parker et al. (2014) model particles from the upstream suprathermal pool being injected into the DSA process, while here we assume that particles from the downstream suprathermal pool are injected. For the \u03ba-distribution, the most probable momentum (or the peak momentum) is related to the Maxwellian peak momentum as p2 mp = p2 th \u00b7(\u03ba\u22123/2)/\u03ba. So for a smaller \u03ba, the ratio of pmp/pth becomes smaller. In other words, the peak of f\u03ba(p) is shifted to a lower momentum for a smaller \u03ba, as can be seen in the top panel of Figure 1. To account for this we will suppose a hypothetical case in which the postshock temperature is modi\ufb01ed for a \u03ba-distribution as follows: T \u2032 2(\u03ba) = T2 \u03ba (\u03ba\u22123/2). (4) Then the most probable momentum becomes the same for different \u03ba\u2019s. The bottom panel of Figure 1 compares the Maxwellian distribution for T2 = 5 \u00d7 107 K and the \u03badistributions with the corresponding T \u2032 2(\u03ba)\u2019s. For such \u03badistributions, the distribution of low energy particles with p \u2272pth remains very similar to the Maxwellian distribution. In that case, low energy particles follow more-or-less the Maxwellian distribution, while higher energy particles above the thermal peak momentum show a power-law tail. This might represent the case in which thermal particles with p \u2273 pth gain energies via stochastic acceleration by pre-existing and/or self-excited waves in the shock transition layer, resulting in a \u03ba-like tail and additional plasma heating. Such \u03badistributions with plasma heating could be close to the real particle distributions behind collisionless shocks. So below we will consider two cases: the T2 model in which the postshock temperature is same and the T \u2032 2 model in which the postshock temperature depends on \u03ba as in equation (4). 3. INJECTION FRACTION We assume that the distribution function of the particles accelerated by DSA, which we refer to as cosmic rays (CRs), at the position of the shock has the test-particle power-law spectrum for p \u2265pinj \u2261Qinj \u00b7 pth,p, fCR(p) = f(pinj)\u00b7 \u0012 p pinj \u0013\u2212q , (5) where the power-law slope is given as q = 3(u1 \u2212vA,1) u1 \u2212vA,1 \u2212u2 . (6) Here u1 and u2 are the upstream and downstream \ufb02ow speeds, respectively, in the shock rest frame, and vA,1 = B1/\u221a4\u03c0\u03c11 is the upstream Alfv\u00e9n speed. This expression takes account of the drift of the Alfv\u00e9n waves excited by streaming instabilities in the shock precursor (e.g., Kang 2011, 2012). If vA,1 = 0, the power-law slope becomes q = 4 for Ms \u226b1 and q = 4.5 for Ms = 3. 
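Equations (4) and (6) translate into a few lines; a sketch, assuming the standard Rankine-Hugoniot compression ratio r = u1/u2 = 4Ms\u00b2/(Ms\u00b2 + 3) for \u03b3 = 5/3 (not written out in the text, but consistent with the quoted limits q = 4 for Ms \u226b 1 and q = 4.5 for Ms = 3 when vA,1 = 0):

```python
def compression_ratio(Ms, gamma=5.0 / 3.0):
    """Rankine-Hugoniot compression ratio r = u1 / u2."""
    return (gamma + 1.0) * Ms**2 / ((gamma - 1.0) * Ms**2 + 2.0)

def dsa_slope(Ms, u1, vA1=0.0):
    """Equation (6): q = 3 (u1 - vA1) / (u1 - vA1 - u2), with f_CR ~ p^-q."""
    u2 = u1 / compression_ratio(Ms)
    return 3.0 * (u1 - vA1) / (u1 - vA1 - u2)

def T2_prime(T2, kappa):
    """Equation (4): rescaled postshock temperature for the T2' model."""
    return T2 * kappa / (kappa - 1.5)

u1 = 3.0e3                                # km/s, illustrative shock speed
print(dsa_slope(100.0, u1))               # -> 4.0 (strong-shock limit)
print(dsa_slope(3.0, u1))                 # -> 4.5
print(dsa_slope(3.0, u1, vA1=150.0))      # steeper than 4.5 with Alfvenic drift
print(T2_prime(5.0e7, 2.0), T2_prime(5.0e7, 30.0))
```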
Note that in our phenomenological model, we assume the \u03ba-distribution extends only to p = pinj, above which the DSA power-law in equation (5) sets in. In Figure 1 the vertical dotted lines show the range of pinj = (3.5 \u22124) pth,p, above which the particles can participate the DSA process. With \u03bap = 30 for protons, \u03bae = 2 for electrons, and pinj = 4pth,p, for example, the ratio of fe(pinj)/ fp(pinj) \u224810\u22122.6 for the T2 model, while fe(pinj)/ fp(pinj) \u224810\u22121.9 for the T \u2032 2 model. The parameter Qinj determines the CR injection fraction, \u03be \u2261nCR/n2 as follows. In the case of the Maxwellian distribution the fraction is \u03beM = 4 \u221a\u03c0 Q3 inj (q \u22123) \u00b7exp(\u2212Q2 inj), (7) \f4 Kang et al. while in the case of the \u03ba-distribution it is \u03be\u03ba = 4 \u221a\u03c0 Q3 inj (q \u22123) \u00b7 \u0393(\u03ba+1) (\u03ba\u22123/2)3/2\u0393(\u03ba\u22121/2) \" 1 + Q2 inj (\u03ba\u22123/2) #\u2212(\u03ba+1) . (8) Note that both forms of the injection fraction are independent of the postshock temperature T2, but dependent on Qinj and the shock Mach number, through the slope q(Ms). For the Maxwellian distribution, \u03beM decreases exponentially with the parameter Qinj, which in general depends on the shock Mach number as well as on the obliquity. Since the injection process should depend on the level of pre-existing and self-excited plasma/MHD waves, Qinj is expected to increase with \u0398Bn. For example, in a model adopted for quasi-parallel shocks (e.g. Kang & Ryu 2010), Qinj \u2248\u03c7mpu2 pth,p = \u03c7 r \u03b3 2\u00b5 u2 cs,2 = \u03c7 r \u03b3 2\u00b5 \u0014 (\u03b3 \u22121)M2 s +2 2\u03b3M2 s \u2212(\u03b3 \u22121) \u00151/2 , (9) where \u03c7 \u22485.8 \u22126.6, \u03b3 is the gas adiabatic index, and \u00b5 is the mean molecular weight for the postshock gas. For \u03b3 = 5/3 and \u00b5 = 0.6, this parameter approaches to Qinj \u22483\u22124 for large Ms, depending on the level of MHD turbulence, and it increases as Ms decreases (see Figure 1 of Kang & Ryu (2010)). Using hybrid plasma simulations, Caprioli & Spitkovsky (2013) suggested Qinj = 3\u22124 at quasi-parallel shocks with Ms \u2248MA \u224820, leading to the injection fraction of \u03bep \u224810\u22124 \u221210\u22123 for protons. For highly oblique and perpendicular shocks, the situation is more complex and the modeling of Qinj becomes dif\ufb01cult, partly because MHD waves are not self-excited effectively and partly because the perpendicular diffusion is not well understood (e.g. Neergaard-Parker et al. 2014). So the injection process at quasi-perpendicular shocks depends on the preexisting MHD turbulence in the upstream medium as well as the angle \u0398Bn. For example, Zank et al. (2006) showed that in the case of interplanetary shocks in the solar wind located near 1AU from the sun, the injection energy is similar for \u0398Bn = 0\u25e6and 90\u25e6, but it peaks at highly oblique shocks with \u0398Bn \u223c60\u221280\u25e6. So Qinj would increase with \u0398Bn, but decrease as \u0398Bn \u219290\u25e6. The same kind of trend may apply for cluster shocks, but again the details will depend on the MHD turbulence in the ICM. Here we will consider a range of values, 3 \u2264Qinj \u22645. For the \u03ba-distribution, \u03be\u03ba also decreases with Qinj but more slowly than \u03beM does. This means that the dependence of injection fraction on the shock sonic Mach number would be weaker in the case of the \u03ba-distribution. 
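Equations (7)-(9) can be evaluated directly; the printout below illustrates how \u03be\u03ba falls off with Qinj much more slowly than \u03beM, and how Qinj grows as Ms decreases. A sketch, assuming q = 4 (strong-shock limit) for the tabulated fractions:

```python
import numpy as np
from scipy.special import gamma

def xi_maxwell(Q, q=4.0):
    """Equation (7)."""
    return 4.0 / np.sqrt(np.pi) * Q**3 / (q - 3.0) * np.exp(-Q**2)

def xi_kappa(Q, kappa, q=4.0):
    """Equation (8)."""
    pref = gamma(kappa + 1.0) / ((kappa - 1.5)**1.5 * gamma(kappa - 0.5))
    return (4.0 / np.sqrt(np.pi) * Q**3 / (q - 3.0) * pref
            * (1.0 + Q**2 / (kappa - 1.5))**(-(kappa + 1.0)))

def Q_inj(Ms, chi=6.0, gamma_g=5.0 / 3.0, mu=0.6):
    """Equation (9); chi ~ 5.8-6.6 parametrizes the level of MHD turbulence."""
    return chi * np.sqrt(gamma_g / (2.0 * mu)) * np.sqrt(
        ((gamma_g - 1.0) * Ms**2 + 2.0) / (2.0 * gamma_g * Ms**2 - (gamma_g - 1.0)))

for Q in (3.0, 3.5, 4.0, 4.5, 5.0):
    print(Q, xi_maxwell(Q), xi_kappa(Q, 30.0), xi_kappa(Q, 10.0))
print(Q_inj(100.0), Q_inj(3.0))   # Q_inj increases as Ms decreases
```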
Figure 2 shows the energy spectrum of protons for the two (i.e., T2 and T \u2032 2) models shown in Figure 1. Here the energy spectrum is calculated as np(E) = 4\u03c0p2 f(p)(dp/dE), where the kinetic energy is E = q p2c2 +m2 pc4 \u2212mpc2 and the distribution function f(p) is given in equations (2) or (3). The \ufb01lled and open circles mark the spectrum at the energies corresponding to 3.5 pth,p and 4 pth,p for the Maxwellian distribution and the \u03ba-distributions with \u03bap = 10 and 30. This shows that the injection ef\ufb01ciency for CR protons would be enhanced in the \u03ba-distributions, compared to the Maxwellian distribution, by a factor of \u03be\u03bap=10/\u03beM \u223c100\u2212300 and \u03be\u03bap=30/\u03beM \u223c10 \u221220. There are reasons why the cases of \u03bap = 10 \u221230 are shown here. It has been suggested that the upstream suprathermal populations can be represented by the \u03badistribution with \u03bap \u22484 at quasi-perpendicular IPM shocks (Neergaard-Parker et al. 2014) and \u03bap \u227310 at quasi-parallel IPM shocks (Neergaard-Parker & Zank 2012). However, the proton injection at quasi-parallel shocks is much more ef\ufb01cient than that at quasi-perpendicular shocks, because the injection energy is much higher at highly oblique shocks (e.g. Zank et al. 2006). Moreover, Caprioli & Spitkovsky (2013) showed that the proton injection at quasi-parallel shocks can be modeled properly with the thermal leakage injection from the Maxwellian distribution at pinj \u2248(3 \u22124)pth,p. They also showed that a harder suprathermal population forms at larger \u0398BN, which is consistent with the observations at IPM shocks. But the power-law CR spectrum does not develop at (almost) perpendicular shocks due to lack of self-excited waves in their hybrid simulations. As shown in Figure 1 the electron distribution needs a substantially more enhanced suprathermal tail, for example, the one in the \u03ba-distribution with \u03bae \u223c2, in order to achieve the electron-to-proton ratio Ke/p \u223c10\u22123 \u221210\u22122 with the thermal leakage injection model. Figure 3 shows the energy spectrum of electrons for the two models shown in Figure 1. Here the energy spectrum for electrons is calculated as ne(\u0393e \u22121) = 4\u03c0p2 f(p)(dp/d\u0393e), where the Lorentz factor is \u0393e = p 1 +(p/mec)2. The \ufb01lled and open circles mark the spectrum at the energies corresponding to pinj = (3.5 \u22124) pth,p for the \u03ba-distributions with \u03bae = 1.6,2.0, and 2.5. Note that the \u03ba-distribution is de\ufb01ned for \u03ba > 3/2. In the PIC simulations of quasi-perpendicular shocks by Riquelme & Spitkovsky (2011), the power-law slope of ne(E) at \u0393e \u223c10 \u2212100 ranges 2.7 < a < 4 for 3.5 \u2264MA \u226414, where mp/me = 1600 was adopted (see their Figure 12). This would translate roughly into \u03bae \u22722, which is consistent with the observations at quasiperpendicular IPM shocks (Pierrard & Lazar 2010). If the suprathermal tails of electrons and protons can be described by the \u03ba-distributions with \u03bae and \u03bap, respectively, for p \u2264pinj, and if both CR electrons and protons have simple power-laws given in equation (5) for p > pinj, then the injection fractions, \u03bep and \u03bee, for \u03ba-distributions can be estimated by equation (8). Figure 4 compares the injection fractions, \u03bep and \u03bee, for the two models shown in Figures 1-3. 
Note that the slope q depends on Ms, so \u03be(q \u22123) is plotted instead of just \u03be. Now the ratio of CR electron to proton numbers can be calculated as Ke/p(Qinj,\u03bae,\u03bap) \u2261\u03bee(Qinj,\u03bae) \u03bep(Qinj,\u03bap) = fe(pinj,\u03bae) fp(pinj,\u03bap). (10) In the \u03ba-distribution of protons with \u03bap = 30 (dot-dashed line), for example, \u03bep decrease from 10\u22123 to 10\u22124 when Qinj increase from 3.5 to 4. We note that for \u03bap \u227210 or for Qinj \u22723.5, the proton injection fraction would be too high (i.e., \u03bep > 10\u22123) to be consistent with commonly-accepted DSA modelings of observed shocks such as SNRs. The parameter Qinj would in general increase for a smaller Ms as illustrated in equation (9). The dependence of the injection fraction on Ms becomes weaker for the \u03ba-distribution than for the Maxwellian distribution. As a result, the suppression of the CR injection fraction at weak shocks will be less severe if the \u03ba-distribution is considered. For electrons, the injection fraction would be too small if they were to be injected by way of thermal leakage from the Maxwellian distribution. So that case is not included in Figure 4. The expected electron injection would be \u03bee \u223c10\u22126 \u221210\u22125, if one takes Ke/p \u223c10\u22123\u221210\u22122 and \u03bep \u223c10\u22124\u221210\u22123. Then, the \fDiffusive Shock Acceleration 5 suprathermal tails of the \u03ba-distributions with \u03bae \u22722 would be necessary. For electron distributions with such \ufb02at suprathermal tails, the injection fraction would not be signi\ufb01cantly suppressed even at weak shocks. Turbulent waves excited in the shock precursor/foot should decay away from the shock (both upstream and downstream), as seen in the interplanetary shocks (Wilson et al. 2012) and the PIC simulations (Riquelme & Spitkovsky 2011). Thus it is possible the \u03ba-like suprathermal electron populations exist only in a narrow region around the shock, and in any case the differences from Maxwellian form that we discuss here are too limited to produce easily observable signatures such as clearly nonthermal hard X-ray bremsstrahlung. During the very early stage of SNR expansion with us > 104 km s\u22121, the postshock electrons should be described by the relativistic Maxwellian distribution with a relatively slow exponential cutoff of exp(\u2212\u0393emec2/kBT) instead of equation (2). However, injection from relativistic electron plasmas at collisionless shocks could involve much more complex plasma processes and lie beyond the scope of this study. 4. SUMMARY In the so-called thermal leakage injection model for DSA, the injection fraction depends on the number of suprathermal particles near the injection momentum, pinj = Qinjpth,p, above which the particles can participate in the DSA process (e.g. Kang et al. 2002). The parameter Qinj should be larger for larger oblique angle, \u0398Bn, and for smaller sonic Mach number, Ms, leading to a smaller injection fraction. Moreover, it should depend on the level of magnetic \ufb01eld turbulence, both pre-existing and self-excited, which in turn depends on the plasma parameters such as \u03b2p and \u03b1p as well as the powerspectrum of MHD turbulence. Since the detailed plasma processes related with the injection process are not fully understood, here we consider a feasible range, 3 \u2264Qinj \u22645. 
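Equation (10), evaluated with the \u03ba-distribution of Equation (3), reproduces the numbers quoted in Section 3, e.g. fe(pinj)/fp(pinj) \u2248 10\u22122.6 for \u03bae = 2, \u03bap = 30 and pinj = 4 pth,p in the T2 model, and \u2248 10\u22121.9 in the T2' model. A self-contained sketch; the physical mass ratio mp/me \u2248 1836 is assumed, and the T2'-model handling simply rescales each species' pth by \u221a(\u03ba/(\u03ba\u22123/2)) as implied by Equation (4):

```python
import numpy as np
from scipy.special import gamma

MP_ME = 1836.15    # physical proton-to-electron mass ratio

def f_kappa(x, kappa):
    """Equation (3) evaluated at x = p / p_th, in units where n2 = p_th = 1."""
    norm = gamma(kappa + 1.0) / ((kappa - 1.5)**1.5 * gamma(kappa - 0.5))
    return norm / np.pi**1.5 * (1.0 + x**2 / (kappa - 1.5))**(-(kappa + 1.0))

def K_ep(Q_inj, kappa_e, kappa_p, T2_prime=False):
    """Equation (10): K_e/p = f_e(p_inj) / f_p(p_inj), assuming T_e = T_p."""
    # In the T2' model each species' p_th is stretched by sqrt(kappa/(kappa-3/2))
    # so that the peak momentum matches the Maxwellian one (Equation 4).
    se = np.sqrt(kappa_e / (kappa_e - 1.5)) if T2_prime else 1.0
    sp = np.sqrt(kappa_p / (kappa_p - 1.5)) if T2_prime else 1.0
    Qe = Q_inj * np.sqrt(MP_ME) / se      # p_inj in units of the electron p_th
    Qp = Q_inj / sp                       # p_inj in units of the proton p_th
    fe = f_kappa(Qe, kappa_e) / se**3     # 1/p_th^3 prefactors of Equation (3)
    fp = f_kappa(Qp, kappa_p) / sp**3
    return MP_ME**1.5 * fe / fp           # times (p_th,p / p_th,e)^3 = (mp/me)^1.5

print(np.log10(K_ep(4.0, 2.0, 30.0)))                 # ~ -2.6  (T2 model)
print(np.log10(K_ep(4.0, 2.0, 30.0, T2_prime=True)))  # ~ -1.9  (T2' model)
```

Combining this with \u03be\u03ba from Equation (8) recovers the quoted trend \u03bep \u223c 10\u22123 to 10\u22124 for \u03bap = 30 as Qinj goes from 3.5 to 4, and hence Ke/p \u223c 10\u22123\u221210\u22122 for \u03bae \u2272 2.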
Assuming that suprathermal particles, both protons and electrons, follow the \u03ba-distribution with a wide range of the power-law index, \u03bap and \u03bae, we have calculated the injection fractions for protons and electrons. A \u03ba-type distribution or distribution consisting of a quasi-thermal plus a nonthermal tail, with a short dynamic range as the one needed here, is expected in a variety of models for acceleration of nonrelativistic thermal particles (see e.g. Petrosian & East (2008) for acceleration in ICM or Petrosian & Lui (2004) for acceleration in Solar \ufb02ares). The fact that ef\ufb01cient accelerations of electrons and protons require \u03ba-type distributions with different values of \u03ba suggests that they are produced by interactions with different types of waves; e.g., Alfven waves for protons and whistler waves for electrons. We show that \u03bap \u223c10 \u221230 leads to the injection fraction of \u03bep \u223c10\u22124\u221210\u22123 for protons at quasi-parallel shocks, while \u03bae \u22722 leads to the injection fraction of \u03bee \u223c10\u22126 \u221210\u22125 for electrons at quasi-perpendicular shocks. The proton injection is much less ef\ufb01cient at quasiperpendicular shocks, compared to quasi-parallel shocks, because MHD waves are not ef\ufb01ciently self-excited (Zank et al. 2006; Caprioli & Spitkovsky 2013). For electrons, a relatively \ufb02at \u03ba-distribution may form due to obliquely propagating whistlers at quasi-perpendicular shocks with moderate Mach numbers (MA \u227220), and \u03bae is expected to decrease for a smaller MA (i.e. smaller \u03b1p or stronger magnetization) (Riquelme & Spitkovsky 2011). We note that these \u03ba-like suprathermal populations are expected to exist only in a narrow region around the shock, since they should be produced via plasma/MHD interactions with various waves, which could be excited in the shock precursor and then decay downstream. In addition, we point out that acceleration (to high CR energies) is less sensitive to shock and plasma parameters for a \u03ba-distribution than the Maxwellian distribution. So, the existence of \u03ba-like suprathermal tails in the electron distribution would alleviate the problem of extremely low injection fractions for weak quasi-perpendicular shocks such as those widely thought to power radio relics found in the outskirts of galaxy clusters (Kang et al. 2012; Pinzke et al. 2013; Brunetti & Jones 2014). Finally, we mention that electrons are not likely to be accelerated at weak quasi-parallel shocks, according to in situ measurements of interplanetary shocks (e.g. Oka et al. 2006) and PIC simulations (e.g. Riquelme & Spitkovsky 2011). At strong quasi-parallel shocks, on the other hand, Riquelme & Spitkovsky (2011) suggested that electrons could be injected ef\ufb01ciently through locally perpendicular portions of the shock surface, since turbulent magnetic \ufb01elds are excited and ampli\ufb01ed by CR protons streaming ahead of the shock. Thus the magnetic \ufb01eld obliquity, both global and local to the shock surface, and magnetic \ufb01eld ampli\ufb01cation via wave-particle interactions are among the key players that govern the CR injection at collisionless shocks and need to be further studied by plasma simulations. HK thanks KIPAC for hospitality during the sabbatical leave at Stanford University, where a part of work was done. HK was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2012013-2012S1A2A1A01028560). 
VP was supported by NASA grants NNX10AC06G, NNX13AF79G and NNX12AO78G. DR was supported by the National Research Foundation of Korea through grant 2007-0093860. TJ was supported by NSF grant AST1211595, NASA grant NNX09AH78G, and the Minnesota Supercomputing Institute. The authors would like to acknowledge the valuable comments from an anonymous referee." + }, + { + "url": "http://arxiv.org/abs/1308.6652v1", + "title": "Nonthermal Radiation from Supernova Remnants: Effects of Magnetic Field Amplification and Particle Escape", + "abstract": "We explore nonlinear effects of wave-particle interactions on the diffusive\nshock acceleration (DSA) process in Type Ia-like, SNR blast waves, by\nimplementing phenomenological models for magnetic field amplification,\nAlfv'enic drift, and particle escape in time-dependent numerical simulations of\nnonlinear DSA. For typical SNR parameters the CR protons can be accelerated to\nPeV energies only if the region of amplified field ahead of the shock is\nextensive enough to contain the diffusion lengths of the particles of interest.\nEven with the help of Alfv'enic drift, it remains somewhat challenging to\nconstruct a nonlinear DSA model for SNRs in which order of 10 % of the\nsupernova explosion energy is converted to the CR energy and the magnetic field\nis amplified by a factor of 10 or so in the shock precursor, while, at the same\ntime, the energy spectrum of PeV protons is steeper than E^{-2}. To explore the\ninfluence of these physical effects on observed SNR emissions, we also compute\nresulting radio-to-gamma-ray spectra. Nonthermal emission spectra, especially\nin X-ray and gamma-ray bands,depend on the time dependent evolution of CR\ninjection process, magnetic field amplification, and particle escape, as well\nas the shock dynamic evolution. This result comes from the fact that the high\nenergy end of the CR spectrum is composed of the particles that are injected in\nthe very early stages of blast wave evolution. Thus it is crucial to understand\nbetter the plasma wave-particle interactions associated with collisionless\nshocks in detail modeling of nonthermal radiation from SNRs.", + "authors": "Hyesung Kang, T. W. Jones, Paul P. Edmon", + "published": "2013-08-30", + "updated": "2013-08-30", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Supernova remnants (SNRs) are strong sources of nonthermal radiations, indicating clearly that they are sites of ef\ufb01cient particle acceleration. In fact, SNRs are thought to be responsible via the diffusive shock acceleration (DSA) mechanism for the production of most of the Galactic cosmic rays (CRs) at least up to the \ufb01rst knee energy of 1015.5eV (see Hillas 2005; Reynolds 2008, for reviews). The spectral and spatial distributions of the nonthermal emissions carry important information about how DSA works in SNRs. At present there are several signi\ufb01cant tensions in this comparison, especially in comparisons that account for likely nonlinear feedback of DSA on the shock dynamics and structure (e.g. Malkov et al. 2011; Caprioli 2012; Kang 2013). The possibility of strong magnetic \ufb01eld ampli\ufb01cation (MFA) as a consequence of nonlinear DSA has recently received considerable attention in this context (e.g. Reynolds et al. 2012; Schure et al. 2012). 
In DSA theory suprathermal particles go through pitchangle scatterings by magnetohydrodynamic (MHD) waves around collisionless shocks and can be accelerated to relativistic energies through the Fermi \ufb01rst-order process (Bell 1978; Drury 1983; Malkov & Drury 2001). In fact, those waves are known to be self-excited both resonantly and nonresonantly by CRs streaming away from the shock (e.g. Skilling 1975b; Lucek & Bell 2000; Bell 2004). Plasma and MHD simulations have shown that the CR streaming instability indeed excites MHD waves and ampli\ufb01es the turbulent magnetic \ufb01elds by as much as orders of magnitude in the shock precursor (e.g. Zirakashvili & Ptuskin 2008; Ohira et al. 2009; Riquelme & Spitkovsky 2009, 2010; 1 Author to whom any correspondence should be addressed. Bell et al. 2013). Thin X-ray rims of several young Galactic SNRs provide observational evidence that the magnetic \ufb01eld is ampli\ufb01ed up to several 100\u00b5G downstream of the forward shock (e.g. Bamba et al. 2003; Parizot et al. 2006; Eriksen et al. 2011). We note, however, there is as yet no direct observational evidence for the ampli\ufb01ed magnetic \ufb01eld in the upstream, precursor structures of SNR shocks. An immediate consequence of magnetic ampli\ufb01cation (MFA) in shock precursors is the potential to accelerate CR ions beyond the \ufb01rst knee, which is otherwise dif\ufb01cult in SNRs (Lagage & Cesarsky 1983). The maximum attainable energy, given by the so-called Hillas constraint, Emax \u223c (us/c)eZBrs, can reach up to 1015.5Z eV for typical SNRs only if the upstream, ISM, magnetic \ufb01eld, B, is ampli\ufb01ed in the precursor of the shock by a factor of at least 10 or so above typical ISM values (e.g., Lucek & Bell 2000; Hillas 2005). Here us and c are the shock speed and the light speed, respectively, eZ is the particle charge and rs is the shock radius. Generally this scenario depends on the highest energy CRs themselves driving the ampli\ufb01cation of turbulent \ufb01elds over the associated CR diffusion length scale of lmax = \u03ba(pmax)/us \u223crg(pmax) \u00b7 (c/3us), which is much larger than the gyroradius, rg(pmax) = pmaxc/(eZB), where \u03ba(p) is the CR diffusion coef\ufb01cient (Kang 2013). The diffusion length, lmax, corresponds to the scale height of the shock precursor, which in practical terms, if one adopts Bohm diffusion, is approximately lmax \u22480.65pc(pmaxc/1PeV)(B1/50 \u00b5G)\u22121 (us/3000 km s\u22121)\u22121 (where B1 is the ampli\ufb01ed magnetic \ufb01eld in the precursor). In the initial, linear stages of current-driven MFA the nonresonantly driven waves grow exponentially in time with characteristic \ufb02uctuation scales much smaller than rg of es\f2 Kang, Jones, and Edmon caping particles (Pelletier et al. 2006). The maximum linear growth rate and the corresponding length scale are determined by the return current, jCR \u221dusp3 max f(pmax), which depends on the \ufb02ux of escaping particles with p \u2273pmax (Bell et al. 2013). After the nonresonant mode becomes nonlinear (i.e., \u03b4B/B0 > 1), for typical SNR shocks (us < 0.1c), the growth is dominated by the resonant mode and the MHD turbulence continues to grow with \ufb02uctuations on increasingly larger scales up to rg, being limited by the advection time over which the shock sweeps the precursor plasma (Marcowith & Casse 2010). 
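The precursor scale height quoted above, lmax \u2248 0.65 pc for a 1 PeV proton in a 50 \u00b5G amplified field at us = 3000 km/s, follows from Bohm diffusion, \u03baB \u2248 rg c/3, and lmax = \u03ba(pmax)/us. A quick check with cgs constants (the ~10% offset from the quoted coefficient just reflects rounding in the scaling relation):

```python
e_esu = 4.8032e-10     # statC
c     = 2.9979e10      # cm/s
pc    = 3.0857e18      # cm
erg_per_eV = 1.6022e-12

def l_max_pc(E_eV, B_uG, us_kms, Z=1):
    """Precursor diffusion length l_max = kappa_Bohm / u_s = r_g c / (3 u_s)."""
    r_g = E_eV * erg_per_eV / (Z * e_esu * B_uG * 1.0e-6)   # gyroradius in cm
    return r_g * c / (3.0 * us_kms * 1.0e5) / pc

print(l_max_pc(1.0e15, 50.0, 3000.0))   # ~0.7 pc, i.e. ~ the quoted 0.65 pc
print(l_max_pc(1.0e15, 5.0, 3000.0))    # ~10x larger without field amplification
```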
Moreover, other processes such as the acoustic instability may operate simultaneously and lead to the growth of MHD turbulence on \ufb02uctuation scales larger than rg (Schure et al. 2012). It has also been suggested that magnetic \ufb01eld \ufb02uctuations in shocks can grow on scales larger than rg(pmax) in the presence of nonresonant circularly-polarized waves through the mean-\ufb01eld dynamo (Bykov et al. 2011; Rogachevskii et al. 2012), or through other microphysical and hydrodynamical instabilities within the shock precursor including the \ufb01rehose, \ufb01lamentation, and acoustic instabilities (e.g. Reville & Bell 2012; Beresnyak et al. 2009; Drury & Downes 2012; Schure et al. 2012; Caprioli & Spitkovsky 2013). If MFA were con\ufb01ned only to a narrow region (\u226almax) close to the shock or occurred only downstream of the shock, the highest energy CRs with large diffusion lengths could not be accelerated ef\ufb01ciently, and pmax would still be limited by the background magnetic \ufb01eld, B0, instead of the ampli\ufb01ed \ufb01eld, B1. In most recent discussions CRs drive shock precursor MFA at rates that depend on the electric current associated with particle escape ahead of the shock (e.g., Bell 2004)2, although it remains to be understood if the CR current owing to the escaping highest energy CRs far upstream is a able to generate magnetic turbulence spanning lengths comparable to their own diffusion lengths. We do not attempt to address that issue here, but, instead apply several previously proposed phenomenological models for MFA motivated by this idea, in order to compare their impact on DSA and associated nonthermal emissions in evolving SNR shocks. Our particular aim in this regard is to evaluate the importance of the distribution of the ampli\ufb01ed magnetic \ufb01eld within the CR precursor, since various interpretations of MFA in the literature lead to different distributions. One of the signature consequences of standard nonlinear DSA theory in strong shocks is the hardening of the CR spectrum compared to test particle DSA theory at momenta approaching pmax from below, along with a steepening of the spectrum at low momenta (e.g., Caprioli et al. 2010b). That is, the predicted CR spectrum becomes concave between the injection momentum and the upper, cutoff momentum. This expected behavior does not, however, seem to be re\ufb02ected in observed nonthermal emissions, as discussed below. In fact, \u03b3-ray emissions seen in some SNRs seem best explained if the high energy CR spectra are actually steeper than predicted with test particle DSA theory. In\ufb02uences of MFA in the nonlinear DSA theory have been suggested as one remedy for this con\ufb02ict. We will explore that issue in this work. In fact a reduction of the CR acceleration ef\ufb01ciency and a steepening of the CR spectrum are potentially important consequences of MFA. These would result from increased rates of so-called Alfv\u00e9nic drift (e.g. Vladimirov et al. 2008; Caprioli 2 But, see e.g., Beresnyak et al. (2009); Drury & Downes (2012) for alternate views. 2012; Kang 2012), if the mean magnetic \ufb01eld is enhanced along the shock normal on scales large compared to particle gyroradii. Resonantly excited Alfv\u00e9n waves tend to drift along the mean \ufb01eld in the direction of CR streaming with respect to the background \ufb02ow, so opposite to the CR number (pressure) gradient. 
Then the mean convective velocity of the scattering centers becomes u + uw, where uw is the mean drift velocity of the scattering centers (Skilling 1975a; Bell 1978; Ptuskin & Zirakashvili 2005). For spherically expanding SNR shocks, the CR pressure peaks at the shock location. Thus, resonant waves moving outward into the background medium dominate in the upstream region, while inward propagating waves may dominate behind the shock. Then in mostly quasi-parallel spherical shocks it would be expected that the radial wave drift speed is uw,1 \u2248+vA in the upstream shock precursor (where vA = B/\u221a4\u03c0\u03c1 is the local Alfv\u00e9n speed), while uw,2 \u2248\u2212vA behind the shock (Skilling 1975a; Zirakashvili & Ptuskin 2012). It is conceivable, however, in the downstream region of the forward SNR shock that the forward and backward moving waves could be nearly balanced there (i.e. uw,2 \u22480) as a result of shock-related instabilities (e.g., Jones 1993). On the other hand, we do not have a fully self-consistent model for the wave generation and ampli\ufb01cation via wave-particle and wave-wave interactions around the shock. As we will demonstrate below, signi\ufb01cant post-shock drift in an ampli\ufb01ed \ufb01eld could strongly in\ufb02uence the resulting shock and CR properties (e.g., Zirakashvili & Ptuskin 2012). To illustrate that point simply, we will consider models in which either uw,2 \u22480 or uw,2 \u2248\u2212vA is adopted. These drifting effects probably are not relevant in the absence of MFA, since for typical Type Ia SNRs propagating into the interstellar medium, the effects of Alfv\u00e9nic drift can be ignored. The Alfv\u00e9nic Mach number is large for fast shocks in the interstellar medium, e.g., MA = us/vA \u223c200 for B0 \u22485 \u00b5G. However, if the magnetic \ufb01eld strength is increased by a factor of 10 or more in the precursor, Alfv\u00e9nic drift may affect signi\ufb01cantly DSA at such SNRs. In the presence of fast Alfv\u00e9nic drift due to ef\ufb01cient MFA, the velocity jumps that the scattering centers experience across the shock would become signi\ufb01cantly smaller than those of the underlying \ufb02ow (e.g., Bell 1978; Schlickeiser 1989). Since CR particles are isotropized in the mean frame of scattering centers rather than the underlying \ufb02uid, the resulting CR spectrum becomes softer than that predicted with the velocity jump for the background \ufb02ow (see, e.g., Equations (8) and (9) below). Then DSA extracts less energy from the shock \ufb02ow, because the rate at which particles gain energy is reduced compared to the rate of particle escape downstream. Consequently, there are fewer of the most energetic CRs (e.g., Kang 2012). For this reason Alfv\u00e9nic drift has been pointed out as a means to obtain a CR energy spectrum steeper than the conventional test-particle power-law for strong shocks, e.g. N(E) \u221dE\u22122.3 (e.g. Morlino & Caprioli 2012), as required to explain the observed \u03b3-ray spectra as a consequence of secondary pion decay in the GeV-TeV band of some young SNRs (Abdo et al. 2010; Acero et al. 2010; Acciari et al. 2011; Caprioli 2011; Giordano et al. 2012). Exploring this effect, Caprioli (2012) recently presented a nonlinear DSA model to produce a spectrum of SNR accelerated CRs that is steeper than the test-particle power-law at strong shocks by speci\ufb01cally accounting for Alfv\u00e9nic drift in strongly ampli\ufb01ed magnetic \ufb01elds. 
He adopted a magnetic \ufb01eld ampli\ufb01cation model in which the turbulent magnetic \fDiffusive Shock Acceleration at SNRs 3 \ufb01elds induced by CR streaming instabilities increase rapidly to a saturation level that is spatially uniform within the shock precursor. His model targeted spherical SNR shocks, but was based on sequential, semi-analytic, steady state DSA solutions; our work below, similarly motivated, applies spherical, explicitly time evolving numerical models to the problem. Beyond Alfv\u00e9nic drift, there is another potentially important property of CRs in SNR shocks that may lead to steeper spectra at the highest energies. First, it is expected that the highest energy particles may escape rapidly from the system when the diffusion length becomes greater than the shock curvature radius, i.e., lmax \u2273rs(t). In particular, note that the Hillas constraint given above corresponds, with Bohm diffusion, to the condition lmax \u223c(1/3)rs. This would steepen the CR spectrum with respect to the plane shock solution. which, in turn, would reduce the charge current driving instabilities, thus reducing the ef\ufb01ciency of particle scattering at the highest energies. However, considering that for typical SNRs, rs \u223c3 \u221210 pc, while lmax \u223c0.5pc(Emax/1PeV) in the case of ef\ufb01cient MFA in the upstream region, more stringent conditions due to reduction of MHD turbulence should be imposed here. As we described above in the discussion of MFA in the shock precursor, it remains uncertain up to what upstream location the highest energy particles can generate turbulent wave \ufb01elds that are strong enough to con\ufb01ne themselves around the shock via resonant scattering (Caprioli et al. 2010a; Drury 2011). Moreover, during the late Sedov-Taylor stage of SNRs evolving in partially ionized media wave dissipation due to ion-neutral collisions may weaken stochastic scattering on the relevant scales, facilitating free streaming of high energy CRs away from the SNR and out of the DSA process (Ptuskin & Zirakashvili 2005; Malkov et al. 2011). In order to account for such effects, we consider a free escape boundary located at rFEB = (1.1 \u22121.5)rs(t). We mention for completeness that alternate and more complex approaches to explaining the steep \u03b3-ray spectra in SNRs have been suggested. For example, Berezhko et al. (2013) have proposed recently that the observed, steep \u03b3-ray spectrum of Tycho\u2019s SNR could be explained by pion production from the combined populations of CR protons accelerated by shocks propagating into an ISM including two different phases. As noted earlier, multi-band observations of nonthermal emissions from radio to \u03b3-ray provide a powerful tool to test theoretical modeling of nonlinear DSA at SNRs (e.g. Berezhko et al. 2009, 2012; Caprioli 2011; Kang 2011; Morlino & Caprioli 2012). For instance the radio spectrum, F\u03bd \u221d\u03bd\u2212\u03b1, represents the energy spectrum of electrons, Ne(E) \u221dE\u2212r (with r = 2\u03b1 + 1 and E = \u03b3emec2). In a magnetic \ufb01eld of typical strength, B \u223c100\u00b5G, these electrons have a characteristic Lorentz factor, \u03b3e \u223c p \u03bdmec/(eB) \u223c103, for radio synchrotron emissions in the GHz band. 
If the peak CR electron energy is determined by a balance between DSA and synchrotron energy losses, and we assume for simplicity a steady shock, the X-ray synchrotron cutoff frequency is determined primarily by the shock speed; namely, h\u03bdc \u22484.1keV(us/3000 km s\u22121)2. However, under similar conditions for spherical, decelerating shocks, radiative cooling of the CR electrons within the SNR interior leads to a volumeintegrated electron spectrum steepened above a break energy that depends on the evolution of us(t) and B(r,t). Then the spatially unresolved, synchrotron radiation spectrum has a break at h\u03bdbr \u223c0.12keV(t/300yr)\u22122(B2/100 \u00b5G)\u22123 above which the photon spectral index, \u03b1, increases by 0.5 compared to the value without radiative cooling (Kang et al. 2012). The interpretation of the \u03b3-ray spectrum is more complicated, since \u03b3-ray emission can originate from both CR protons and CR electrons; namely, by way of the decay of neutral pions produced in p\u2212p interactions between CRs and the background medium, and from inverse Compton (iC) scattering of the background radiation by CRs electrons plus nonthermal electronic bremsstrahlung. The relative importance of the different components is governed by several factors, including the magnetic \ufb01eld strength, the background density, the background radiation \ufb01eld, and the CR electron to proton ratio, Ke/p. Given these ingredients, it is clearly crucial to incorporate MFA, Alfv\u00e9nic drift and particle escape in predicting nonthermal radiation spectrum of SNRs. In Kang (2013) (Paper I) phenomenological models for MFA, Alfv\u00e9nic drift and particle escape were implemented in time-dependent nonlinear DSA simulations of CR protons and electrons at the forward shock of Sedov-Taylor SNRs. Electronic synchrotron and iC losses were also included in the evolution of the electron spectra. Paper I demonstrated the following points for the MFA model employed there: 1) If scattering centers drift along the shock normal at the Alfv\u00e9n speed in highly ampli\ufb01ed magnetic \ufb01elds, the CR energy spectrum is steepened in evolving strong SNR shocks and the acceleration ef\ufb01ciency is signi\ufb01cantly reduced. 2) Even with fast Afv\u00e9nic drift, however, DSA can still be ef\ufb01cient enough to develop a substantial shock precursor and convert about 2030% of the SN explosion energy into CRs. 3) A CR proton spectrum steeper than E\u22122 was obtained only when Alfv\u00e9nic drift away from the shock was included in both upstream and downstream regions of the shock. 4) The maximum energy of CR ions accelerated by SNRs can increase signi\ufb01cantly over values predicted without MFA only when the magnetic \ufb01elds are ampli\ufb01ed in a volume spanning the full diffusion length of the highest energy particles. This length scale is larger than the gyroradius of those particles by a factor of (c/3us) when Bohm diffusion is applied. 5) Since the high energy end of the CR proton spectrum is composed of the particles that are injected in the early stages of shock evolution, the \u03b3-ray emission spectrum near the high energy cutoff depends on details of the time-dependent evolution of the CR injection, MFA, and particle escape as well as the dynamical evolution of the SNR shock. Steady shock solutions cannot capture these features properly. The present paper revisits these issues through a wider range of MFA models. 
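The radio and X-ray scalings quoted in this and the preceding paragraph (\u03b3e for GHz synchrotron emission, the cutoff h\u03bdc set by the shock speed, and the cooling break h\u03bdbr) can be bundled into a few lines. A sketch of those order-of-magnitude relations exactly as written in the text:

```python
import numpy as np

m_e, c, e_esu = 9.1094e-28, 2.9979e10, 4.8032e-10

def gamma_radio(nu_Hz, B_gauss):
    """gamma_e ~ sqrt(nu m_e c / (e B)) for synchrotron emission at frequency nu."""
    return np.sqrt(nu_Hz * m_e * c / (e_esu * B_gauss))

def hnu_cutoff_keV(us_kms):
    """X-ray synchrotron cutoff, h nu_c ~ 4.1 keV (u_s / 3000 km/s)^2."""
    return 4.1 * (us_kms / 3000.0)**2

def hnu_break_keV(t_yr, B2_uG):
    """Cooling break, h nu_br ~ 0.12 keV (t / 300 yr)^-2 (B2 / 100 uG)^-3."""
    return 0.12 * (t_yr / 300.0)**-2 * (B2_uG / 100.0)**-3

print(gamma_radio(1.4e9, 100.0e-6))   # ~1e3 for GHz radio in a 100 uG field
print(hnu_cutoff_keV(3000.0))         # 4.1 keV
print(hnu_break_keV(300.0, 100.0))    # 0.12 keV
```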
Because of such interdependencies between MFA and DSA, a self-consistent picture of the full problem requires at the least time dependent MHD simulations combined with a kinetic treatment of nonlinear DSA. That work remains to be done. As a step in this direction, we implemented in Paper I a prescription for MFA and the resulting magnetic \ufb01eld pro\ufb01le based on a simple treatment of Caprioli (2012), of resonant ampli\ufb01cation of Alfv\u00e9n waves by streaming CRs (see Equation (2) below). In the present work we will include three additional recipes for the magnetic \ufb01eld pro\ufb01le in the shock precursor, applying models of Zirakashvili & Ptuskin (2008) and Marcowith & Casse (2010). Moreover, we consider here slightly different model parameters from Paper I. We also present the nonthermal radiation spectra calculated using the simulated CR proton and electron spectra along with the magnetic \ufb01eld strength and the gas density pro\ufb01les. Our main aim is to explore how wave-particle interactions affect \f4 Kang, Jones, and Edmon the energy spectra of CR protons and electrons in nonlinear DSA at SNRs, and their nonthermal radiation spectrum. In the next section we describe the numerical method for the simulations we report, phenomenological models for some key plasma interactions, and model parameters for the SedovTaylor blast wave initial conditions. Our results will be discussed in Section 3, followed by a brief summary in Section 4. 2. NEW DSA SIMULATIONS In this section we brie\ufb02y describe the numerical code and the phenomenological models for wave-particle interactions in DSA theory that we applied. Full details of similar DSA simulations can be found in Paper I. 2.1. CRASH Code for DSA We consider parallel shocks, in which the magnetic \ufb01elds can be roughly decoupled from the dynamical evolution of the underlying \ufb02ow. The pitch-angle-averaged phase space distribution function, f(p), for CR protons and electrons can be described by the following diffusion-convection equation (Skilling 1975a): \u2202g \u2202t +(u +uw)\u2202g \u2202r = 1 3r2 \u2202 \u2202r \u0002 r2(u +uw) \u0003\u0012\u2202g \u2202y \u22124g \u0013 + 1 r2 \u2202 \u2202r \u0014 r2\u03ba(r,y)\u2202g \u2202r \u0015 + p \u2202 \u2202y \u0012 b p2 g \u0013 , (1) where g = f p4, y = ln(p/mpc) is the logarithmic momentum variable, and \u03ba(r, p) is the spatial diffusion coef\ufb01cient3. In the last term b(p) = \u2212dp/dt is the electronic combined synchrotron and iC cooling rate. For protons b(p) = 0. The basic gasdynamic conservation laws with additional terms for the CR pressure, PCR, CR induced non-adiabatic heating, and an isotropic magnetic pressure, PB, are solved using the spherical version of CRASH (Cosmic-Ray Amr SHock) code (Kang & Jones 2006). The CR pressure is calculated self-consistently from the CR proton distribution function, gp(p), determined from the \ufb01nite difference solution to equation (1). The magnetic pressure is calculated according to our phenomenological models for MFA (see the following Section 2.2) rather than from direct solutions of the induction equation or MHD wave transport equations. 2.2. Magnetic Field Ampli\ufb01cation (MFA) CRs streaming across the gas subshock into the shock precursor are known to generate resonant and nonresonant waves via streaming instabilities (e.g., Lucek & Bell 2000; Bell 2004; Zirakashvili & Ptuskin 2008). 
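The diffusion-convection equation (1) quoted earlier in this subsection is garbled by the text extraction; restoring it from the stated terms (advection with u + uw, the adiabatic term, spatial diffusion, and the loss rate b(p) = \u2212dp/dt), and consistent with the definitions g = f p\u2074 and y = ln(p/mpc), it reads:

```latex
\frac{\partial g}{\partial t} + (u + u_w)\,\frac{\partial g}{\partial r}
  = \frac{1}{3r^2}\,\frac{\partial}{\partial r}\!\left[ r^2 (u + u_w) \right]
    \left( \frac{\partial g}{\partial y} - 4g \right)
  + \frac{1}{r^2}\,\frac{\partial}{\partial r}\!
    \left[ r^2 \kappa(r,y)\,\frac{\partial g}{\partial r} \right]
  + p\,\frac{\partial}{\partial y}\!\left( \frac{b}{p^2}\, g \right)
```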
In that event MHD perturbations of short wavelengths (\u03bb \u226arg) being advected into the shock precursor are ampli\ufb01ed by the return charge current induced by escaping high energy CR particles, jCR \u223c e\u03c0usp3 max f(pmax) (Bell et al. 2013). The linear nonresonant instability grows fastest on scales, k\u22121 max \u223cBc/(2\u03c0 jCR) at a rate, \u0393max \u223c(jCR/c) p \u03c0/\u03c1. So, the ampli\ufb01cation of MHD turbulence via nonresonant interactions depends on the \ufb02ux of escaping particles, which, in turn, is governed by the ef\ufb01ciency of the CR acceleration and the shape of the CR spectrum at large momenta. As the instability enters the nonlinear 3 For later discussion it is useful to note in the relativistic regime that the momentum and energy distributions are simply related as n(E) = 4\u03c0p2 f(p) with E = pc. regime (\u03b4B \u223cB0) during passage through the precursor, saturation processes start to limit the growth, and previously subdominant resonant interactions become dominant. This leads to a linear growth of magnetic \ufb02uctuations and extension to larger scales (Pelletier et al. 2006). As noted in the introduction, there may also exist other types of instabilities and dynamo mechanisms leading to the growth of MHD turbulence on scales larger than rg (e.g., Schure et al. 2012). A full understanding of complex interplay between MFA and DSA would require MHD simulations combined with nonlinear DSA in which the return current is calculated self-consistently from the accelerated CR spectrum (see Marcowith & Casse 2010, for a test-particle treatment). Our more limited objective in the present study is to explore broadly the impact of the resulting MFA pro\ufb01le. Consequently, we implement four heuristic models for MFA in the precursor designated M1 M4, each established by simple applications of MFA and compare their consequences in DSA within model SNR shocks. MFA model M1: Caprioli (2012) has shown for strong shocks with Ms \u226b1 and MA \u226b1, that the strength of the turbulent magnetic \ufb01eld ampli\ufb01ed via resonant Alfv\u00e9n waves excited by CR streaming instabilities can be approximated in terms of the \ufb02ow speed (compression) within the shock precursor, as B(r)2 B2 0 = 1 +(1 \u2212\u03c9H)\u00b7 4 25M2 A,0 (1 \u2212U(r)5/4)2 U(r)3/2 , (2) where U(r) = [us \u2212|u(r)|]/us = \u03c10/\u03c1(r) is the normalized \ufb02ow speed with respect to the shock, and MA,0 = us(t)/vA,0 is the Alfv\u00e9nic Mach number for the instantaneous shock speed with respect to the far upstream Alfven speed, vA,0 = B0/\u221a4\u03c0\u03c10. We hereafter designate this magnetic \ufb01eld pro\ufb01le as model M1 and use the subscripts \u20180\u2019, \u20181\u2019, and \u20182\u2019 to denote conditions far upstream of the shock, immediately upstream and downstream of the subshock, respectively. The factor (1\u2212\u03c9H) accounts for local wave dissipation and the ensuing reduction of MFA; i.e., a fraction, \u03c9H, of the energy transferred from CR streaming to MHD waves is dissipated as heat in the plasma by way of nonlinear damping processes. Some damping is likely; we arbitrarily set \u03c9H = 0.5 as a reasonable estimate. In Paper I, this M1 recipe was adopted to represent qualitatively the MFA process in the shock precursor. 
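For reference, Equation (2) for model M1 is easy to tabulate. The helper below is an illustrative sketch (the function name and default values are ours); for U = 0.8 and ω_H = 0.5 it reproduces the estimate B_1 ≈ 0.081 M_A,0 B_0 quoted later in this section.

```python
import numpy as np

def B_amplified_M1(U, M_A0, B0_muG=5.0, omega_H=0.5):
    """MFA model M1 (Equation 2): precursor field B(r) in microgauss,
    given U(r) = rho_0/rho(r) and the Alfvenic Mach number M_A,0.
    A fraction omega_H of the wave energy is assumed dissipated as heat."""
    ratio_sq = 1.0 + (1.0 - omega_H) * (4.0 / 25.0) * M_A0**2 \
               * (1.0 - U**1.25)**2 / U**1.5
    return B0_muG * np.sqrt(ratio_sq)

# e.g. a moderate precursor with U = 0.8 at M_A,0 = 180:
# B_amplified_M1(0.8, 180.0) ~ 0.081 * 180 * 5 muG ~ 73 muG
```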
As we will show below, Alfv\u00e9nic drift of scattering centers upstream at the local Alfv\u00e9n speed, vA = B(r)/\u221a4\u03c0\u03c1(r), along the shock normal can steepen the CR spectrum signi\ufb01cantly in the presence of MFA. On the other hand, the magnetic \ufb01eld in the M1 MFA model (Equation (2)) increases gradually through the shock precursor from B0 at the FEB to B1 at the subshock (see also Figure 1, below). Consequently, the drift speed increases slowly through the precursor, so that the highest energy CRs, which diffuse on scales \u03ba(pmax)/us \u223cL, are scattered mostly by waves with the relatively slow drift speed, vA,0. (The length L, de\ufb01ned below, measures the full width of the precursor.) That makes Alfv\u00e9nic drift and associated CR spectral steepening ineffective at the high energy end of the CR spectrum. MFA model M2: On the other hand, Caprioli (2012) pointed out that Equation (2) does not account for several important effects, such as excitation of the nonresonant streaming instability that can rapidly amplify the \ufb01eld at the leading edge of the precursor. To allow for such in\ufb02uences, he proposed an alternative simple MFA pro\ufb01le in which the entire \fDiffusive Shock Acceleration at SNRs 5 upstream, precursor region, 0 < (r \u2212rs) < L, has the saturated magnetic \ufb01eld, B1; i.e., B(r) = B1, (3) where B1 is calculated according to Equation (2). We hereafter designate that MFA pro\ufb01le as model M2. With the help of this saturated, uniform precursor magnetic \ufb01eld pro\ufb01le, Caprioli (2012) obtained a CR energy spectrum steeper than E\u22122 from nonlinear DSA calculations of SNRs. MFA model M3: Zirakashvili & Ptuskin (2008) carried out MHD simulations following the evolution of the nonresonant current instability through shock precursors. Their MFA pro\ufb01le was approximately exponential (see their Figure 3). We adapt this behavior into the simple form (designated, hereafter, model M3) for 0 \u2264(r \u2212rs) \u2264L, B(r) = B0 +B1 \u00b7( B1 \u03b4B0 )\u2212(r\u2212rs)/L, (4) where \u03b4B0 = 0.01 is an assumed, arbitrary strength of the initial background magnetic \ufb01eld perturbations and L is the distance from the shock to the upstream boundary (see Section 2.4). Again, for the maximum magnetic \ufb01eld strength immediately upstream of the shock, we adopted B1 calculated according to Equation (2). MFA model M4: As a fourth model for MFA in the precursor, we adopt a linear magnetic strength pro\ufb01le as in Marcowith & Casse (2010) (see their Figure 2): so for 0 \u2264 (r \u2212rs) \u2264L\u2217, B(r) = B1 \u2212(B1 \u2212B0)\u00b7 r \u2212rs L\u2217 (5) where L\u2217= 0.9L is used somewhat arbitrarily. For (r \u2212rs) > L\u2217, B(r) = B0. This model represents an exponential growth to \u03b4B \u223cB0 on a short time scale at r \u223cL\u2217by nonresonant modes followed by a linear growth to B1 through combined nonresonant and resonant mode ampli\ufb01cation. The pro\ufb01les M1 through M4 can be compared in Figure 1. In the case of \u201cstrongly modi\ufb01ed\u201d shocks (e.g., U(rs) \u226a1), magnetic \ufb01eld energy density may increase to a signi\ufb01cant fraction of the upstream kinetic energy density, (1/2)\u03c10u2 s, which is not compatible with observations. V\u00f6lk et al. (2005) have found postshock magnetic pressures, B2 2/(8\u03c0), in several young, shell-type SNRs that are all several % of the upstream ram pressures, \u03c10u2 s . 
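The three alternative precursor profiles, Equations (3)-(5), can be written compactly as functions of the distance x = r - r_s from the subshock. The sketch below is illustrative only (model M1 is omitted because it requires U(r) from the dynamical solution); B_1 is assumed to have been obtained from Equation (2).

```python
import numpy as np

def precursor_B(x, model, B0, B1, L, delta_B0=0.01, Lstar_frac=0.9):
    """Precursor field at distance x = r - r_s (0 <= x <= L) for MFA
    models M2-M4 (Equations 3-5).  B0, B1, delta_B0 in the same units."""
    x = np.asarray(x, dtype=float)
    if model == "M2":                       # uniform, saturated profile
        return np.full_like(x, B1)
    if model == "M3":                       # exponential profile
        return B0 + B1 * (B1 / delta_B0) ** (-x / L)
    if model == "M4":                       # linear rise over 0 <= x <= L* = 0.9 L
        Lstar = Lstar_frac * L
        return np.where(x > Lstar, B0, B1 - (B1 - B0) * x / Lstar)
    raise ValueError("model must be one of 'M2', 'M3', 'M4'")
```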
Using this for guidance in the simulations we restricted the ampli\ufb01cation factor within the precursor by the condition that (B2 1/8\u03c0) \u2272(B2 sat/8\u03c0) \u22610.005\u03c10u2 s. (6) For typical Type Ia SNRs in the warm ISM with nH = 0.3cm\u22123 (see the section 2.5 and Table 1), the unmodi\ufb01ed SedovTaylor solution is UST = 5.65 \u00d7 103 km s\u22121(t/to)\u22123/5, so Bsat = 168 \u00b5G(t/to)\u22123/5. In the case of a moderate precursor with U1 \u22480.8 or \u03c11/\u03c10 \u22481.25, for \u03c9H = 0.5 and B0 = 5 \u00b5G Equation (2) gives B1 \u22480.081MA,0B0 \u2248136 \u00b5G(t/to)\u22123/5. So unless the shock is modi\ufb01ed quite strongly, that is, U1 < 0.8, B1 should not exceed the saturation limit Bsat for the warm ISM models considered here. We note that the relation (6) was found observationally for young SNRs, so its validity for much slower shocks at late Sedov stage has not been established. Note that Equation (6) is equivalent to MA,1 = u1/vA,1 \u227310 \u221a\u03c31 , (7) where \u03c31 = u0/u1 measures compression through the precursor. This is useful in evaluating the in\ufb02uence of Alfv\u00e9nic drift within the precursor (see Equations 8)-(11)). We assume that the turbulent magnetic \ufb01eld is isotropic as it comes into the subshock and that the two transverse components are simply compressed across the subshock. The immediate postshock \ufb01eld strength is estimated by B2/B1 = p 1/3 +2/3(\u03c12/\u03c11)2. If subshock compression is large, B2/B1 \u22480.8(\u03c12/\u03c11). From Equation (6) this leads, as a rule of thumb, roughly to B2 2/(8\u03c0)/(\u03c10u2 s) \u22723% (see Figure 6). It is not well understood how the magnetic \ufb01elds diminish downstream in the \ufb02ow behind the forward shock (e.g. Pohl et al. 2005). We assume for simplicity that the postshock \ufb01eld strength behaves as B(r) = B2 \u00b7 \u0002 \u03c1(r)/\u03c12 \u0003 for r < rs. 2.3. Alfv\u00e9nic Drift As noted earlier, the Alf\u00b4 ven waves generated by the streaming instability drift along the local mean magnetic \ufb01eld with respect to the background plasma \ufb02ow in the direction opposite to the CR gradient. Ahead of the shock these waves would propagate into the background medium; behind a spherical shock those waves would propagate towards the SNR interior. Since scattering isotropizes the CRs with respect to the scattering centers, the effective velocity difference that the particles experience across the shock is reduced, if those waves dominate CR scattering. The reduced velocity jump softens (steepens) the CR momentum spectrum compared to the testparticle result for a shock without a precursor or Alfv\u00e9nic drift, q0 = 3u0/(u0 \u2212u2), where q = \u2212\u2202ln f/\u2202ln p. In a CR modi\ufb01ed shock with a precursor and Alfv\u00e9nic drift the slope of the momentum distribution function is momentum dependent, re\ufb02ecting the variation with momentum of the particle diffusion length and the different \ufb02ow conditions sampled by the particles as a result. Near the momentum extremes the slopes can be estimated for steady, plane shocks as: qs \u2248 3(u1 +uw,1) (u1 +uw,1)\u2212(u2 +uw,2), (8) for the low energy particles just above the injection momentum (p \u223cpinj), and qt \u2248 3(u0 +uw,0) (u0 +uw,0)\u2212(u2 +uw,2), (9) for the highest energy particles just below the cutoff; i.e., p \u2272 pmax. Here u0 = \u2212us, u1 = \u2212us/\u03c31, u2 = \u2212us/\u03c32. 
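Before continuing with the Alfvénic drift discussion, the saturation limit of Equation (6) and the postshock compression of the turbulent field can be checked with a few lines of arithmetic. This is a sketch under the stated assumptions (fully ionized gas with the composition of Section 2.5; cgs units); the function names are ours.

```python
import numpy as np

def B_sat_muG(n_H, u_s_kms):
    """Saturation field of Equation (6): B_1^2/(8 pi) <= 0.005 rho_0 u_s^2."""
    rho0 = 2.34e-24 * n_H                  # g cm^-3
    u_s = u_s_kms * 1.0e5                  # cm s^-1
    return np.sqrt(8.0 * np.pi * 0.005 * rho0) * u_s / 1.0e-6

def B2_over_B1(rho2_over_rho1):
    """Postshock field for isotropic upstream turbulence whose two transverse
    components are compressed across the subshock."""
    return np.sqrt(1.0 / 3.0 + (2.0 / 3.0) * rho2_over_rho1**2)

# Warm-ISM check: n_H = 0.3, u_s = 5650 km/s gives B_sat ~ 168 muG, and a
# subshock compression of ~4 gives B_2/B_1 ~ 3.3, close to the 0.8 * rho2/rho1
# rule of thumb quoted above.
```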
To make the point of the impact of Alfv\u00e9nic drift with minimal complication, we assume for drift velocities simply uw,0 = +vA,0, uw,1 = +vA,1, and uw,2 = \u2212vA,2, with \u03c31 = \u03c11/\u03c10, \u03c32 = \u03c12/\u03c10. The local A\ufb02v\u00e9n speed is de\ufb01ned as vA,\u2217= B\u2217/\u221a4\u03c0\u03c1\u2217(where \u2217= 0, 1, 2). The steepening of CR spectrum due to Alfv\u00e9nic drift can obviously be ignored for high Alfv\u00e9nic Mach numbers, MA,\u2217= us/vA,\u2217\u226b1. As noted earlier, postshock Alfv\u00e9nic drift is frequently assumed to vanish; i.e., uw,2 = 0, based on the argument that postshock turbulence is likely to be balanced (e.g., Jones 1993). Here we want to emphasize, on the other hand that the shocks in this discussion are not really steady, plane shocks, but evolving, spherical shocks. There should generally be a strong CR gradient behind a spherical shock to drive a streaming instability, so that one could reasonably expect uw,2 < 0 (e.g., Zirakashvili & Ptuskin 2012). If this happens, it could signi\ufb01cantly in\ufb02uence the CR spectrum. \f6 Kang, Jones, and Edmon To see the dependencies on shock modi\ufb01cation and Alfv\u00e9nic drift more clearly, suppose |uw,\u2217/u\u2217| \u226a1 (where \u2217= 0,1,2) and (\u03c31 \u22121)/(\u03c32 \u22121) \u226a1, so that we can expand the corrections of qs and qt compared to the \ufb01ducial slope, q0 = 3\u03c32/(\u03c32 \u22121). Then keeping only lowest order corrections to q0 we can write, qs \u2248q0 \u0014 1 + \u03c31 \u22121 \u03c32 \u22121 + \u03c31 \u03c32 \u22121 \u0012uw,2 u2 \u2212uw,1 u1 \u0013\u0015 , (10) and qt \u2248q0 \u0014 1 + 1 \u03c32 \u22121 \u0012uw,2 u2 \u2212uw,0 u0 \u0013\u0015 . (11) Compression through the precursor steepens qs compared to qt, illustrating the concavity of CR spectra usually predicted by nonlinear DSA. The slope, q0, set by the full compression, \u03c32, is, of course, \ufb02atter than the slope in an unmodi\ufb01ed shock of similar sonic Mach number. One can also see that the slopes, qs and qt, are increased compared to the case with no Alfv\u00e9nic drift, if scattering centers drift upwind ahead of the shock and downwind behind the shock. Suppose for the moment that the wave drift speed scales simply with the local Alfv\u00e9n speed, that B1 \u226bB0 (see Equation (2) and Figure 6) and for simplicity that B2/B1 \u223c \u03c12/\u03c11. Then the in\ufb02uence of drift just upstream of the subshock is large compared to that just inside the FEB, since |(uw,1/u1)/(uw,0/u0)| \u223c(B1/B0)\u03c31/2 1 \u226b1. One can also see that the downstream drift has greatest in\ufb02uence on both qs and qt, because (uw,2/u2)/|(uw,1/u1|) \u223c(\u03c32/\u03c31)3/2 \u226b1 and (\u03c31 \u22121)/(\u03c32 \u22121) \u226a1 in the simulations we present here. We note for clarity that the MFA constraint given in Equation (7) limits the preshock Alfv\u00e9nic drift correction term, |(uw,1/u1)|, in these simulations to |(uw,1/u1)| \u22720.1\u221a\u03c31. From this discussion it should be obvious that the presence and nature of downstream Alfv\u00e9nic drift has a very important effect on the DSA outcomes. It is also clear under these circumstances that these expressions always satisfy qs \u2265qt; i.e., measured at the extremes the nonlinear spectra will remain concave, at least in a steady, plane shock, even when Alfv\u00e9nic drift is included on both sides of the shock. 
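The exact steady plane-shock slopes of Equations (8) and (9), together with the sign conventions just introduced, translate into a short function. The following is an illustrative sketch (speeds may be in any consistent units; the name is ours); setting the Alfvén speeds to zero recovers the usual slopes without drift.

```python
def drift_slopes(u_s, sigma1, sigma2, vA0, vA1, vA2):
    """Momentum slopes q = -dln f/dln p of Equations (8)-(9) for a steady,
    plane, CR-modified shock with Alfvenic drift.  Shock-frame flow speeds are
    u0 = -u_s, u1 = -u_s/sigma1, u2 = -u_s/sigma2; the wave frames drift
    upstream ahead of the shock (u_w0 = +vA0, u_w1 = +vA1) and downstream
    behind it (u_w2 = -vA2; set vA2 = 0 to switch postshock drift off)."""
    u0, u1, u2 = -u_s, -u_s / sigma1, -u_s / sigma2
    uw0, uw1, uw2 = +vA0, +vA1, -vA2
    q_s = 3.0 * (u1 + uw1) / ((u1 + uw1) - (u2 + uw2))   # near p_inj
    q_t = 3.0 * (u0 + uw0) / ((u0 + uw0) - (u2 + uw2))   # near p_max
    return q_s, q_t

# e.g. drift_slopes(3000., 1.25, 4.0, 17., 100., 100.) gives q_s ~ 4.8 and
# q_t ~ 4.2, steeper than the drift-free values of ~4.4 and 4.0, with q_s >= q_t.
```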
On the other hand, spherical shocks are not steady, and relative postshock \ufb02ow speeds increase with distance downstream, opening up a wider range of possible outcomes. We shall see, in fact, that in our SNR simulations the CR spectra near the maximum, cutoff momentum can be signi\ufb01cantly \ufb02atter than one would predict from the above relations. That feature does result from strong evolution of the shock properties at early times that in\ufb02uences subsequent evolution. Shock and CR properties in spherical blast waves are not a simple superposition of intermediate properties at the \ufb01nal time. When Alfv\u00e9nic drift has been included in DSA models, it has been customary to assumed the drift speed is the local Alfv\u00e9n speed along the shock normal, vA = B/\u221a4\u03c0\u03c1, based on the total magnetic \ufb01eld strength. However, when the \ufb01elds become strongly ampli\ufb01ed by streaming instabilities the mean \ufb01eld direction is less clear, even when the upstream \ufb01eld is along the shock normal (e.g., Reville & Bell 2013). Then the drift speed should be reduced compared to the Alfv\u00e9n speed expressed in terms of the total \ufb01eld strength. In order to allow for this we model the local effective Alfv\u00e9nic drift speed simply as vA(r) = B0 + fA[B(r)\u2212B0] \u221a4\u03c0\u03c1(r) , (12) where the parameter fA \u22641 is a free parameter (Ptuskin et al. 2010; Lee et al. 2012). For the simulations presented here fA = 0.5 whereever Alfv\u00e9nic drift is active. The default in our models turns off Alfv\u00e9nic drift in the post shock \ufb02ow; i.e., we set uw,2 = 0. On the other hand, to allow for possible in\ufb02uences of postshock streaming in these spherical shocks we also consider the case of uw,2(r) = \u2212vA(r). Those models are identi\ufb01ed with the subscript tag, \u2019ad\u2019 (see Table 1). All the simulations presented here apply Alfv\u00e9nic drift upstream of the shock, with uw,1(r) = +vA(r). 2.4. Recipes for Particle Injection, Diffusion, and Escape We apply a thermal leakage model for CR injection in which only suprathermal particles in the tail of the thermal, Maxwellian distribution above a critical rigidity are allowed to cross the shock from downstream to upstream. CR protons are effectively injected above a prescribed injection momentum, pinj \u22481.17mp(us/\u03c32)(1 + 1.07\u01eb\u22121 B ), where \u01ebB is an injection parameter de\ufb01ned in Kang et al. (2002). We adopt \u01ebB = 0.2 \u22120.215 here, which leads to the injected proton fraction, \u03be = ncr,2/n2 \u227210\u22124. In Paper 1, \u01ebB = 0.23 was adopted, which led a higher injection fraction than we allow here and higher CR acceleration ef\ufb01ciencies, as well, that were nearly in the saturation regime. Electrons are expected to be injected with a much smaller injection rate than protons, since suprathermal electrons have much smaller rigidities at a given energy. Some preacceleration process is likely to control this (Reynolds 2008). Since this physics is still poorly understood, we follow the common practice of \ufb01xing the injected CR electron-to-proton ratio to a small number, Ke/p \u223c10\u22124 \u221210\u22122 (e.g., Morlino & Caprioli 2012). Because of the small number of particles, the electronic CR component is dynamically unimportant; we neglect its feedback in these simulations. In these simulations injected proton and electron CRs are accelerated in the same manner at the same rigidity, R = pc/Ze. 
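The drift-speed prescription of Equation (12) and the thermal-leakage injection momentum described above reduce to two one-line helpers. The sketch below assumes cgs units throughout (fields in gauss, densities in g cm^-3, speeds in cm s^-1); the function names and defaults are ours.

```python
import numpy as np

M_P = 1.6726e-24   # proton mass in g

def v_A_eff(B_r, rho_r, B0, f_A=0.5):
    """Effective Alfvenic drift speed of Equation (12): only a fraction f_A
    of the amplified (turbulent) part of the field contributes to the drift."""
    return (B0 + f_A * (B_r - B0)) / np.sqrt(4.0 * np.pi * rho_r)

def p_inj(u_s, sigma2, eps_B=0.2):
    """Thermal-leakage injection momentum, p_inj ~ 1.17 m_p (u_s/sigma_2)
    (1 + 1.07/eps_B), in cgs momentum units; u_s/sigma_2 is the postshock
    flow speed in the shock frame."""
    return 1.17 * M_P * (u_s / sigma2) * (1.0 + 1.07 / eps_B)
```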
For the spatial diffusion coef\ufb01cient, we adopt a Bohmlike momentum dependence (\u03baB \u223c(1/3)rgv) with \ufb02attened non-relativistic dependence to reduce computational costs (Kang & Jones 2006). Since acceleration to relativistic energies is generally very quick, this low energy form has little impact our results. In particular we set \u03ba(r, p) = \u03ban( B0 B\u2225(r))\u00b7( p mpc)\u00b7K(r), (13) where \u03ban = mpc3/(3eB0) = (3.13 \u00d7 1022cm2s\u22121)B\u22121 0 (B0 is expressed in units of microgauss) and the parallel component of the local magnetic \ufb01eld, B\u2225(r), is prescribed by our MFA models discussed above. The function K(r) \u22651 is intended to represent a gradual decrease in scattering ef\ufb01ciency relative to Bohm diffusion (so, \u03bbs > rg) upstream of the subshock due to such in\ufb02uences as predominantly sub-gyro-scale turbulent \ufb02uctuations. All the simulations set K(r) = 1 for r < rs (postshock region). But, except for one model (WM1Bohm), where K(r) = 1, we use for r \u2265rs, K(r) = exp[ck \u00b7(r \u2212rs) rs ]. (14) The numerical factor, ck = 20, is chosen somewhat arbitrarily. It can be adjusted to accommodate a wide range of effects (Zirakashvili & Ptuskin 2012). In these simulations we explicitly provide for particle escape from the system by implementing a so-called \u201cFree Escape Boundary\u201d or \u201cFEB\u201d a distance, L, upstream of the \fDiffusive Shock Acceleration at SNRs 7 shock. That is, we set f(rFEB, p) = 0 at rFEB(t) = rs + L = (1 + \u03b6)rs(t), where \u03b6 = 0.1 \u22120.5. Once CRs are accelerated to high enough momenta, pmax, that the diffusion length lmax = \u03ba(pmax)/us \u223cL(t), the length L becomes the effective width of the shock precursor. For \u03b6 = 0.1 and the shock radius, rs = 3 pc, the distance of the FEB from the shock is L = 0.3 pc, which is comparable to the diffusion length of PeV protons if B \u223c50 \u00b5G and us \u223c6700 km s\u22121. Note that, for ck = 20 and \u03b6 = 0.1 (0.25), the value of K(r) increases from unity at the shock to e2 = 7.4 (e5 = 150) at the FEB. The time-integrated spectrum of particles escaped from the shock from the start of the simulation, ti to time t is given by \u03a6esc(p) = Z t ti 4\u03c0rFEB(t)2[\u03ba\u00b7|\u2202f \u2202r |]rFEB dt. (15) Finally, we note that the rate of non-adiabatic gas heating due to wave dissipation in the precursor is prescribed as W(r,t) = \u2212\u03c9H \u00b7 vA(r)\u2202PCR/\u2202r, where a \ufb01ducial value of \u03c9H = 0.5 was assumed in these simulations. 2.5. Sedov-Taylor Blast Waves For a speci\ufb01c shock context we consider a Type Ia supernova explosion with the ejecta mass, Mej = 1.4M\u2299, expanding into a uniform ISM. All models have the explosion energy, Eo = 1051 ergs. Previous studies have shown that the shock Mach number is one of the key parameter determining the evolution and the DSA ef\ufb01ciency (e.g., Kang 2010), so two phases of the ISM are considered: the warm phase with nH = 0.3cm\u22123 and T0 = 3 \u00d7 104K (\u2018W\u2019 models), and the hot phase with nH = 0.01 cm\u22123 and T0 = 106K (\u2018H\u2019 models). The background gas is assumed to be completely ionized with the mean molecular weight, \u00b5 = 0.61. The background magnetic \ufb01eld strength is set to be B0 = 5 \u00b5G. For the warm ISM (\u2019W\u2019) models the upstream Alfv\u00e9n speed is then vA,0 = 16.8 km s\u22121. The associated shock Alfv\u00e9n Mach number is then MA,0 \u2248180(us/3000 km s\u22121). 
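The Bohm-like diffusion coefficient of Equations (13)-(14) and the free-escape-boundary comparison can likewise be spelled out numerically. This is an illustrative sketch with our own function names; the quoted example (PeV protons, B ∼ 50 µG, u_s ∼ 6700 km s^-1) indeed gives a diffusion length of ∼0.3 pc, comparable to L for ζ = 0.1 and r_s = 3 pc.

```python
import numpy as np

PC = 3.086e18                                   # cm

def kappa_cm2s(p_over_mpc, B_par_muG, x_over_rs, B0_muG=5.0, c_k=20.0):
    """Diffusion coefficient of Equations (13)-(14) in cm^2 s^-1.
    x_over_rs = (r - r_s)/r_s; K(r) = exp[c_k (r - r_s)/r_s] upstream and
    K = 1 downstream (x_over_rs <= 0)."""
    kappa_n = 3.13e22 / B0_muG                  # m_p c^3 / (3 e B_0)
    x = np.asarray(x_over_rs, dtype=float)
    K = np.where(x > 0.0, np.exp(c_k * x), 1.0)
    return kappa_n * (B0_muG / B_par_muG) * p_over_mpc * K

def diffusion_length_pc(p_over_mpc, B_muG, u_s_kms):
    """l_max = kappa(p_max)/u_s evaluated at the shock (K = 1), to be
    compared with the FEB distance L = zeta * r_s."""
    return kappa_cm2s(p_over_mpc, B_muG, 0.0) / (u_s_kms * 1.0e5) / PC

# PeV protons: p/(m_p c) ~ 1e6, B ~ 50 muG, u_s ~ 6700 km/s  ->  ~0.3 pc
```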
For the hot ISM (\u2019H\u2019) models, vA,0 = 183 km s\u22121, and MA,0 \u224816.4(us/3000 km s\u22121). The model parameters for the DSA simulations are summarized in Table 1. The second and third characters of the model name in column one indicate the MFA model pro\ufb01les; namely, \u2019M1\u2019 for the velocity-dependentpro\ufb01le given by Equation (2), \u2019M2\u2019 for the uniform, saturated pro\ufb01le given by Equation (3), \u2019M3\u2019 for the exponential pro\ufb01le given by Equation (4), and \u2019M4\u2019 for the linear pro\ufb01le given by Equation (5). The downstream default in all models sets uw,2 = 0; i.e., downstream Alfv\u00e9nic drift is turned off. Where downstream Alfv\u00e9nic drift is operating; i.e., where uw,2 = \u2212vA,2, the model labels include the subscript \u2019ad\u2019. In two models, WM1li and HM1li models, the injection rate is lowered by reducing slightly the injection parameter from \u01ebB = 0.215 to \u01ebB = 0.2. The WM1Bohm model is the same as the WM1 model except that the function K(r) = 1 for r > rs, instead of the defaut form given in Equation (13). Models WM1feb2 and WM1feb3 are included to study the effects of different FEB locations; namely, L = rFEB \u2212rs = 0.25rs and L = 0.5rs, respectively. For all models, fA = 0.5 is adopted for the Alfv\u00e9n drift parameter and \u03c9H = 0.5 for the wave dissipation parameter. The physical quantities are normalized, both in the numerical code and in the plots below, by the following constants: ro = \u00003Mej/4\u03c0\u03c1o \u00011/3, to = \u0000\u03c1or5 o/Eo \u00011/2, uo = ro/to, \u03c1o = (2.34 \u00d7 10\u221224gcm\u22123)nH, and Po = \u03c1ou2 o. For r = ro the mass swept up by the forward shock equals the ejected mass, Mej. For the warm ISM models, ro = 3.18pc and to = 255 years, while for the hot ISM models, ro = 9.89pc and to = 792 years. The true Sedov-Taylor (ST hereafter) dynamical phase of SNR evolution is established only after the reverse shock is re\ufb02ected at the explosion center. So, the dynamical evolution of young SNRs is much more complex than that of the ST similarity solution that we adopt for a simple initial condition at ti. In particular we start each simulation from the ST similarity solution, rST/ro = 1.15(t/to)2/5, without the contact discontinuity and the reverse shock. Although the early dynamical evolution of the model SNRs is not accurate in these simulations, the evolution of the forward shock is still qualitatively representative (e.g., Dohm-Palmer & Jones 1996). Our main goal is to explore how time dependent evolution of MFA, Afv\u00e9nic drift and FEB affect the high energy end of the CR spectra rather than to match the properties of a speci\ufb01c SNR. On the other hand, since the highest energy CRs generally correspond to those injected into the DSA process at very early times, before the ST stage, we begin the calculations at ti/to = 0.2. We then follow the evolution of the forward shock to t/to = 10, which corresponds roughly to the beginning of the true ST evolutionary phase. The spherical grid used in the simulations expands in a comoving way with the forward shock (Kang & Jones 2006). Continuation conditions are enforced at the inner boundary, which is located at r/rs = 0.1 at the start of each simulation and moves outward along with the forward shock. At the outer boundary the gasdynamic variables are continuous, while the CR distribution function is set by the FEB condition. 2.6. 
Emissions The nonthermal radio to \u03b3-ray emissions expected in these simulations from CR electrons and from CR proton secondary products, along with thermal Bremsstrahlung were computed using an updated version of the COSMICP (renamed AURA) code described in Edmon et al. (2011). We include only information essential for clarity here. Speci\ufb01cally we include for CR electrons synchrotron, iC and Bremsstrahlung. The iC emissions properly account for electron recoil in the KleinNishina limit. Hadronic interactions include inelastic protonproton and photon-proton collisions. The low energy protonproton cross-section was updated using Kamae et al. (2006). Helium and heavy ion contributions are ignored. The photopion production rates and iC rates are based on a background radiation \ufb01eld with a total energy density 1.04 eV cm\u22123 including the cosmic microwave background, plus contributions from cold dust, old yellow and young blue stars, as described in Edmon et al. (2011). 3. DSA SIMULATION RESULTS 3.1. CR Properties Figure 1 shows at times t/t0 = 0.5,1,2,5 the ampli\ufb01ed magnetic \ufb01eld in the radial coordinate normalized by the shock radius, r/rs(t), and the distribution function of CR protons at the subshock, gp(xs), for warm ISM simulations with the four different MFA models: WM1, WM2, WM3, and WM4. In the \ufb01gures below the momentum is expressed in units of mpc and the particle distribution function is de\ufb01ned in such a way that R \u221e 0 4\u03c0p2 f0(p)dp = nH far upstream. So both f(p) and g(p) = f(p)p4 are given in units of nH, while the volumeintegrated distribution functions F(p) = 4\u03c0 R f(r, p)r2dr and G(p) = p4F(p) are given in units of nHr3 o. \f8 Kang, Jones, and Edmon During the early stage (t/to < 1) the precursor grows to a time-asymptotic structure and the magnetic \ufb01elds are ampli\ufb01ed rapidly to saturation. Afterwards, the value for B1 declines as B1 \u221dMA,0 \u221d(t/to)\u22123/5, as the shock slows down in time. The time evolution of MFA along with other measures, including the precursor modi\ufb01cation and the postshock CR pressure, will be discussed below in Figure 6. The distance from the subshock to the FEB location is the same, L = 0.1rs, in these models, but the magnetic \ufb01eld pro\ufb01les differ in shape. The \u201ceffective width\u201d of the magnetic \ufb01eld precursor (measured, say, by the half-power width) would increase through the sequence M1, M3, M4, M2. One can see that the DSA ef\ufb01ciency increases among the WM1-4 models in the same order at early times (t/t0 \u22721). This ef\ufb01ciency in\ufb02uence is re\ufb02ected in Figure 1 most clearly in the variation in pp,max. This demonstrates that pp,max is determined not only by the strength of B1 but also by the width of the magnetic \ufb01eld precursor. As the acceleration proceeds from ti/to = 0.2, the maximum momentum increases and reaches its highest value at t/to \u223c 0.5, depending in detail on the pro\ufb01le of B(r,t). Afterwards, pp,max decreases in time as the shock slows down and MFA becomes weaker (as shown in the \ufb01gure). For the M2, M3 and M4 models, the highest proton energy reaches Ep,max \u223c1 PeV at t/to \u223c0.5. For the M1 model, the magnetic \ufb01eld precursor is too narrow to provide a signi\ufb01cant enhancement of DSA over that from the upstream \ufb01eld, B0, so the proton energy increases only to Ep,max \u223c0.05 PeV. 
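As a quick consistency check of the blast-wave normalization used in Section 2.5 and in the figures, the constants r_o and t_o can be recomputed from M_ej, E_o and n_H. The sketch below (constants in cgs; function name ours) reproduces the quoted values of (3.18 pc, 255 yr) for the warm ISM and (9.89 pc, 792 yr) for the hot ISM.

```python
import numpy as np

M_SUN, PC, YR = 1.989e33, 3.086e18, 3.156e7     # cgs

def st_normalization(n_H, M_ej_Msun=1.4, E_o=1.0e51):
    """Normalization constants r_o [pc], t_o [yr], u_o [km/s] for the
    Sedov-Taylor setup: r_o = (3 M_ej / 4 pi rho_o)^(1/3),
    t_o = (rho_o r_o^5 / E_o)^(1/2), u_o = r_o / t_o."""
    rho_o = 2.34e-24 * n_H                      # g cm^-3
    r_o = (3.0 * M_ej_Msun * M_SUN / (4.0 * np.pi * rho_o)) ** (1.0 / 3.0)
    t_o = np.sqrt(rho_o * r_o**5 / E_o)
    return r_o / PC, t_o / YR, (r_o / t_o) / 1.0e5

print(st_normalization(0.3))    # warm ISM: ~(3.18 pc, 255 yr, ~12000 km/s)
print(st_normalization(0.01))   # hot ISM:  ~(9.89 pc, 790 yr, ~12000 km/s)
```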
Intuitively, the high momentum end of gp(rs) is obviously populated by particles injected during the earliest period of very ef\ufb01cient DSA, when the shock was especially strong. At the beginning of the simulations (ti/to = 0.2), the shock is fast and strong with us \u22481.5\u00d7104 km s\u22121, Ms,0 \u2248580, and MA,0 \u2248890, so the initial slope of fp(rs) starts with q = 4. As the precursor develops, the CR spectrum becomes concave (dq/d ln p < 0). Although Alfv\u00e9nic drift increases both qs and qt as MFA proceeds in the precursor, these effects are small initially because of large values of MA,1. In the WM2 model, with a uniform precursor magnetic \ufb01eld, for example, MA,1 \u2248MA,0 \u224820 for B1 = 200 \u00b5G and us \u22481.5\u00d7104 km s\u22121, resulting in only small corrections to qs and qt (see Equations (10)-(11)). Even in the later stage (ti/to \u22731), MA,1 > 10 according to Equation (7), so steepening of CR spectra due to Alfv\u00e9nic drift remains moderate. As can be seen in Figure 1, the concavity of gp is gradually reduced in time, but does not disappear entirely in these models. Although the early evolution of SNRs may not be realistic in our simulations, this exercise demonstrates that it is important to follow the initial evolution of SNRs, including us(t) and B(r,t), in order to establish the CR spectrum around Ep,max. Figure 2 shows the spatial pro\ufb01le of the CR pressure, the volume-integrated distribution functions and Gp,e(p) for protons and electrons at t/to = 1, and the time-integrated spectrum of escaped protons, \u03a6esc(p), at t/to = 10 for the same models shown in Figure 1. Once again, the proton spectrum, Gp, clearly depends sensitively on the magnetic \ufb01eld pro\ufb01le in the precursor, as well as its evolution. The slope of Gp(p) near pp,max approaches q \u223c3.8 for the WM1 model and q \u223c3 in the WM2 WM4 models (see also Figure 5), which results from the early nonlinear evolution of DSA. Hence the \u03c00 decay emission in these models could not explain observed \u03b3-ray spectra of some young SNRs, which indicate proton spectra steeper than Fp(p) \u221dp\u22124. In the lower left panel of Figure 2, the sum of Gp(p,t) + \u03a6esc(p,t) is shown in thick lines, while \u03a6esc(p,t) itself is shown in thin lines. Assuming that the CRs included in Gp will be injected eventually into the ISM when the SNR weakens and merges to the ambient medium, this sum at large t would represent the total, time-integrated proton spectrum that the model SNR injects into the ISM. The shape of this integrated spectrum in the four models would be mostly inconsistent with the Galactic CR proton spectrum around the \ufb01rst knee, because the spectrum is too \ufb02at just below the cutoff. As demonstrated in the lower right panel of Figure 2, the volume integrated electron spectrum, Ge(p), steepens approximately by one power of the momentum compared to the proton spectrum due to radiative cooling above a break momentum, pe,br(t) \u221d(B2 2 t)\u22121. In the WM1 model (solid line), there is an additional peak of Ge(p) at a higher energy, which is close to the maximum momentum at the subshock, pe,max. This component comes from the electron population in the upstream region, which cools much less ef\ufb01ciently due to weaker magnetic \ufb01eld there (Edmon et al. 2011). In the other models, magnetic \ufb01elds are stronger in the precursor, so upstream electrons have cooled as well. 
One can see that in these warm-ISM models the X-ray synchrotron emitting electrons would cut off at the break energy, Ee,br \u223c0.5 \u22121 TeV at t/to = 1, depending mainly on the strength of B2. Figure 3 shows for comparison the results of the warm ISM models with downstream Alfv\u00e9nic drift; i.e., models \u2019WM*ad\u2019. Compared to the WM* models shown in Figure 2, due to the inclusion of downstream Alfv\u00e9nic drift, the CR acceleration is less ef\ufb01cient, the precursor is weaker, and the shock decelerates less. As a result, the shock radius is slightly larger in these models, compared to the WM* models. A weaker precursor also leads to weaker MFA, with reduced B1 \u2272100 \u00b5G and B2 \u2248300 \u00b5G at t/to = 1 (see Figure 6). In the WM2ad, WM3ad, and WM4ad models, the high energy end of Fp is still much \ufb02atter than p\u22124, while it is close to p\u22124 in WM1ad model. Below p < 104mpc, the CR proton spectra in the WM2ad, WM3ad and WM4ad models become as steep as p\u22124.2. Since in those models the proton spectral slopes become as \ufb02at as p\u22123 just below their cutoffs, the concave curvatures in the spectra become more severe than in the models without downstream Alfv\u00e9nic drift, WM*. These models, because of their broader magnetic \ufb01eld precursors, experience more rapid acceleration early on. Later, as the shocks slow down, and Alfv\u00e9nic drift becomes more signi\ufb01cant, and acceleration of CRs injected at later times is less ef\ufb01cient. This effectively leaves an \u201cisland\u201d of CRs at the top of the momentum distribution. Once again, this serves as a reminder that the details of early DSA strongly in\ufb02uence the form of the particle distribution for a long time. In these WM*ad models, the electron break energy, Ee,br \u223c1\u22123 TeV at t/to = 1. It is slightly higher than that of the analogous WM* models, because of weaker magnetic \ufb01elds that result in reduced cooling. We ran two WM1 simulations with increased precursor width due to FEB placement, L = \u03b6rs (WM1feb2,3, \u03b6 = 0.25,0.5), and one with slow (Bohm) diffusion throughout the precursor (WM1Bohm, K(r) = 1 for r > rs). Figure 4 compares the WM1 model (\u03b6 = 0.1), the WM1feb3 model and the WM1Bohm model. We note the results of WM1feb2 are essentially the same as those of WM1feb3 model, and so not shown here. The diffusion length of protons with pp,max/mpc = 105 is lmax \u22480.41pc for B0 = 5 \u00b5G and us = 5000 km s\u22121. Since \fDiffusive Shock Acceleration at SNRs 9 lmax is greater than the FEB distance, L = 0.32pc at t/to = 1 in the WM1 model, the high energy end of the CR spectrum is strongly affected by particle escape through the FEB. In the WM1feb2 and WM1feb3 models, on the other hand, lmax < L = 0.8 \u22121.6pc. So, escape of the highest particles is not signi\ufb01cant, and both WM1feb2,3 models have similar CR spectra extending to pp,max slightly higher than that in WM1 model. One can see that the overall distribution of PCR is about the same in these models, except far upstream, near rFEB, where PCR is dominated by the highest energy particles that can diffuse to the FEB location. Referring now to the comparison between the WM1 and WM1Bohm models, we can see that Gp in the WM1Bohm model (dot-dashed line) extends to higher pp,max by a factor of three or so. In addition, the spatial pro\ufb01le of PCR is slightly broader, compared to the WM1 model. 
These differences have similar causes; namely, reduced CR escape at high energies, re\ufb02ecting stronger scattering upstream of the subshock in the WM1Bohm model. Note that the introduction of a FEB upstream of the shock or a diffusion scale parameter, K(r), in the precursor does not in fact soften the CR spectrum near pp,max. Instead, it simply affects where an exponential cutoff sets in, without altering the CR spectrum just below pp,max. Figure 5 compares the volume integrated proton spectrum, Gp and its slope for different models (at t/to = 1 for the warmISM models and at t/to = 0.5 for the hot-ISM models). Most of these behaviors have already been addressed for the warmISM simulations, but this representation provides a good general summary and a simple illustration of the comparative properties of the hot-ISM models. In the \ufb01rst and second rows from the top, we show WM* models with uw,2 = 0 and WM*ad models with uw,2 = \u2212vA,2, respectively. In the third row from the top, the models with the different FEB position are compared: \u03b6 = 0.1 (WM1), 0.25 (WM1feb2), and 0.5 (WMfeb3). As mentioned previously, we can see here that the WM1feb2 (red dotted lines) and WM1feb3 (blue dashed) models are almost identical, while pp,max of the integrated spectrum is slightly lower in WM1 model (black solid). For the WM1Bohm model with the Bohm-like diffusion coef\ufb01cient, pp,max is higher than other models because of smaller \u03ba. In the bottom row of Figure 5, the three hot-ISM models, HM1, HM1ad, and HM1li, are illustrated. In these models DSA is less ef\ufb01cient compared to the warm-ISM models, because of smaller sonic Mach number, Ms, and because MFA is less ef\ufb01cient in response to smaller Alfv\u00e9nic Mach number, MA,0. As a result, the CR spectra deviate only slightly from the test-particle power-law and the downstream magnetic \ufb01elds are weaker. We also show in the bottom row of Figure 5 results from the one case we computed with a reduced CR injection rate; model HM1li. As expected, because PCR is reduced, the precursor \ufb02ow is less modi\ufb01ed and the CR acceleration is almost in the test-particle regime. From the results illustrated in Figure 5 it seems dif\ufb01cult for the typical SNR parameters considered here to obtain a proton momentum spectrum steeper than p\u22124 near pp,max even when the nominally rather strong in\ufb02uences of postshock Alfv\u00e9nic drift are included. The comparison of different models in Figure 5 further illustrates how the spectrum of accelerated CRs depends on the nonlinear interplay among MFA, Alfv\u00e9nic drift and particle escape. Turning brie\ufb02y to the simulated CR electron properties, since they are responsible for the radio through X-ray nonthermal emissions in SNRs, we note for p < pe,br that the slopes of Ge and Gp should be similar, because the electron spectrum is not affected by radiative cooling at those momenta. According to Figure 5, the slope of Ge(p) near p/mpc \u223c1 (\u03b3e \u223c2000) would be about 4.2, virtually independent of the model parameters. So the radio spectral index is expected to be similar in these models. Figure 6 shows the time evolution of the various dynamical shock properties for different models, including the density compression factors, ampli\ufb01ed magnetic \ufb01eld strengths, the postshock CR pressure, CR injection fraction and the fraction of the explosion energy transferred to CRs for the models without (left column) or with (right column) postshock Alfv\u00e9nic drift. 
The left column also includes for comparison the WM1li model with a lower injection. Several of these comparisons have already been addressed. According to Equation (2), the MFA factor, B1/B0, depends on the precursor strength (i.e., U1) and the Alfv\u00e9nic Mach number, MA,0. So the preshock magnetic \ufb01eld, B1, increases rapidly in the early stage during which the precursor develops, but later it decreases in time with diminishing MA,0. In the descending order of the effective width of the magnetic \ufb01eld precursor, M2, M4, M3, M1, the precursor grows and the value of B1(t) peaks at progressively earlier times. After reaching its peak value, the B1 values decline as B1 \u2248136 \u00b5G(t/to)\u22123/5 and become similar in all four models. Here the straight dotted lines show the limiting magnetic \ufb01eld, Bsat = 168 \u00b5G(t/to)\u22123/5, given in Equation (6). One can see that B1 stays below Bsat for the models considered here. From the third row of Figure 6 we can see that the MFA model pro\ufb01le strongly in\ufb02uences the early evolution of the postshock CR pressure, PCR,2. But once the precursor growth and MFA saturate, the postshock CR pressure is similar for a given model class (postshock Alfv\u00e9nic drift on or off). Because of the steepening of the CR spectrum (see Figure 5), PCR,2 is smaller in WM*ad models, compared to WM* models. But the difference is less than a factor of two. The bottom panels of Figure 6 show that by the end of the simulations about 30% of the SN explosion energy is transferred to CRs in WM* models, while WM*ad models are somewhat less ef\ufb01cient, as expected. As can be seen in Figures 2 and 3, PCR,2 is smaller but the shock radius rs is larger in WM*ad models, resulting in a rather small difference in the volume integrated energy, ECR, between the two model classes. A reduced CR injection rate also reduces this energy transfer in the WM1li model. Thus, in our models the amount of the CR energy produced in the SNRs is smaller than what had been reported in some previous DSA simulations in which the Alfv\u00e9nic drift was not included or Alfv\u00e9n speed in the background \ufb01eld, B0, was adopted (e.g. Berezhko & V\u00f6lk 1997; Berezhko et al. 2009; Kang & Jones 2006). Even though the CR acceleration ef\ufb01ciency is reduced in these simulations, the estimated values are still suf\ufb01cient to replenish CR energy escape from the Galaxy. 3.2. Emissions It is useful to know how the differences in CR particle spectra are translated into differences in nonthermal emissions spectra (e.g., Edmon et al. 2011). For a power-law electron distribution, fe(p) \u221dp\u2212qe, the radiation spectra due to synchrotron and iC processes will have a power-law form, F\u03bd \u221d \u03bd\u2212\u03b1 with a slope \u03b1syn = \u03b1iC = (qe \u22123)/2. For the test-particle slope, qe = 4, \u03b1syn = \u03b1iC = 0.5, so \u03bdL\u03bd \u221d\u03bd+0.5, where L\u03bd is the luminosity spectrum in units of erg s\u22121eV\u22121. The same power\f10 Kang, Jones, and Edmon law momentum distribution for protons, fp(p) \u221dp\u2212qp with a cutoff at Ep,max, \u03c00 decay emission spectrum has roughly a power-law form with \u03b1\u03c00 = qp \u22123, for the photon energy 80MeV \u2272E\u03b3 \u22720.1Ep,max (Kang et al. 2012). So for \u03c00 decay emissions with qp = 4, \u03bdL\u03bd has a \ufb02at shape (\u03b1\u03c00 + 1 = 0) typically for the 100 MeV 10 TeV band (note: \u03bd = 1024 Hz corresponds to E\u03b3 = 4 GeV). 
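The slope bookkeeping used throughout this subsection is summarized by the small helper below (an illustrative sketch; names are ours): it converts power-law particle slopes into the photon spectral indices quoted in the text.

```python
def photon_indices(q_e, q_p):
    """Photon indices (F_nu ~ nu^-alpha) for power-law particle distributions
    f(p) ~ p^-q: synchrotron and iC emission from electrons have
    alpha_syn = alpha_iC = (q_e - 3)/2, and pi^0-decay emission from protons
    has alpha_pi0 ~ q_p - 3 for 80 MeV <~ E_gamma <~ 0.1 E_p,max."""
    alpha_syn = (q_e - 3.0) / 2.0
    return alpha_syn, alpha_syn, q_p - 3.0

# Examples from the text: q_e ~ 4.2 gives alpha_syn ~ 0.6 in the radio band,
# while test-particle protons (q_p = 4) give alpha_pi0 = 1, i.e. a flat
# nu L_nu for the pi^0-decay component.
```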
In Figure 7 the volume integrated radiation spectra, \u03bdL\u03bd, are shown for different models, where Ke/p = 10\u22124 is adopted for convenience. In the upper two panels the contributions from the different processes are compared for WM1ad and W4 models. The lower left panel shows the results for four models with different pro\ufb01les of B(r) in the precursor: WM1, WM2, WM3, and WM4 models. The results for the hot-ISM models are shown in the lower right panel. As shown in Figure 5, the slope of the electron spectrum is almost universally qe \u22484.2 for \u03b3e \u223c103 \u2212105, so the spectral shape of the radio synchrotron emission should be similar in different models. On the other hand, the X-ray synchrotron emission can be affected by the radiative cooling. In the hot ISM models, the postshock CR electron population has not cooled signi\ufb01cantly by t/to = 0.5, so \u03bdL\u03bd has a relatively sharp peak in the X-ray band. In the warm ISM models with much stronger ampli\ufb01ed magnetic \ufb01elds, the rise of \u03bdL\u03bd in X-ray synchrotron emission before the cutoff is \ufb02attened by the radiative cooling. This demonstrates that the spectral shape in the X-ray band is affected by the evolution of B(r,t) as well as the shock dynamics, us(t). In particular, for the WM4 model a distinctive signature of electron cooling can be seen in iC emission for E\u03b3 = 1\u2212100 GeV (in the case where iC emission dominates over \u03c00 decay emission). With the assumed value of Ke/p and a background radiation \ufb01eld representative of the galactic plane, \u03c00 decay emission dominates over electronic iC scattered emission in the GeVTeV \u03b3-ray band. As can be seen in Figures 2 and 3, the high energy end of CR proton spectra vary widely among different models, depending on the models for MFA and Alfv\u00e9nic drift, so the ensuing \u03c00 decay emission spectra can be very different. Model WM1ad shown in the top left panel has the proton cutoff at Ep,max \u223c104 GeV and the \u03b3-ray cutoff at E\u03b3 \u223c4 TeV (1027 Hz). For this model, the shape of \u03bdL\u03bd due to \u03c00 decay is slightly steeper than the \ufb02at spectrum expected for the test-particle power law with qp = 4. The warm ISM models with other MFA models, M2-M4, have the \u03b3-ray cutoff at higher energies, E\u03b3 \u223c10 \u2212100 TeV, and their \u03c00 decay emission spectra show a concave curvature. Such a signature of nonlinear DSA has not been observed in real SNRs, so these models are probably unrealistic and a signi\ufb01cant revision for the current DSA theory is called for. But as stated before, our focus here is to demonstrate how different models for MFA, Afv\u00e9nic drift and FEB affect the high energy end of the CR proton spectra and their nonthermal emission spectra. We note that the sharp feature near E\u03b3 \u223c100 MeV comes from the low energy cross-section for the p \u2212p collision. 4. SUMMARY The most direct evidence for the particle acceleration at SNR shocks can be provided by multi-wavelength observations of nonthermal radiation emitted by CR protons and electrons (e.g. Berezhko et al. 2012; Morlino & Caprioli 2012; Kang et al. 2012). Recent observations of young SNRs in the GeV-TeV bands seem to imply that the accelerated proton spectrum might be much steeper than predicted by conventional nonlinear DSA theory, which is based on some simplifying assumptions for wave-particle interactions (e.g. Acero et al. 2010; Acciari et al. 2011; Giordano et al. 2012). 
Thus a detailed understanding of plasma physical processes at collisonless shocks is essential in testing the DSA hypothesis for the origins of Galactic CRs (e.g. Caprioli 2012; Kang 2013). In this study, we have explored how magnetic \ufb01eld ampli\ufb01cation, drift of scattering centers, and particle escape could affect the outcomes of nonlinear DSA at the outer shock of Type Ia SNRs, implementing some phenomenological models for such processes into the exiting DSA simulation code. Given the current lack of full understanding of different essential processes, we have considered several heuristic models and included a moderately large number of models and parameters in order to examine underlying model sensitivities. We do not claim that any of these models accurately represent real, young SNRs. We have, in particular, considered: a) four different models for magnetic \ufb01eld ampli\ufb01cation (MFA), B(r,t), in the precursor, b) inclusion of wave damping (\u2192 plasma heating) through a parameter, \u03c9H, c) Alfv\u00e9nic drift, with allowance for super-gyro-scale \ufb01eld disorder, through a drift speed adjustment parameter, fA, d) inclusion of downstream, postshock Alfven\u00edc drift in some cases, e) a free escape boundary (FEB) with adjustable scale through a parameter, \u03b6, f) reduced CR scattering ef\ufb01ciency towards the front of the shock precursor, through a diffusion scale parameter, K(r), and g) variation of the thermal leakage injection rate through a parameter, \u01ebB. In these, kinetic DSA simulations the time-dependent evolution of the pitch-angle averaged phasespace distribution functions of CR protons and electrons are followed along with the coupled, dynamical evolution of an initially Sedov-Taylor blast wave. Radiation spectra from the CR electrons and protons have been calculated through post processing of the DSA simulation data. Since the spatial pro\ufb01le of the ampli\ufb01ed magnetic \ufb01eld, B(r,t), is not in general a simple step function, the timedependent evolution of the particle acceleration depends on interplay between smaller diffusion coef\ufb01cient and faster Alfv\u00e9nic drift, which have opposite effects on DSA. Stronger magnetic \ufb01elds result in smaller scattering lengths and faster acceleration, leading to higher pmax and greater precursor compression. On the other hand, faster Alfv\u00e9nic drift away from the shock steepens the CR spectrum and reduces the CR acceleration ef\ufb01ciency (Ptuskin et al. 2010; Caprioli 2012). The main results can be summarized as follows. 1. The high energy end of the proton spectrum depends sensitively on the strength and pro\ufb01le of the ampli\ufb01ed magnetic \ufb01eld in the precursor. For typical SNR shock properties considered here, the maximum proton energy can reach up to Ep,max \u22481 PeV except for one MFA model (M1, Equation (2)) which has a very thin region of ampli\ufb01ed magnetic \ufb01eld in the precursor, such that its effective width is less than the diffusion length of the highest energy particles produced in the other MFA models. Consequently, in the M1 models, in which the magnetic \ufb01eld is not ampli\ufb01ed signi\ufb01cantly for most of the precursor, DSA is too slow to reach such high energies. This model exhibits relatively less CR spectral concavity as a byproduct of its relative inability to accelerate to very high energies. 2. The MFA models M2 M4 given in Equations (3)(5) produce broader magnetic \ufb01eld precursors than the M1 model. 
In those models the CR spectrum is indeed steepened \fDiffusive Shock Acceleration at SNRs 11 signi\ufb01cantly for E < 10 TeV by fast Alfv\u00e9nic drift. But the CR spectrum for E > 10 TeV shows a strong concave curvature, since the high energy ends of the spectra are established early on, when magnetic \ufb01elds are being ampli\ufb01ed through the initial development of the precursor. In fact, the CR spectra just below Ep,max in these models can approach the very hard spectral slope, E\u22121. 3. In the models with postshock Alfv\u00e9nic drift (uw,2 \u2248\u2212vA), the CR spectrum is slightly softer than that of the models without it, although it remains somewhat concave. In these models, the CR acceleration ef\ufb01ciency is reduced by a factor of less than two and MFA in the precursor becomes also weaker, compared to the models without postshock Alfv\u00e9nic drift. 4. Reduced CR scattering ef\ufb01ciencies far upstream of the shock due, for instance to the turbulent \ufb01eld being dominated by small scale, non-resonant\ufb02uctuations, and resulting escape of the highest energy particles also regulate the high energy end of the CR proton spectrum in important ways (i.e., K(r) \u2265 1). We demonstrated, in addition, that the CR proton spectrum near the high energy cutoff is strongly in\ufb02uenced by the FEB location if the high energy diffusion length, lmax, approaches the width of the precursor; i.e., if L < lmax, where L = \u03b6rs is the FEB distance. However, the introduction of a diffusion scale parameter, K(r), or a FEB upstream of the shock simply lowers pp,max where the exponential cutoff sets in, rather than steeping the CR spectrum p \u2272pp,max. 5. Nonthermal radiation from SNRs carries signi\ufb01cant information about nonlinear DSA beyond the simple testparticle predictions. In particular, the shape of X-ray synchrotron emission near the cutoff is determined by the evolution of the ampli\ufb01ed magnetic \ufb01eld strength as well as the shock dynamics. 6. Since the high energy end of the CR proton spectrum consists of the particles that are injected during the early stages of SNRs, the spectral shape of \u03c00 decay emission near the high energy cutoff depends on the time-dependent evolution of the CR injection, MFA, and particle escape as well as the early dynamical evolution of the SNR shock. These properties, in return, depend on dynamical feedback from the CRs and MFA. The end results are not likely well modeled by a succession of independent, static solutions. This study demonstrates that a detailed understanding of plasma physical processes operating at collisionless shocks is crucial in predicting the CR energy spectra accelerated at SNR shocks and nonthermal emissions due to those CRs. H.K. was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012-001065). T.W.J. was supported in this work at the University of Minnesota by NASA grant NNX09AH78G, NSF grant AST-1211595 and by the Minnesota Supercomputing Institute for Advanced Computational Research. P.P.E. was supported by Harvard Research Computing and the Institute for Theory and Computation at the Center for Astrophysics. The authors would like to thank the anonymous referee for the constructive suggestions and comments. H.K. also thanks Vahe Petrosian and KIPAC for their hospitality during the sabbatical leave at Stanford University where a part of the paper was written." 
+ }, + { + "url": "http://arxiv.org/abs/1212.3246v1", + "title": "Diffusive Shock Acceleration at Cosmological Shock Waves", + "abstract": "We reexamine nonlinear diffusive shock acceleration (DSA) at cosmological\nshocks in the large scale structure of the Universe, incorporating\nwave-particle interactions that are expected to operate in collisionless\nshocks. Adopting simple phenomenological models for magnetic field\namplification (MFA) by cosmic-ray (CR) streaming instabilities and Alfv'enic\ndrift, we perform kinetic DSA simulations for a wide range of sonic and\nAlfv'enic Mach numbers and evaluate the CR injection fraction and acceleration\nefficiency. In our DSA model the CR acceleration efficiency is determined\nmainly by the sonic Mach number Ms, while the MFA factor depends on the\nAlfv'enic Mach number and the degree of shock modification by CRs. We show that\nat strong CR modified shocks, if scattering centers drift with an effective\nAlfv'en speed in the amplified magnetic field, the CR energy spectrum is\nsteepened and the acceleration efficiency is reduced significantly, compared to\nthe cases without such effects. As a result, the postshock CR pressure\nsaturates roughly at ~ 20 % of the shock ram pressure for strong shocks with\nMs>~ 10. In the test-particle regime (Ms<~ 3), it is expected that the magnetic\nfield is not amplified and the Alfv'enic drift effects are insignificant,\nalthough relevant plasma physical processes at low Mach number shocks remain\nlargely uncertain.", + "authors": "Hyesung Kang, Dongsu Ryu", + "published": "2012-12-13", + "updated": "2012-12-13", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE", + "astro-ph.CO" + ], + "main_content": "INTRODUCTION It is expected that hierarchical gravitational clustering of matter induces shock waves in baryonic gas in the large-scale structure (LSS) of the Universe (Kang et al. 1996; Miniati et al. 2000). Simulations for the LSS formation suggest that strong shocks (Ms \u227310) form in relatively cooler environments in voids, \ufb01laments, and outside cluster virial radii, while weak shock (Ms \u2272several) are produced by mergers and \ufb02ow motions in hotter intracluster media (ICMs) (Ryu et al. 2003; Pfrommer et al. 2006; Kang et al. 2007; Skillman et al. 2008; Hoeft et al. 2008; Vazza et al. 2009; Br\u00a8 uggen et al. 2012). Observationally, the existence of such weak shocks in ICMs has been revealed through temperature jumps in the X-ray emitting gas and Mpc-scale relics with radio spectra softening downstream of the shock (see Markevitch & Vikhlinin 2007; Feretti et al. 2012, for reviews). These cosmological shocks are the primary means through which the gravitational energy released during the LSS formation is dissipated into the gas entropy, magnetic \ufb01eld, turbulence and nonthermal particles (Ryu et al. 2008). In fact, shocks are ubiquitous in astrophysical environments from the heliosphere to galaxy clusters and they are thought to be the main \u2018cosmic accelerators\u2019 of high energy cosmic-ray (CR) particles (Blandford & Eichler 1987). In di\ufb00usive shock acceleration (DSA) theory, suprathermal particles are scattered by magnetohydrodynamic (MHD) waves and isotropized in the local wave frames, and gain energy through multiple crossings of the shock (Bell 1978; Drury 1983; Malkov & Drury 2001). 
While most postshock thermal particles are advected downstream, some suprathermal particles energetic enough to swim against downstream turbulent waves can cross the shock and be injected into the Fermi \ufb01rst-order process. Then these streaming CRs generate resonant waves via two-stream instability and nonresonant waves via CR current-driven instability, which in turn amplify turbulent magnetic \ufb01elds in the preshock region (Bell 1978; Lucek & Bell 2000; Bell 2004; Schure et al. 2012). Thin X-ray synchrotron emitting rims observed in several young supernova remnants (SNRs) indicate that CR electrons are accelerated to 10-100 TeV and cool radiatively in the magnetic \ufb01eld of several 100 \u00b5G behind the forward shock (e.g., Parizot et al. 2006; Reynolds et al. 2012). This provides clear evidence for e\ufb03cient magnetic \ufb01eld ampli\ufb01cation during CR acceleration at strong CR modi\ufb01ed shocks. These plasma physical processes, i.e., injection of suprathermal particles into the CR population, excitation of MHD waves and ampli\ufb01cation of turbulent magnetic \ufb01elds via plasma instabilities, and further acceleration of CRs via Fermi \ufb01rst-order process are important ingredients of DSA and should operate at all types of astrophysical shocks including cosmological shocks in the LSS (e.g., Malkov & Drury 2001; Zweibel & Everett 2010; Schure et al. 2012; Br\u00a8 uggen et al. 2012). In addition, relativistic particles can be accelerated \f\u2013 3 \u2013 stochastically by MHD turbulence, most likely driven in ICMs of merging clusters (Petrosian 2001; Cassano & Brunetti 2005; Brunetti & Lazarian 2007). CRs can be also injected into the intergalactic space by radio galaxies (Kronberg et al. 2001) and through winds from star-forming galaxies (V\u00a8 olk & Atoyan 1999), and later re-accelerated by turbulence and/or shocks. Di\ufb00use synchrotron emission from radio halos and relics in galaxy clusters indicates the presence of GeV electrons gyrating in \u00b5G-level magnetic \ufb01elds on Mpc scales (e.g., Carilli & Taylor 2002; Govoni & Feretti 2004; van Weeren et al. 2010; Kang et al. 2012). On the other hand, non-detection of \u03b3-ray emission from galaxy clusters by Fermi-LAT and VERITAS observations, combined with radio halo observations, puts rather strong constraints on the CR proton population and the magnetic \ufb01eld strength in ICMs, if one adopts the \u201chadronic\u201d model, in which inelastic collisions of CR protons with ICM protons produce the radio emitting electrons and the \u03c00 decay (Ackermann et al. 2010; Donnert et al. 2010; Jeltema & Profumo 2011; Arlen et al. 2012). Alternatively, in the \u201cre-acceleration\u201d model, in which those secondary electrons produced by p-p collisions are accelerated further by MHD turbulence in ICMs, the CR proton pressure not exceeding a few % of the gas thermal pressure could be consistent with both the Fermi-LAT upper limits from the GeV \u03b3-ray \ufb02ux and the radio properties of cluster halos (Brunetti et al. 2012). Recently, ampli\ufb01cation of turbulent magnetic \ufb01elds via plasma instabilities and injection of CR protons and electrons at non-relativistic collisonless shocks have been studied, using Particle-in-Cell (PIC) and hybrid plasma simulations (e.g. Riquelme & Spitkovsky 2009, 2011; Guo et al. 2010; Garat\u00b4 e & Spitkovsky 2012). 
In PIC simulations, the Maxwell\u2019s equations for electric and magnetic \ufb01elds are solved along with the equations of motion for ions and electrons, so the full wave-particle interactions can be followed from \ufb01rst principles. However, extremely wide ranges of length and time scales need to be resolved mainly because of the large proton to electron mass ratio. In hybrid simulations, only the ions are treated kinetically while the electrons are treated as a neutralizing, massless \ufb02uid, alleviating severe computational requirements. However, it is still prohibitively expensive to simulate the full extent of DSA from the thermal energies of background plasma to the relativistic energies of cosmic rays, following the relevant plasma interactions at the same time. So we do not yet have full understandings of injection and di\ufb00usive scattering of CRs and magnetic \ufb01eld ampli\ufb01cation (MFA) to make precise quantitative predictions for DSA. Instead, most of kinetic DSA approaches, in which the di\ufb00usion-convection equation for the phase-space distribution of particles is solved, commonly adopt phenomenological models that may emulate some of those processes (e.g., Kang et al. 2002; Berezhko et al. 2009; Ptuskin et al. 2010; Lee et al. 2012; Caprioli 2012; Kang 2012). Another approximate method is a steady-state Monte Carlo simulation approach, in which parameterized models for particle di\ufb00usion, growth of self-generated MHD turbulence, wave dissipation and plasma heating are implemented (e.g., \f\u2013 4 \u2013 Vladimirov et al. 2008). In our previous studies, we performed DSA simulations of CR protons at cosmological shocks, assuming that the magnetic \ufb01eld strength is uniform in space and constant in time, and presented the time-asymptotic values of fractional thermalization, \u03b4(Ms), and fractional CR acceleration, \u03b7(Ms), as a function of the sonic Mach number Ms (Kang & Jones 2007; Kang et al. 2009). These energy dissipation e\ufb03ciencies were adopted in a post-processing step for structure formation simulations in order to estimate the CR generation at cosmological shocks (e.g., Skillman et al. 2008; Vazza et al. 2009). Recently, Vazza et al. (2012) have used those e\ufb03ciencies to include self-consistently the CR pressure terms in the gasdynamic conservation equations for cosmological simulations. In this paper, we revisit the problem of DSA e\ufb03ciency at cosmological shocks, including phenomenological models for MFA and drift of scattering centers with Alfv\u00b4 en speed in the ampli\ufb01ed magnetic \ufb01eld. Ampli\ufb01cation of turbulent magnetic \ufb01elds driven by CR streaming instabilities is included through an approximate, analytic model suggested by Caprioli (2012). As in our previous works, a thermal leakage injection model and a Bohm-like di\ufb00usion coe\ufb03cient (\u03ba(p) \u221dp) are adopted as well. This paper is organized as follows. The numerical method and phenomenological models for plasma physical processes in DSA theory, and the model parameters for cosmological shocks are described in Section 2. We then present the detailed simulation results in Section 3 and summarize the main conclusion in Section 4. 2. 
DSA MODEL In the di\ufb00usion approximation, where the pitch-angle distribution of CRs is nearly isotropic, the Fokker-Plank equation of the particle distribution function is reduced to the following di\ufb00usion-convection equation: \u2202f \u2202t + (u + uw)\u2202f \u2202x = p 3 \u2202(u + uw) \u2202x \u2202f \u2202p + \u2202 \u2202x \u0014 \u03ba(x, p)\u2202f \u2202x \u0015 , (1) where f(x, p, t) is the isotropic part of the pitch-angle averaged CR distribution function, \u03ba(x, p) is the spatial di\ufb00usion coe\ufb03cient along the direction parallel to the mean magnetic \ufb01eld and uw is the drift speed of local Alfv\u00b4 enic wave turbulence with respect to the plasma (Skilling 1975). Here, we consider quasi-parallel shocks in one-dimensional planar geometry, in which the mean magnetic \ufb01eld is roughly parallel to the \ufb02ow direction. The \ufb02ow velocity, u, is calculated by solving the momentum conservation equation with dynamical feedback of the CR pressure and self-generated magnetic \ufb01elds, \u2202(\u03c1u) \u2202t + \u2202(\u03c1u2 + Pg + Pc + PB) \u2202x = 0. (2) \f\u2013 5 \u2013 The CR pressure, Pc, is calculated self-consistently with the CR distribution function f, while the magnetic pressure, PB, is calculated according to our phenomenological model for MFA (see Section 2.4) rather than solving the induction equation (Caprioli et al. 2009). We point that the dynamical e\ufb00ects of magnetic \ufb01eld are not important with PB \u22720.01\u03c10u2 s. The details of our DSA numerical code, the CRASH (Cosmic-Ray Amr SHock), can be found in Kang et al. (2002). 2.1. Thermal Leakage Injection Injection of protons from the postshock thermal pool into the CR population via waveparticle interactions is expected to depend on several properties of the shock, including the sonic and Alfv\u00b4 enic Mach numbers, the obliquity angle of mean magnetic \ufb01eld, and the strength of pre-existing and self-excited MHD turbulence. As in our previous studies, we adopt a simple phenomenological model in which particles above an \u201de\ufb00ective\u201d injection momentum pinj get injected to the CR population: pinj \u22481.17mpu2 \u0012 1 + 1.07 \u01ebB \u0013 , (3) where \u01ebB = B0/B\u22a5is the ratio of the mean magnetic \ufb01eld along the shock normal, B0, to the amplitude of the postshock MHD wave turbulence, B\u22a5(Malkov & Drury 2001; Kang et al. 2002). This injection model re\ufb02ects plasma physical arguments that the particle speed must be several times larger than the downstream \ufb02ow speed, u2, depending on the strength of MHD wave turbulence, in order for suprathermal particles to leak upstream across the shock transition layer. Since the physical range of the parameter \u01ebB is not tightly constrained, we adopt \u01ebB = 0.25 as a canonical value, which results in the injected particle fraction, \u03be = ncr,2/n2 \u223c10\u22124 \u221210\u22123 for Ms \u22733 (see Figure 3 below). Previous studies showed that DSA saturates for \u03be \u227310\u22124, so the acceleration e\ufb03ciency obtained here may represent an upper limit for the e\ufb03cient injection regime (e.g., Kang et al. 2002; Caprioli 2012). In fact, this injection fraction is similar to the commonly adopted values for nonlinear DSA modeling of SNRs (e.g., Berezhko et al. 2009). If we adopt a smaller value of \u01ebB for stronger wave turbulence, pinj has to be higher, leading to a smaller injection fraction and a lower acceleration e\ufb03ciency. 2.2. 
Bohm-like Di\ufb00usion Model In our model, turbulent MHD waves are self-generated e\ufb03ciently by plasma instabilities driven by CRs streaming upstream in the shock precursor, so we can assume that CR particles \f\u2013 6 \u2013 are resonantly scattered by Alfv\u00b4 en waves with fully saturated spectrum. Then the particle di\ufb00usion can be approximated by a Bohm-like di\ufb00usion coe\ufb03cient, \u03baB \u223c(1/3)rgv, but with \ufb02attened non-relativistic momentum dependence (Kang & Jones 2007): \u03ba(x, p) = \u03ba\u2217 B0 B\u2225(x) \u00b7 p mpc, (4) where \u03ba\u2217= mpc3/(3eB0) = (3.13\u00d71022cm2s\u22121)B\u22121 0 , and B0 is the magnetic \ufb01eld strength far upstream expressed in units of \u00b5G. The strength of the parallel component of local magnetic \ufb01eld, B\u2225(x), will be described in the next section. Hereafter, we use the subscripts \u20180\u2019, \u20181\u2019, and \u20182\u2019 to denote conditions far upstream of the shock, immediate upstream and downstream of the subshock, respectively. 2.3. Magnetic Field Ampli\ufb01cation It was well known that CRs streaming upstream in the shock precursor excite resonant Alfv\u00b4 en waves with a wavelength (\u03bb) comparable with the CR gyroradius (rg), and turbulent magnetic \ufb01elds can be ampli\ufb01ed into the nonlinear regime (i.e., \u03b4B \u226bB0) (Bell 1978; Lucek & Bell 2000). Later, it was discovered that the nonresonant (\u03bb \u226arg), fast-growing instability driven by the CR current (jcr = encrus) can amplify the magnetic \ufb01eld by orders of magnitude, up to the level consistent with the thin X-ray rims at SNRs (Bell 2004). Several plasma simulations have shown that both B\u2225/B0 and B\u22a5/B0 can increase by a factor of up to \u223c10 \u221245 via the Bell\u2019s CR current-driven instability (Riquelme & Spitkovsky 2009, 2010; Ohira et al. 2009). Moreover, it was suggested that long-wavelength magnetic \ufb02uctuations can grow as well in the presence of short-scale, circularly-polarized Alfv\u00b4 en waves excited by the Bell-type instability (Bykov et al. 2011). Recently, Rogachevskii et al. (2012) have also shown that large-scale magnetic \ufb02uctuations can grow along the original \ufb01eld by the \u03b1 e\ufb00ect driven by the nonresonant instability and both the parallel and perpendicular components can be further ampli\ufb01ed. There are several other instabilities that may amplify the turbulent magnetic \ufb01eld on scales greater than the CR gyroradius such as the \ufb01rehose, \ufb01lamentation, and acoustic instabilities (e.g. Beresnyak et al. 2009; Drury & Downes 2012; Schure et al. 2012). Although Bell\u2019s (2004) original study assumed parallel background magnetic \ufb01eld, it turns out that the non-resonant instability operates for all shocks, regardless of the inclination angle between the shock normal and the mean background magnetic \ufb01eld (Schure et al. 2012), and so the isoptropization of the ampli\ufb01ed magnetic \ufb01eld can be a reasonable approximation (Riquelme & Spitkovsky 2009; Rogachevskii et al. 2012). 
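As a rough numerical illustration of the thermal-leakage injection momentum of Eq. (3) and the Bohm-like diffusion coefficient of Eq. (4), the short Python sketch below evaluates both. The function names and the example numbers (a shock with us ~ 1500 km/s, postshock speed u2 ~ us/4, and a local field amplified from 0.1 to 1 microgauss) are illustrative assumptions, not values taken from the simulations.

```python
M_P = 1.6726e-24       # proton mass [g]
C_LIGHT = 2.9979e10    # speed of light [cm/s]

def p_inj_over_mpc(u2_kms, eps_B=0.25):
    """Effective injection momentum of Eq. (3), in units of m_p*c.
    u2_kms: postshock flow speed in the shock frame [km/s];
    eps_B : thermal-leakage injection parameter B0/B_perp."""
    u2 = u2_kms * 1.0e5
    return 1.17 * M_P * u2 * (1.0 + 1.07 / eps_B) / (M_P * C_LIGHT)

def kappa_bohm(p_over_mpc, B_local_muG, B0_muG):
    """Bohm-like diffusion coefficient of Eq. (4) [cm^2/s], with the flattened
    momentum dependence and the reduction in the amplified local field B(x)."""
    kappa_star = 3.13e22 / B0_muG            # m_p c^3 / (3 e B0) in cgs
    return kappa_star * (B0_muG / B_local_muG) * p_over_mpc

# Assumed example: u2 ~ 1500/4 km/s, canonical eps_B = 0.25
print(p_inj_over_mpc(1500.0 / 4.0))   # ~ 8e-3, a suprathermal momentum well below m_p c
# p = 1e4 m_p c in a field amplified from 0.1 to 1 microgauss:
print(kappa_bohm(1.0e4, 1.0, 0.1))    # ~ 3e26 cm^2/s
```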
Here, we adopt the prescription for MFA due to CR streaming instabilities that was suggested by Caprioli (2012), based on the assumption of isotropization of the ampli\ufb01ed \f\u2013 7 \u2013 magnetic \ufb01eld and the e\ufb00ective Alfv\u00b4 en speed in the local, ampli\ufb01ed \ufb01eld: \u03b4B2/(8\u03c0\u03c10u2 s) = (2/25)(1 \u2212U5/4)2U\u22121.5, where \u03b4B = B \u2212B0 and U = (us \u2212u)/us is the \ufb02ow speed in the shock rest frame normalized by the shock speed us. In the test-particle regime where the \ufb02ow structure is not modi\ufb01ed, the upstream magnetic \ufb01eld is not ampli\ufb01ed in this model (i.e., U(x) = 1). In the shock precursor (x > xs, where xs is the shock position), the MFA factor becomes \u03b4B(x)2 B2 0 = 4 25M2 A,0 (1 \u2212U(x)5/4)2 U(x)3/2 , (5) where MA,0 = us/vA,0 is the Alfv\u00b4 enic Mach number for the far upstream Alfv\u00b4 en speed, and vA,0 = B0/\u221a4\u03c0\u03c10. This model predicts that MFA increases with MA,0 and the precursor strength (i.e., degree of shock modi\ufb01cation by CRs) (Vladimirov et al. 2008). In the case of a \u201cmoderately modi\ufb01ed\u201d shock, in which the immediate preshock speed is U1 \u22480.8, for example, the ampli\ufb01ed magnetic pressure increases to \u03b4B2 1/8\u03c0 \u22486.6 \u00d7 10\u22123\u03c10u2 s and the ampli\ufb01cation factor scales as \u03b4B1/B0 \u22480.12MA,0. We will show in the next section that the shock structure is modi\ufb01ed only moderately owing to the Alfv\u00b4 enic drift, so the magnetic \ufb01eld pressure is less than a few % of the shock ram pressure even at strong shocks (Ms \u227310). For the highest Mach number model considered here, Ms = 100, the preshock ampli\ufb01cation factor becomes \u03b4B1/B0 \u2248100, which is somewhat larger than what was found in the plasma simulations for the Bell-type current-driven instability (Riquelme & Spitkovsky 2009, 2010). Considering possible MFA beyond the Bell-type instability by other large-scale instabilities (e.g. Bykov et al. 2011; Rogachevskii et al. 2012; Schure et al. 2012), this level of MFA may not be out of reach. Note that this recipe is intended to be a heuristic model that may represent qualitatively the MFA process in the shock precursor. Assuming that the two perpendicular components of preshock magnetic \ufb01elds are completely isotropized and simply compressed across the subshock, the immediate postshock \ufb01eld strength can be estimated by B2/B1 = p 1/3 + 2/3(\u03c12/\u03c11)2. (6) We note that the MFA model described in equations (5)-(6) is also used for the di\ufb00usion coe\ufb03cient model given by equation (4). 2.4. Alfv\u00b4 enic Drift Resonant Alfv\u00b4 en waves excited by the cosmic ray streaming are pushed by the CR pressure gradient (\u2202Pc/\u2202x) and propagate against the underlying \ufb02ow in the shock precursor (e.g. Skilling 1975; Bell 1978). The mean drift speed of scattering centers is commonly \f\u2013 8 \u2013 approximated as the Alfv\u00b4 en speed, i.e., uw,1(x) \u2248+vA \u2248B(x)/ p 4\u03c0\u03c1(x), pointing upstream away from the shock, where B(x) is the local, ampli\ufb01ed magnetic \ufb01eld strength estimated by equation (5). For isotropic magnetic \ufb01elds, the parallel component would be roughly B\u2225\u2248 B(x)/ \u221a 3. But we simply use B(x) for the e\ufb00ective Alfv\u00b4 en speed, since the uncertainty in this model is probably greater than the factor of \u221a 3 (see Section 3 for a further comment on this factor). 
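The amplification prescription of Eqs. (5)-(6) is simple enough to evaluate directly. The sketch below does so for an assumed precursor flow speed U1 = 0.8 and subshock compression 4.2; these are illustrative numbers of the kind quoted in the text, not simulation output, and the function names are ours.

```python
import numpy as np

def precursor_amplification(M_A0, U):
    """delta_B/B0 in the precursor from Eq. (5)."""
    dB2_over_B02 = (4.0 / 25.0) * M_A0**2 * (1.0 - U**1.25)**2 / U**1.5
    return np.sqrt(dB2_over_B02)

def subshock_compression_of_B(rho2_over_rho1):
    """B2/B1 for isotropized perpendicular components compressed at the subshock, Eq. (6)."""
    return np.sqrt(1.0 / 3.0 + 2.0 / 3.0 * rho2_over_rho1**2)

# Moderately modified shock: U1 = 0.8, with M_A0 = 9.1*Ms ~ 91 for Ms = 10 and beta_P = 100
print(precursor_amplification(91.0, 0.8))   # ~ 10.5, i.e. delta_B1/B0 ~ 0.115 M_A0
print(subshock_compression_of_B(4.2))       # ~ 3.5
```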
In the postshock region the Alfv\u00b4 enic turbulence is probably relatively balanced, so the wave drift can be ignored, that is, uw,2 \u22480 (Jones 1993). Since the Alfv\u00b4 enic drift reduces the velocity di\ufb00erence between upstream and downstream scattering centers, compared to that of the bulk \ufb02ow, the resulting CR spectrum becomes softer than estimated without considering the wave drift. Here, we do not consider loss of turbulent magnetic energy and gas heating due to wave dissipation in order to avoid introducing additional free parameters to the problem. 2.5. Set-up for DSA Simulations Previous studies have shown that the DSA e\ufb03ciency depends primarily on the shock sonic Mach number (Kang et al. 2007). So we considered shocks with a wide range of the sonic Mach number, Ms = 1.5 \u2212100, propagating into the intergalactic medium (IGM) of di\ufb00erent temperature phases, T0 = 104 \u22125 \u00d7 107 K (Kang et al. 2005). Then, the shock speed is given by us = (150 km s\u22121)Ms(T0/106K)1/2. We specify the background magnetic \ufb01eld strength by setting the so-called plasma beta, \u03b2P = Pg/PB, the ratio of the gas pressure to the magnetic pressure. So the upstream magnetic \ufb01eld strength is given as B2 0 = 8\u03c0Pg/\u03b2P, where \u03b2P \u223c100 is taken as a canonical value in ICMs (see, e.g., Ryu et al. 2008). Then, the ratio of the background Alfv\u00b4 en speed to the sound speed, vA,0/cs = p 2/(\u03b2P\u03b3g) (where \u03b3g is the gas adiabatic index), which determines the signi\ufb01cance of Alfv\u00b4 enic drift, depends only on the parameter \u03b2P. Moreover, the upstream Alfv\u00b4 enic Mach number, MA,0 = us/vA,0 = Ms p \u03b2P\u03b3g/2, controls the magnetic \ufb01eld ampli\ufb01cation factor as given in equation (5). For \u03b2P = 100 and \u03b3g = 5/3, the background A\ufb02v\u00b4 en speed is about 10 % of the sound speed, i.e., vA,0 = 0.11cs (independent of Ms and T0), and MA,0 = 9.1Ms. For a higher value \u03b2P (i.e., weaker magnetic \ufb01elds), of course, the Alfv\u00b4 enic drift e\ufb00ect will be less signi\ufb01cant. With a \ufb01xed value of \u03b2P, the upstream magnetic \ufb01eld strength can be speci\ufb01ed by the upstream gas pressure, nH,0T0, as follow: B0 = 0.28 \u00b5G \u0012 nH,0T0 103 cm\u22123K \u00131/2 \u0012100 \u03b2P \u00131/2 . (7) \f\u2013 9 \u2013 We choose the hydrogen number density, nH,0 = 10\u22124 cm\u22123, as the \ufb01ducial value to obtain speci\ufb01c values of magnetic \ufb01eld strength shown in Figures 1 2 below. But this choice does not a\ufb00ects the time asymptotic results shown in Figures 3 4, since the CR modi\ufb01ed shock evolves in a self-similar manner and the time-asymptotic states depend primarily on Ms and MA,0, independent of the speci\ufb01c value of B0. Since the tension in the magnetic \ufb01eld lines hinders Bell\u2019s CR current-driven instability, MFA occurs if the background \ufb01eld strength satis\ufb01es the condition, B0 < Bs = (0.87 \u00b5G)(ncrus)1/2 (Zweibel & Everett 2010). For a typical shock speed of us \u223c103 km s\u22121 formed in the IGM with nH,0 \u223c10\u22126 \u221210\u22124 cm\u22123 with the CR injection fraction, \u03be \u223c 10\u22124 \u221210\u22123, the maximum magnetic \ufb01eld for the growth of nonresonant waves is roughly Bs \u223c0.1 \u22121 \u00b5G. 
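For reference, the background-field relations of this set-up, Eq. (7) together with vA,0/cs = sqrt(2/(beta_P*gamma_g)) and M_A0 = Ms*sqrt(beta_P*gamma_g/2), can be collected into a small helper. The example inputs (nH,0 = 1e-4 cm^-3, T0 = 1e7 K, beta_P = 100) are the fiducial choices quoted in the text; the function names are ours.

```python
import numpy as np

GAMMA_G = 5.0 / 3.0

def B0_muG(nH0, T0_K, beta_P=100.0):
    """Far-upstream field strength from Eq. (7) [microgauss]."""
    return 0.28 * np.sqrt(nH0 * T0_K / 1.0e3) * np.sqrt(100.0 / beta_P)

def vA0_over_cs(beta_P=100.0):
    """Ratio of the background Alfven speed to the sound speed."""
    return np.sqrt(2.0 / (beta_P * GAMMA_G))

def M_A0(Ms, beta_P=100.0):
    """Upstream Alfvenic Mach number, M_A0 = Ms * sqrt(beta_P * gamma_g / 2)."""
    return Ms * np.sqrt(beta_P * GAMMA_G / 2.0)

print(B0_muG(1.0e-4, 1.0e7))   # ~ 0.28 microgauss (hot ICM case)
print(vA0_over_cs())            # ~ 0.11
print(M_A0(5.0))                # ~ 45.6, i.e. M_A0 = 9.1 Ms for beta_P = 100
```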
The magnetic \ufb01eld strength estimated by equation (7) is B0 \u22480.28 \u00b5G for nH,0 = 10\u22124 cm\u22123 and T0 = 107 K (ICMs) and B0 \u224810\u22123 \u00b5G for nH,0 = 10\u22126 cm\u22123 and T0 = 104 K (voids). Considering the uncertainties in the model and the parameters, it seems reasonable to assume that MFA via CR streaming instabilities can be e\ufb00ective at cosmological shocks in the LSS (Zweibel & Everett 2010). In the simulations, the di\ufb00usion coe\ufb03cient, \u03ba\u2217in equation (4), can be normalized with a speci\ufb01c value of \u03bao. Then, the related length and time scales are given as lo = \u03bao/us and to = \u03bao/u2 s, respectively. Since the \ufb02ow structure and the CR pressure approach the timeasymptotic self-similar states, a speci\ufb01c physical value of \u03bao matters only in the determination of pmax/mpc \u22480.1u2 st/\u03ba\u2217at a given simulation time. For example, with \u03bao = 106\u03ba\u2217, the highest momentum reached at time t becomes pmax/mpc \u2248105(t/to). It was suggested that non-linear wave damping and dissipation due to ion-neutral collisions may weaken stochastic scatterings, leading to slower acceleration and escape of highest energy particles from the shock (Ptuskin & Zirakashvili 2005). Since these processes are not well understood in a quantitative way, we do not include wave dissipation in the simulations. Instead we implement a free escape boundary (FEB) at an upstream location by setting f(xFEB, p) = 0 at xFEB = 0.5 lo, which may emulate the escape of the highest energy particles with the di\ufb00usion length, \u03ba(p)/us \u2273xFEB. Under this FEB condition, the CR spectrum and the shock structure including the precursor approach the time-asymptotic states in the time scale of t/to \u223c1 (Kang 2012). As noted in the introduction, CR protons can be accelerated by merger and accretion shocks, injected into the intergalactic space by star forming galaxies and active galaxies, and accelerated by turbulence. Because of long life time and slow di\ufb00usion, CR protons should be accumulated in the LSS over cosmological times. So it seems natural to assume that ICMs contains pre-existing populations of CR protons. But their nature is not well constrained, except that the pressure of CR protons is less than a few % of the gas thermal \f\u2013 10 \u2013 pressure (Arlen et al. 2012; Brunetti et al. 2012). For a model spectrum of pre-existing CR protons, we adopt a simple power-law form, f0(p) = fpre\u00b7(p/pinj)\u2212s for p \u2265pinj, with the slope s = 4.5, which corresponds to the slope of the test-particle power-law momentum spectrum accelerated at M = 3 shocks. We note that the slope of the CR proton spectrum inferred from the radio spectral index (i.e., \u03b1R \u2248(s \u22122)/2) of cluster halos ranges 4.5 \u2272s \u22725 (e.g., Jeltema & Profumo 2011). The amplitude, fpre, is set by the ratio of the upstream CR to gas pressure, R \u2261Pc,0/Pg,0, where R = 0.05 is chosen as a canonical value. Table 1 lists the considered models: the weak shock models with T0 \u2265107 K, the strong shock models with T0 = 105 \u2212106 K, and the strongest shock models with T0 = 104 K represent shocks formed in hot ICMs, in the warm-hot intergalactic medium (WHIM) of \ufb01laments, and in voids, respectively. Simulations start with purely gasdynamic shocks initially at rest at xs = 0. 3. 
DSA SIMULATION RESULTS Figures 1 2 show the spatial pro\ufb01les of magnetic \ufb01eld strength, B(x), and CR pressure, Pc(x), and the distribution function of CRs at the shock location, gs(p), at t/to = 0.5, 1, 2 for models without or with pre-existing CRs: from top to bottom panels, Ms = 3 and T0 = 5 \u00d7 107K, Ms = 5 and T0 = 107K, Ms = 10 and T0 = 106K, and Ms = 100 and T0 = 104K. Note that the models with Ms = 3 \u22125 represent shocks formed in hot ICMs, while those with Ms = 10 and 100 reside in \ufb01laments and voids, respectively. The background magnetic \ufb01eld strength corresponds to B0 = 0.63, 0.28, 0.089, and 8.9\u00d7 10\u22123 \u00b5G for the models with Ms = 3, 5, 10, and 100, respectively, for the \ufb01ducial value of nH,0 = 10\u22124 cm\u22123 (see equation (7)). With our MFA model the postshock \ufb01eld can increase to B2 \u22482\u22123 \u00b5G for all these models, which is similar to the \ufb01eld strengths observed in radio halos and radio relics. The postshock CR pressure increases with the sonic Mach number, but saturates at Pc,2/(\u03c10u2 s) \u22480.2 for Ms \u227310. One can see that the precursor pro\ufb01le and gs(p) have reached the time-asymptotic states for t/to \u22731 for the Ms = 100 model, while the lower Mach number models are still approaching to steady state at t/to = 2. This is because in the Ms = 100 model, by the time t/to \u22481 the CR spectrum has extended to pmax that satis\ufb01es the FEB condition. For strong shocks of Ms = 10 \u2212100, the power-law index, q \u2261\u2212\u2202ln f/\u2202ln p, is about 4.3 4.4 at p \u223cmpc instead of q = 4, because the Alfv\u00b4 enic drift steepens the CR spectrum. For the models with pre-existing CRs in Figure 2, the pre-existing population is important only for weak shocks with Ms \u22725, because the injected population dominates in \f\u2013 11 \u2013 shocks with higher sonic Mach numbers. As mentioned in the Introduction, the signatures of shocks observed in ICMs through X-ray and radio observations can be interpreted by low Mach number shocks (Markevitch & Vikhlinin 2007; Feretti et al. 2012). In particular, the presence of pre-existing CRs is expect to be crucial in explaining the observations of radio relics (Kang et al. 2012). Figure 3 shows time-asymptotic values of downstream gas pressure, Pg,2, and CR pressure, Pc,2, in units of \u03c10u2 s, density compression ratios, \u03c31 = \u03c11/\u03c10 and \u03c32 = \u03c12/\u03c10, the ratios of ampli\ufb01ed magnetic \ufb01eld strengths to background strength, B2/B0 and B1/B0, and postshock CR number fraction, \u03be = ncr,2/n2, as a function of Ms for all the models listed in Table 1. We note that for the models without pre-existing CRs (left column) two di\ufb00erent values of T0 (and so us) are considered for each of Ms = 3, 4, 5, 10, 30, and 50 models, in order to explore the dependence on T0 for a given sonic Mach number. The \ufb01gure demonstrates that the DSA e\ufb03ciency and the MFA factor are determined primarily by Ms and MA,0, respectively, almost independent of T0. For instance, the two Mach 10 models with T0 = 105K (open triangle) and 106K (\ufb01lled triangle) show the similar results as shown in the left column of Figure 3. But note that the curves for Pcr,2 and \u03be increase somewhat unevenly near Ms \u22484 \u22127 for the models with pre-existing CRs in the right column, because of the change in T0 (see Table 1). 
At weak shocks with Ms \u22723, the injection fraction is \u03be \u227210\u22124 and the CR pressure is Pc,2/\u03c10u2 s \u22725 \u00d7 10\u22123 without pre-existing CRs, while both values depend on Pc,1 in the presence of pre-existing CRs. Since the magnetic \ufb01eld is not ampli\ufb01ed in the test-particle regime, these results remain similar to what we reported earlier in Kang & Ryu (2011). For a larger value of \u01ebB, the injection fraction and the CR acceleration e\ufb03ciency would increase. As shown in Kang & Jones (2007), however, \u03be and Pcr,2/\u03c10u2 s depend sensitively on the injection parameter \u01ebB for Ms \u22725, while such dependence becomes weak for Ms \u227310. Furthermore, there are large uncertainties in the thermal leakage injection model especially at weak shocks. Thus it is not possible nor meaningful to discuss the quantitative dependence of these results on \u01ebB, until we obtain more realistic pictures of the wave-particle interactions through PIC or hybrid plasma simulations of weak collisionless shocks. In the limit of large Ms, the postshock CR pressure saturates at Pc,2 \u22480.2\u03c10u2 s, the postshock density compression ratio at \u03c32 \u22485, and the postshock CR number fraction at \u03be \u22482 \u00d7 10\u22123. The MFA factors are B1/B0 \u223c0.12MA,0 \u223cMs and B2/B0 \u223c3Ms for Ms \u22735, as expected from equation (5). In Kang et al. (2007) we found that Pc,2 \u22480.55\u03c10u2 s in the limit of large Ms, when the magnetic \ufb01eld strength was assumed to be uniform in space and constant in time. Here we argue that MFA and Alfv\u00b4 enic drift in the ampli\ufb01ed magnetic \ufb01eld steepen the CR spectrum and reduce the DSA e\ufb03ciency drastically. \f\u2013 12 \u2013 Again, the presence of pre-existing CRs (right column) enhances the injection fraction and acceleration e\ufb03ciency at weak shocks of Ms \u22725, while it does not a\ufb00ect the results at stronger shocks. Since the upstream CR pressure is Pc,0 = 0.05Pg,0 = (0.03/Ms)\u03c10u2 s in these models, the enhancement factor, Pc,2/Pc,0 \u22481.5 \u22126 for Ms \u22643. So the DSA acceleration e\ufb03ciency exceeds only slightly the adiabatic compression factor, \u03c3\u03b3c 2 , where \u03b3c \u22484/3 is the adiabatic index of the CR population. As in Kang et al. (2007), the gas thermalization and CR acceleration e\ufb03ciencies are de\ufb01ned as the ratios of the gas thermal and CR energy \ufb02uxes to the shock kinetic energy \ufb02ux: \u03b4(Ms) \u2261[eg,2 \u2212eg,0(\u03c12/\u03c10)\u03b3g]u2 (1/2)\u03c10u3 s , \u03b7(Ms) \u2261[ec,2 \u2212ec,0(\u03c12/\u03c10)\u03b3c]u2 (1/2)\u03c10u3 s , (8) where eg and ec are the gas thermal and CR energy densities. The second terms inside the brackets subtract the e\ufb00ect of adiabatic compression occurred at the shock. Alternatively, the energy dissipation e\ufb03ciencies not excluding the e\ufb00ect of adiabatic compression across the shock can be de\ufb01ned as: \u03b4 \u2032(Ms) \u2261[eg,2u2 \u2212eg,0u0] (1/2)\u03c10u3 s , \u03b7 \u2032(Ms) \u2261[ec,2u2 \u2212ec,0u0] (1/2)\u03c10u3 s , (9) which may provide more direct measures of the energy generation at the shock. Note that \u03b7 = \u03b7 \u2032 for the models with Pc,0 = 0. Figure 4 shows these dissipation e\ufb03ciencies for all the models listed in Table 1. Again, the CR acceleration e\ufb03ciency saturates at \u03b7 \u22480.2 for Ms \u227310, which is much lower than what we reported in the previous studies without MFA (Ryu et al. 2003; Kang et al. 
2007). The CR acceleration e\ufb03ciency is \u03b7 < 0.01 for weak shocks (Ms \u22723) if there is no pre-existing CRs. But the e\ufb03ciency \u03b7 \u2032 can be as high as 0.1 even for these weak shocks, depending on the amount of pre-existing CRs. The e\ufb03ciency \u03b7 for weak shocks is not a\ufb00ected by the new models of MFA and Alfv\u00b4 enic drift, since the magnetic \ufb01eld is not ampli\ufb01ed in the test-particle regime. If we choose a smaller value of \u03b2P, the ratio vA,0/cs is larger, leading to less e\ufb03cient acceleration due to the stronger Afv\u00b4 enic drift e\ufb00ects. For example, for \u03b2P \u223c1 (i.e., equipartition \ufb01elds), which is relevant for the interstellar medium in galaxies, the CR acceleration e\ufb03ciency in the strong shock limit reduces to \u03b7 \u22480.12 (Kang 2012). On the other hand, if we were to choose a smaller wave drift speed, the CR e\ufb03ciency \u03b7 will increase slightly. For example, if we choose uw \u22480.3vA instead of uw \u2248vA, the value of \u03b7 in the high Mach number limit would increase to \u223c0.25 for the models considered here. On the other hand, if we choose a smaller injection parameter, for example, \u01ebB = 0.23, the injection fraction reduces from \u03be = 2.1\u00d710\u22124 to 6.2\u00d710\u22125 and the postshock CR pressure \f\u2013 13 \u2013 decreases from Pc,2/\u03c10u2 s = 0.076 to 0.043 for the Ms = 5 model, while \u03be = 2.2 \u00d7 10\u22123 to 3.3 \u00d7 10\u22124 and Pc,2/\u03c10u2 s = 0.18 to 0.14 for the Ms = 50 model. Considering that the CR injection fraction obtained in these simulations (\u03be > 10\u22124) is in the saturation limit of DSA, the CR acceleration e\ufb03ciency, \u03b7, for M \u227310 in Figure 4 should be regarded as an upper limit. 4. SUMMARY We revisited the nonlinear DSA of CR protons at cosmological shocks in the LSS, incorporating some phenomenological models for MFA due to CR streaming instabilities and Alfv\u00b4 enic drift in the shock precursor. Our DSA simulation code, CRASH, adopts the Bohm-like di\ufb00usion and thermal leakage injection of suprathermal particles into the CR population. A wide range of preshock temperature, 104 \u2264T0 \u22645 \u00d7 107K, is considered to represent shocks that form in clusters of galaxies, \ufb01laments, and voids. We found that the DSA e\ufb03ciency is determined mainly by the sonic Mach number Ms, but almost independent of T0. We assumed the background intergalactic magnetic \ufb01eld strength, B0, that corresponds to the plasma beta \u03b2P = 100. This is translated to the ratio of the Alfv\u00b4 en speed in the background magnetic \ufb01eld to the preshock sound speed, vA,0/cs = p 6/5\u03b2P \u22480.11. Then the Alfv\u00b4 enic Mach number MA,0 = p 5\u03b2P/6 Ms determines the extent of MFA (i.e., B1/B0), which in turn controls the signi\ufb01cance of Alfv\u00b4 enic drift in DSA. Although the preshock density is set to be nH,0 = 10\u22124 cm\u22123 just to give a characteristic scale to the magnetic \ufb01eld strength in the IGM, our results for the CR proton acceleration, such as the dissipation e\ufb03ciencies, do not depend on a speci\ufb01c choice of nH,0. If one is interested in CR electrons, which are a\ufb00ected by synchrotron and inverse Compton cooling, the electron energy spectrum should depend on the \ufb01eld strength B0 and so on the value of nH,0T0 (see equation (7)). 
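The efficiency definitions of Eqs. (8)-(9) above translate directly into code. The sketch below works in units where rho0 = us = 1, and the example numbers (Pc,2 ~ 0.2 rho0 us^2, a total compression of 5, no pre-existing CRs, and a fully relativistic CR gas with e_c = Pc/(gamma_c - 1)) are illustrative of the strong-shock limit rather than actual simulation output; the resulting value lands near the ~0.2 saturation level quoted above.

```python
def cr_efficiencies(e_c2, e_c0, u2, u0, rho0, us, sigma2, gamma_c=4.0/3.0):
    """CR acceleration efficiencies eta (Eq. 8) and eta' (Eq. 9).
    e_c2, e_c0: downstream / far-upstream CR energy densities;
    u2, u0    : downstream / upstream flow speeds in the shock frame;
    sigma2    : total density compression ratio rho2/rho0."""
    flux_kin = 0.5 * rho0 * us**3
    eta = (e_c2 - e_c0 * sigma2**gamma_c) * u2 / flux_kin   # adiabatic compression removed
    eta_prime = (e_c2 * u2 - e_c0 * u0) / flux_kin          # total downstream CR energy flux
    return eta, eta_prime

# Strong-shock illustration (assumed values): Pc,2 ~ 0.2 rho0 us^2, sigma2 ~ 5
rho0 = us = 1.0
e_c2 = 0.2 / (4.0 / 3.0 - 1.0)
print(cr_efficiencies(e_c2, 0.0, us / 5.0, us, rho0, us, 5.0))   # ~ (0.24, 0.24)
```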
The main results of this study can be summarized as follows: 1) With our phenomenological models for DSA, the injected fraction of CR particles is \u03be \u224810\u22124 \u221210\u22123 and the postshock CR pressure becomes 10\u22123 \u2272Pc,2/(\u03c10u2 s) \u22720.2 for 3 \u2264 Ms \u2264100, if there are no pre-existing CRs. A population of pre-existing CRs provides seed particles to the Fermi process, so the injection fraction and acceleration e\ufb03ciency increase with the amount of pre-existing CRs at weak shocks. But the presence of pre-existing CRs does not a\ufb00ect \u03be nor Pc,2 for strong shocks with Ms \u227310, in which the freshly injected particles dominate over the re-accelerated ones. 2) The nonlinear stage of MFA via plasma instabilities at collisioness shocks is not fully \f\u2013 14 \u2013 understood yet. So we adopted a model for MFA via CR streaming instabilities suggested by Caprioli (2012). We argue that the CR current, jcr \u223ce\u03be\u03c32nH,0us, is high enough to overcome the magnetic \ufb01eld tension, so the Bell-type instability can amplify turbulent magnetic \ufb01elds at cosmological shocks considered here (Zweibel & Everett 2010). For shocks with M \u22735, DSA is e\ufb03cient enough to develop a signi\ufb01cant shock precursor due to the CR feedback, and the ampli\ufb01ed magnetic \ufb01eld strength in the upstream region scales as B1/B0 \u22480.12MA,0 \u2248(\u03b2P/100)1/2Ms. This MFA model predicts that the postshock magnetic \ufb01eld strength becomes B2 \u22482 \u22123 \u00b5G for the shock models considered here (see Table 1). 3) This study demonstrates that if scattering centers drift with the e\ufb00ective Alfv\u00b4 en speed in the local, ampli\ufb01ed magnetic \ufb01eld, the CR energy spectrum can be steepened and the acceleration e\ufb03ciency is reduced signi\ufb01cantly, compared to the cases without MFA. As a result, the CR acceleration e\ufb03ciency saturates at \u03b7 = 2ec,r/\u03c10u3 s \u22480.2 for Ms \u227310, which is signi\ufb01cantly lower than what we reported in our previous study, \u03b7 \u22480.55 (Kang et al. 2007). We note that the value \u03b7 at the strong shock limit can vary by \u223c10 %, depending on the model parameters such as the injection parameter, plasma beta and wave drift speed. Inclusion of wave dissipation (not considered here) will also a\ufb00ect the extent of MFA and the acceleration e\ufb03ciency. This tells us that detailed understandings of plasma physical processes are crucial to the study of DSA at astrophysical collisionless shocks. 4) At weak shocks in the test-particle regime (Ms \u22723), the CR pressure is not dynamically important enough to generate signi\ufb01cant MHD waves, so the magnetic \ufb01eld is not ampli\ufb01ed and the Alfv\u00b4 enic drift e\ufb00ects are irrelevant. 5) Finally, we note that the CR injection and the CR streaming instabilities are found to be less e\ufb03cient at quasi-perpendicular shocks (e.g. Garat\u00b4 e & Spitkovsky 2012). It is recognized, however, streaming of CRs is facilitated through locally parallel inclination of turbulent magnetic \ufb01elds at the shock surface, so the CR injection can be e\ufb00ective even at quasi-perpendicular shocks in the presence of pre-existing large-scale MHD turbulence (Giacalone 2005; Zank et al. 2006). At oblique shocks the acceleration rate is faster and the di\ufb00usion coe\ufb03cient is smaller due to drift motion of particles along the shock surface (Jokipii 1987). 
In fact, the di\ufb00usion convection equation (1) should be valid for quasi-perpendicular shocks as long as there exists strong MHD turbulence su\ufb03cient enough to keep the pitch angle distribution of particles isotropic. In that case, the time-asymptotic states of the CR shocks should remain the same even for much smaller \u03ba(x, p), as mentioned in Section 2.5. In addition, the perpendicular current-driven instability is found to be e\ufb00ective at quasiperpendicular shocks (Riquelme & Spitkovsky 2010; Schure et al. 2012). Thus we expect that the overall conclusions drawn from this study should be applicable to all non-relativistic \f\u2013 15 \u2013 shocks, regardless of the magnetic \ufb01eld inclination angle, although our quantitative estimates for the CR injection and acceleration e\ufb03ciencies may not be generalized to oblique shocks with certainty. HK was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012-001065). DR was supported by the National Research Foundation of Korea through grant 2007-0093860. The authors would like to thank D. Capriloi, T. W. Jones, F. Vazza and the anonymous referee for the constructive suggestions and comments to the paper. HK also would like to thank Vahe Petrosian and KIPAC for their hospitality during the sabbatical leave at Stanford university where a part of the paper was written." + }, + { + "url": "http://arxiv.org/abs/1209.5203v1", + "title": "Diffusive shock acceleration with magnetic field amplification and Alfvenic drift", + "abstract": "We explore how wave-particle interactions affect diffusive shock acceleration\n(DSA) at astrophysical shocks by performing time-dependent kinetic simulations,\nin which phenomenological models for magnetic field amplification (MFA),\nAlfvenic drift, thermal leakage injection, Bohm-like diffusion, and a free\nescape boundary are implemented. If the injection fraction of cosmic-ray (CR)\nparticles is greater than 2x10^{-4}, for the shock parameters relevant for\nyoung supernova remnants, DSA is efficient enough to develop a significant\nshock precursor due to CR feedback, and magnetic field can be amplified up to a\nfactor of 20 via CR streaming instability in the upstream region. If scattering\ncenters drift with Alfven speed in the amplified magnetic field, the CR energy\nspectrum can be steepened significantly and the acceleration efficiency is\nreduced. Nonlinear DSA with self-consistent MFA and Alfvenic drift predicts\nthat the postshock CR pressure saturates roughly at 10 % of the shock ram\npressure for strong shocks with a sonic Mach number ranging 20< M_s< 100. Since\nthe amplified magnetic field follows the flow modification in the precursor,\nthe low energy end of the particle spectrum is softened much more than the high\nenergy end. As a result, the concave curvature in the energy spectra does not\ndisappear entirely even with the help of Alfvenic drift. 
For shocks with a\nmoderate Alfven Mach number (M_A<10), the accelerated CR spectrum can become as\nsteep as E^{-2.1}-E^{-2.3}, which is more consistent with the observed CR\nspectrum and gamma-ray photon spectrum of several young supernova remnants.", + "authors": "Hyesung Kang", + "published": "2012-09-24", + "updated": "2012-09-24", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Di\ufb00usive shock acceleration (DSA) theory explains how nonthermal particles are produced through their interactions with MHD waves in the converging \ufb02ows across collisionless shocks in astrophysical plasmas (Bell 1978; Drury 1983; Blandford & Eichler 1987). Theoretical studies have shown that some suprathermal particles with velocities large enough to swim against the downstream \ufb02ow can return across the shock and stream upstream, and that streaming motions of high energy particles against the background \ufb02uid generate both resonant and nonresonant waves upstream of the shock (Bell 1978; Lucek & Bell 2000; Bell 2004; Riquelme & Spitkovsky 2009; Rogachevskii et al. 2012). Those waves in turn scatter CR particles and amplify turbulent magnetic \ufb01elds in the preshock region. These plasma physical processes, i.e., injection of suprathermal particles into the CR population, selfexcitation of MHD waves, and ampli\ufb01cation of magnetic \ufb01elds are all integral parts of DSA (e.g., Malkov & Drury 2001). Multi-band observations of nonthermal radio to \u03b3ray emissions from supernova remnant (SNR) shocks have con\ufb01rmed the acceleration of CR electrons and protons up to \u223c100 TeV (e.g., Abdo et al. 2010, 2011; Acero et al. 2010; Acciari et al. 2011; Giordano et al. 2012). Moreover, thin rims of several young SNRs in high-resolution X-ray observations indicate the presence of downstream magnetic \ufb01elds as strong as a few 100\u00b5G, implying e\ufb03cient magnetic \ufb01eld ampli\ufb01cation (MFA) at these shocks (e.g., Parizot et al. 2006; Eriksen et al. 2011; Reynolds et al. 2012). The most attractive feature of the DSA theory is the simple prediction of power-law energy spectra of CRs, N(E) \u221dE\u2212(\u03c3+2)/(\u03c3\u22121) (where \u03c3 is the shock compression ratio) in the test particle limit. For strong, adiabatic gas shocks with \u03c3 = 4, this gives a power-law index of 2, which is reasonably close to the observed \u2018universal\u2019 index of the CR spectra in many environments. However, nonlinear treatments of DSA predict that at strong shocks there are highly nonlinear back-reactions from CRs to the underlying \ufb02ow, creating a shock precursor (e.g., Berezhko & V\u00a8 olk 1997; Kang & Jones 2007). So the particles just above the injection momentum (pinj) sample mostly the compression across the subshock (\u03c3s), while those near the highest momentum (pmax) experience the greater, total compression across the entire shock structure (\u03c3t). This leads to the CR energy spectrum that behaves as N(E) \u221dE\u2212(\u03c3s+2)/(\u03c3s\u22121) for p \u223cpinj, but \ufb02attens gradually to N(E) \u221dE\u2212(\u03c3t+2)/(\u03c3t\u22121) toward p \u223cpmax (Kang et al. 2009). For example, the power-law index \u2013 111 \u2013 \f112 H. KANG becomes 1.5 for \u03c3t = 7. In contrast to such expectations, however, the GeVTeV \u03b3-ray spectra of several young SNRs seem to require the proton spectrum as steep as N(E) \u221dE\u22122.3, if the observed \u03b3-ray photons indeed originate from \u03c00 decay (Abdo et al. 
2010; Giordano et al. 2012). This is even softer than the test-particle power-law for strong shocks. Moreover, Ave et al. (2009) showed that the spectrum of CR nuclei observed at Earth can be \ufb01tted by a single power law of J(E) \u221dE\u22122.67 below 1014 eV. Assuming an energy-dependent propagation path length (\u039b \u221dE\u22120.6), they suggested that a soft source spectrum, N(E) \u221dE\u2212\u03b1 with \u03b1 \u223c2.3 \u22122.4 is preferred by the observed data. These observational data appear to be inconsistent with \ufb02at CR spectra predicted by nonlinear DSA model for the SNR origin of Galactic CRs. It has been suggested that non-linear wave damping and wave dissipation due to ion-neutral collisions may weaken the stochastic scattering on relevant scales, leading to slower acceleration than predicted based on the the so-called Bohm-like di\ufb00usion, and escape of the highest energy particles from the shock (e.g. Ptuskin & Zirakashvili 2005; Caprioli et al. 2009). These processes may lead to the particle energy spectrum at the highest energy end that is much steeper than predicted by nonlinear DSA. Escape of high energy protons from SNRs is an important yet very complex problem that needs further investigation (Malkov et al. 2011; Drury 2011). Recently some serious e\ufb00orts have been underway to understand at least some of the complex plasma processes through Particle-in-Cell (PIC) and hybrid plasma simulations (e.g. Riquelme & Spitkovsky 2009; Guo et al. 2012; Garat\u00b4 e & Spitkovsky 2012). However, these types of plasma simulations are too much demanding and too expensive to study the full extent of the DSA problem. So we do not yet understand them in enough detail to make precise quantitative predictions for the injection and acceleration rate and e\ufb03ciency. Instead, most of kinetic approaches commonly adopt phenomenological models that can emulate more or less self-consistently some of those plasma interactions, for example, the thermal leakage injection, magnetic \ufb01eld ampli\ufb01cation, wave-damping and etc (e.g., Berezhko et al. 2009; Kang 2010; Ptuskin et al. 2010; Lee et al. 2012; Caprioli 2012). In our previous studies, we considered DSA of CR protons, assuming that magnetic \ufb01eld strength is uniform in space and constant in time without selfconsistent MFA (e.g. Kang & Jones 2007; Kang et al. 2009). In the present paper, we explore how the following processes a\ufb00ect the energy spectra of CR protons and electrons accelerated at plane astrophysical shocks: 1) magnetic \ufb01eld ampli\ufb01ed by CR streaming instability in the precursor 2) drift of scattering centers with Alfv\u00b4 en speed in the ampli\ufb01ed magnetic \ufb01eld, and 3) escape of highest energy particles from the shock. Toward this end we have performed timedependent numerical simulations, in which DSA of CR protons and electrons at strong planar shocks is followed along with electronic synchrotron and inverse Compton (IC) losses. Magnetic \ufb01eld ampli\ufb01cation due to resonant waves generated by CR streaming instability is included through an approximate, analytic model suggested by Caprioli (2012). Escape of highest energy particles near maximum momentum, pmax, is included by implementing a free escape boundary (FEB) at a upstream location. As in our previous works (e.g. Kang 2010, 2011), a thermal leakage injection model, a Bohm-like di\ufb00usion coe\ufb03cient (\u03ba(p) \u221dp), and a model for wave dissipation and heating of the gas are adopted as well. 
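The test-particle DSA indices quoted in the Introduction above follow from the compression ratio alone; a minimal helper (the name is ours) reproduces the N(E) ~ E^-2 result for sigma = 4 and the E^-1.5 behaviour quoted for a total compression of sigma_t = 7.

```python
def energy_spectral_index(sigma):
    """Test-particle DSA index: N(E) ~ E^-gamma with gamma = (sigma + 2)/(sigma - 1)."""
    return (sigma + 2.0) / (sigma - 1.0)

print(energy_spectral_index(4.0))   # 2.0 : strong adiabatic shock
print(energy_spectral_index(7.0))   # 1.5 : CR-modified total compression, high-momentum end
```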
In the next section we describe the numerical method and phenomenological models for the plasma interactions in DSA theory, and the model parameters for planar shocks. The simulation results will be discussed in Section 3, followed by a brief summary in Section 4. 2. DSA MODEL 2.1 CRASH Code for DSA Here we consider the CR acceleration at quasiparallel shocks where the mean background magnetic \ufb01eld lines are parallel to the shock normal. So we solve the standard gasdynamic equations with CR proton pressure terms added in the conservative, Eulerian formulation for one dimensional plane-parallel geometry. The basic gasdynamic equations and details of the CRASH (Cosmic-Ray Amr SHock) code can be found in Kang et al. (2002) and Kang (2011). We solve the following di\ufb00usion-convection equations for the pitch-angle-averaged phase space distribution function for CR protons, gp = fpp4, and for CR electron, ge = fep4 (Skilling 1975): \u2202g \u2202t + (u + uw)\u2202g \u2202x = 1 3 \u2202 \u2202x(u + uw) \u0012\u2202g \u2202y \u22124g \u0013 + \u2202 \u2202x \u0014 \u03ba(x, y)\u2202g \u2202x \u0015 + p \u2202 \u2202y \u0012 b p2 g \u0013 , (1) where y = ln(p/mpc). Here the particle momentum is expressed in units of mpc and so the spatial di\ufb00usion coe\ufb03cient, \u03ba(x, p), has the same form for both protons and electrons. The velocity uw represents the e\ufb00ective relative motion of scattering centers with respect to the bulk \ufb02ow velocity, u, which will be described in detail in section 2.5. The cooling term b(p) = \u2212dp/dt takes account for electron synchrotron/IC losses, while it is set to be b(p) = 0 for protons. Here the synchrotron/IC cooling constant for electrons is de\ufb01ned as b(p) = 4e4 9m4 ec6 B2 ep2 (2) \fDIFFUSIVE SHOCK ACCELERATION 113 in cgs units, where e and me are electron charge and mass, respectively. Here B2 e = B2 + B2 r as the e\ufb00ective magnetic \ufb01eld strength for radiative losses including the energy density of the ambient radiation \ufb01eld. We set Br = 6.5 \u00b5G, including the cosmic background and mean Galactic radiation \ufb01elds (Edmon et al. 2011). The dynamical e\ufb00ects of the CR proton pressure are included in the DSA simulations, while the CR electrons are treated as test-particles. In order to include the dynamical e\ufb00ects of ampli\ufb01ed magnetic \ufb01eld, the magnetic pressure, PB = B2/8\u03c0, is added to the momentum conservation equation as follows: \u2202(\u03c1u) \u2202t + \u2202(\u03c1u2 + Pg + Pc + PB) \u2202x = 0. (3) However, our model magnetic \ufb01eld ampli\ufb01cation typically results in PB/\u03c10u2 s < 0.01 in the precursor, where \u03c10u2 s is the shock ram pressure (see Section 2.4). 2.2 Thermal Leakage Injection The injection rate with which suprathermal particles are injected into CRs at the subshock depends in general upon the shock Mach number, \ufb01eld obliquity angle, and strength of Alfv\u00b4 enic turbulence responsible for scattering. In thermal leakage injection models suprathermal particles well into the exponential tail of the postshock Maxwellian distribution leak upstream across a quasi-parallel shock (Malkov & Drury 2001; Kang et al. 2002). 
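The synchrotron/IC loss rate of Eq. (2) and the corresponding cooling time p/b(p) can be checked numerically. The snippet below uses standard cgs constants; the example momentum and field strength (p ~ 1.3e3 m_p c in Be ~ 100 microgauss) are chosen only because they recover the ~1e3 yr break time scale quoted later in Eq. (16), and the function names are ours.

```python
E_CGS, M_E, M_P, C = 4.8032e-10, 9.1094e-28, 1.6726e-24, 2.9979e10   # cgs constants

def b_cool(p_over_mpc, Be_muG):
    """Synchrotron + inverse-Compton loss rate b(p) = -dp/dt of Eq. (2) [g cm s^-2].
    Be_muG is the effective field sqrt(B^2 + B_r^2) in microgauss."""
    p = p_over_mpc * M_P * C
    Be = Be_muG * 1.0e-6
    return 4.0 * E_CGS**4 / (9.0 * M_E**4 * C**6) * Be**2 * p**2

def t_cool_yr(p_over_mpc, Be_muG):
    """Electron cooling time p / b(p) in years."""
    return (p_over_mpc * M_P * C) / b_cool(p_over_mpc, Be_muG) / 3.156e7

print(t_cool_yr(1.3e3, 100.0))   # ~ 1e3 yr
```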
We adopt a simple injection scheme in which the particles above an e\ufb00ective injection momentum pinj cross the shock and get injected to the CR population: pinj \u22481.17mpu2 \u0012 1 + 1.07 \u01ebB \u0013 , (4) where the injection parameter, \u01ebB = B0/B\u22a5, is the ratio of the large-scale magnetic \ufb01eld along the shock normal, B0, to the amplitude of the postshock MHD wave turbulence, B\u22a5(Kang et al. 2002). With a larger value of \u01ebB (i.e., weaker turbulence), pinj is smaller, which results in a higher injection rate. We consider \u01ebB = 0.23 here. We de\ufb01ne the injection e\ufb03ciency as the fraction of particles that have entered the shock from far upstream and then injected into the CR distribution: \u03be(t) = R dx R \u221e pinj 4\u03c0fp(p, x, t)p2dp n0ust (5) where n0 is the particle number density far upstream and us is the shock speed. Since postshock thermal electrons need to be preaccelerated before they can be injected into Fermi process, it is expected that electrons are injected at the shock with a much smaller injection rate, i.e., the CR electron-to-proton ratio is estimated to be small, Ke/p \u223c10\u22124 \u221210\u22122 (Reynolds 2008; Morlino & Caprioli 2012). Since this ratio is not yet constrained accurately by plasma physics and we do not consider nonthermal emissions from CR particles in this paper, both protons and electrons are injected in the same manner in our simulations (i.e. basically Ke/p = 1). But Ke/p = 0.1 will be used just for clarity of some \ufb01gures below. 2.3 Bohm-like Di\ufb00usion Model It is assumed that CR particles are resonantly scattered by Alfv\u00b4 en waves, which are excited by CR streaming instability in the upstream region and then advected and compressed in the down stream region (Bell 1978; Lucek & Bell 2000). So in DSA modeling the Bohm di\ufb00usion model, \u03baB = (1/3)rgv, is commonly used to represent a saturated wave spectrum. We adopt a Bohm-like di\ufb00usion coe\ufb03cient that includes a \ufb02attened non-relativistic momentum dependence, \u03ba(x, p) = \u03ban B0 B(x) \u00b7 p mpc, (6) where \u03ban = mpc3/(3eB0) = (3.13 \u00d7 1022cm2s\u22121)B\u22121 0 , and B0 is the magnetic \ufb01eld strength far upstream expressed in units of microgauss. The local magnetic \ufb01eld strength, B(x), will be described in the next section. Hereafter we use the subscripts \u20180\u2019, \u20181\u2019, and \u20182\u2019 to denote conditions far upstream of the shock, immediately upstream and downstream of the subshock, respectively. 2.4 Magnetic Field Ampli\ufb01cation Since the resonant interactions amplify mainly the turbulent magnetic \ufb01eld perpendicular to the shock normal in the quasi-linear limit, it was commonly assumed that the parallel component is B\u2225\u2248B0, the unperturbed upstream \ufb01eld (Caprioli et al. 2009). In a strong MFA case, however, the wave-particle interaction and the CR transport are not yet understood fully. For example, plasma simulations by Riquelme & Spitkovsky (2009) showed that both B\u2225/B0 and B\u22a5/B0 can increase to \u223c10 \u221230 via Bell\u2019s CR current driven instability. Here we follow the prescription for MFA that was formulated by Caprioli (2012) based on the assumption of isotropization of ampli\ufb01ed magnetic \ufb01eld. 
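The injection efficiency xi(t) of Eq. (5) is a double integral over the simulated proton distribution; the following is a minimal numerical sketch assuming f(p, x) is available on simple 1D grids. The array layout and helper names are assumptions for illustration, not the actual CRASH data structures.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (kept explicit to avoid NumPy version differences)."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def injection_efficiency(x, p, f_px, n0, us, t, p_inj):
    """Estimate xi(t) of Eq. (5): injected CR number per particle swept up by the shock.
    x [cm], p [g cm/s]: grids;  f_px: f(p, x) sampled on a (len(p), len(x)) array;
    n0 [cm^-3], us [cm/s], t [s]: far-upstream density, shock speed, shock age."""
    mask = p >= p_inj
    # CR number density at each position: n_cr(x) = 4*pi * integral of f p^2 dp above p_inj
    n_cr = 4.0 * np.pi * np.array(
        [_trapz(f_px[mask, j] * p[mask] ** 2, p[mask]) for j in range(len(x))])
    return _trapz(n_cr, x) / (n0 * us * t)
```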
In the upstream region (x > xs), B(x)2 B2 0 = 1 + (1 \u2212\u03c9H) \u00b7 4 25M 2 A,0 (1 \u2212U(x)5/4)2 U(x)3/2 , (7) where MA,0 = us/vA,0 is the Alfv\u00b4 en Mach number for the far upstream Alfv\u00b4 en speed, vA,0 = B0/\u221a4\u03c0\u03c10, and U(x) = [us \u2212u(x)]/us is the \ufb02ow speed in the shock rest frame normalized by the shock speed. The factor (1 \u2212\u03c9H) is introduced to take account of the loss of magnetic energy due to wave dissipation, which will be discussed in Section 2.5. Obviously, \u03c9H = 0 means no \f114 H. KANG dissipation, while \u03c9H = 1 means complete dissipation of waves (i.e., no MFA). Here \u03c9H = 0.1 will be considered as a \ufb01ducial case, since we are interested in the case where the e\ufb00ects of MFA and ensuing wave drift are the greatest. This MFA model predicts no ampli\ufb01cation in the test-particle regime, where the \ufb02ow structure is not modi\ufb01ed (i.e., U(x) = 1). In the case of \u201cmoderately modi\ufb01ed\u201d shocks, for example, if U1 \u22480.8 and \u03c9H = 0.1, the ampli\ufb01ed magnetic \ufb01eld strength scales as B1/B0 \u22480.11MA,0. So for MA,0 \u2248150, the preshock ampli\ufb01cation factor could become B1/B0 \u2248 17. On the other hand, the ratio of the magnetic pressure to the shock ram pressure becomes PB,1/\u03c10u2 s = (2/25)(1\u2212U 5/4 1 )2/U 3/2 1 \u22486.6\u00d710\u22123. So we expect that even the ampli\ufb01ed \ufb01eld is not dynamically important in the precursor. The magnetic \ufb01led strength immediately upstream of the subshock, B1, is estimated by Equation (7) and assumed to be completely turbulent. Moreover, assuming that the two perpendicular components are simply compressed across the subshock, the immediate postshock \ufb01eld strength can be estimated by B2/B1 = p 1/3 + 2/3(\u03c12/\u03c11)2. (8) So for the case with \u03c12/\u03c11 \u22484.2, B2/B1 \u22483.5. Then we assume in the downstream region the \ufb01eld strength scales with the gas density: B(x) = B2 \u00b7 [\u03c1(x)/\u03c12] . (9) We note the MFA model described in Equations (7)(9) is used also for the di\ufb00usion coe\ufb03cient model given in Equation (6). Hence the maximum momentum pmax is controlled by the degree of MFA as well. 2.5 Alfv\u00b4 enic Drift The resonant waves generated by CR streaming instability will drift with respect to the underlying \ufb02ow and also transfer energy to the gas through dissipation (e.g. Skilling 1975; Jones 1993). These two e\ufb00ects in\ufb02uence the accelerated particle spectrum and the DSA e\ufb03ciency as follows. The scattering by Alfv\u00b4 en waves tends to isotropize the CR distribution in the wave frame rather than in the gas frame (Bell 1978), which reduces the velocity di\ufb00erence between upstream and downstream scattering centers, compared to that of the bulk \ufb02ow. The resulting CR spectrum becomes softer than estimated without considering the wave drift. The mean drift speed of scattering centers is commonly set to be the Alfv\u00b4 en speed, i.e., uw,1(x) = +vA = +B\u2225/\u221a4\u03c0\u03c1, pointing away from the shock, where B\u2225is the local magnetic \ufb01eld strength parallel to the shock normal. As described in Equation (7) here we assume that both B\u2225and B\u22a5are ampli\ufb01ed and isotropized, so scattering centers drift with Alfv\u00b4 en speed in the local ampli\ufb01ed magnetic \ufb01eld. 
In order to take account of the uncertainty regarding this issue, we model the local Alfvén speed as

v_A(x) = \frac{B_0 + [B(x) - B_0] f_A}{\sqrt{4\pi\rho(x)}},   (10)

where f_A is a free parameter (Zirakashvili & Ptuskin 2008; Lee et al. 2012). If scattering centers drift along the amplified field (f_A = 1), the Alfvénic drift will have the maximum effects. Here we will consider the models with f_A = 0.5 - 1.0 (see Table 1). In the postshock region the Alfvénic turbulence is probably relatively balanced, so the wave drift can be ignored, that is, u_{w,2} ≈ 0 (Jones 1993). On the other hand, if the scattering centers drift away from the shock in both upstream and downstream regions, the accelerated particle spectrum could be softened drastically (e.g., Zirakashvili & Ptuskin 2008). We will consider one model (H2d) in which u_{w,2} ≈ -v_A is adopted in the downstream of the shock (see Table 1).

As mentioned in the Introduction, the CR spectrum develops a concave curvature when the preshock flow is modified by the CR pressure. If we include the Alfvénic drift only in the upstream flow, the slope of the momentum distribution function, q = -\partial \ln f/\partial \ln p, can be estimated as

q_s \approx \frac{3(u_1 - u_{w,1})}{(u_1 - u_{w,1}) - u_2} \approx \frac{3\sigma_s (1 - M_{A,1}^{-1})}{\sigma_s (1 - M_{A,1}^{-1}) - 1}   (11)

for p ~ p_inj, and

q_t \approx \frac{3(u_0 - u_{w,0})}{(u_0 - u_{w,0}) - u_2} \approx \frac{3\sigma_t (1 - M_{A,0}^{-1})}{\sigma_t (1 - M_{A,0}^{-1}) - 1}   (12)

for p ~ p_max. Here M_{A,1} = u_1/v_{A,1} is the Alfvénic Mach number immediately upstream of the subshock. As can be seen in these equations, a significant steepening will occur only if M_A ≲ 10 (Caprioli 2012). According to the MFA prescription given in Equation (7), the amplification factor depends on the precursor modification, that is, the ratio B(x)/B_0 is unity far upstream and increases through the precursor toward the subshock. So the Alfvénic drift speed is highest immediately upstream of the subshock, while it is the same as the unperturbed Alfvén speed, v_{A,0}, at the far upstream region (M_{A,1} ≪ M_{A,0}). Thus the Alfvénic drift is expected to steepen preferentially the lower energy end of the CR spectrum, since the lowest energy particles diffuse mostly near the subshock. For the highest energy particles, which diffuse over the distance of ~κ(p_{p,max})/u_s, however, the Alfvénic drift does not steepen the CR spectrum significantly, if M_{A,0} ≫ 1.

2.6 Wave Dissipation and Particle Escape

As discussed in the Introduction, non-linear wave damping and dissipation due to ion-neutral collisions may weaken the stochastic scattering, leading to slower acceleration and escape of the highest energy particles from the shock. These processes are not understood quantitatively well, so we adopt a simple model in which waves are dissipated locally as heat in the precursor. Then the gas heating term in the upstream region is prescribed as

W(x, t) = -\omega_H \cdot v_A(x) \frac{\partial P_c}{\partial x},   (13)

where P_c is the CR pressure (Jones 1993). The parameter ω_H is introduced to control the degree of wave dissipation and a fiducial value of ω_H = 0.1 is assumed. As shown previously in SNR simulations (e.g., Berezhko & Völk 1997; Kang & Jones 2006), this precursor heating reduces the subshock Mach number, thereby reducing the DSA efficiency. For larger values of ω_H, the magnetic field amplification is suppressed (see Equation (7)), which also reduces the maximum momentum of protons and so the DSA efficiency.

In addition, we implement a free escape boundary (FEB) at an upstream location by setting f(x_FEB, p) = 0 at x_FEB = 0.1 R_s = 0.3 pc (here the shock is located at x_s = 0). This FEB condition can mimic the escape of the highest energy particles with the diffusion length κ(p)/u_s ≳ x_FEB. For typical supernova remnant shocks, this FEB leads to the size-limited maximum momentum,

\frac{p_{p,max}}{m_p c} \approx 4.4 \times 10^4 \left(\frac{B_0}{5\,\mu G}\right) \left(\frac{u_s}{3000\ km\ s^{-1}}\right) \left(\frac{x_{FEB}}{0.3\ pc}\right).   (14)

As can be seen in Section 3, the CR proton spectrum and the shock structure approach time-asymptotic states, if this FEB is employed (Kang et al. 2009). On the other hand, the maximum electron momentum can be estimated by

\frac{p_{e,max}}{m_p c} \approx 2.8 \times 10^4 \left(\frac{B_1}{30\ \mu G}\right)^{-1/2} \left(\frac{u_s}{3000\ km\ s^{-1}}\right),   (15)

which is derived from the equilibrium condition that the DSA momentum gains per cycle are equal to the synchrotron/IC losses per cycle (Kang 2011). The electron spectrum at the shock position, f_e(x_s, p), cuts off exponentially at ~p_{e,max}. On the other hand, the postshock electron spectrum cuts off at a progressively lower momentum downstream from the shock due to the energy losses. That results in the steepening of the volume-integrated electron energy spectrum, F_e(p) = \int f_e(x, p)\,dx, by one power of the momentum (Kang et al. 2012). At the shock age t, the break momentum can be estimated from the condition t = p/b(p):

\frac{p_{e,br}(t)}{m_p c} \approx 1.3 \times 10^3 \left(\frac{t}{10^3\ yr}\right)^{-1} \left(\frac{B_{e,2}}{100\ \mu G}\right)^{-2},   (16)

which depends only on the postshock magnetic field strength and the shock age (Kang 2011).

2.7 Planar Shock Parameters

We consider planar shocks with u_s = 1000 - 4500 km s^{-1}, propagating into a uniform ISM magnetized with B_0 = 5 µG. The model parameters are summarized in Table 1.

Table 1. Model Parameters^a

Model^b   u_s (km s^-1)   n_H (ISM) (cm^-3)   T_0 (K)      M_s    M_A,0   f_A^c   ω_H^d   u_w,1   u_w,2
W1a       3 × 10^3        1.0                 4.0 × 10^4   100    164.    1.0     0.1     +v_A    0
W1b       3 × 10^3        1.0                 4.0 × 10^4   100    164.    1.0     0.1     0       0
H1a       3 × 10^3        1.0                 10^6         20     164.    1.0     0.1     +v_A    0
H1b       3 × 10^3        1.0                 10^6         20     164.    1.0     0.1     0       0
H1c       3 × 10^3        1.0                 10^6         20     164.    0.5     0.5     +v_A    0
H2a       3 × 10^3        0.01                10^6         20     16.4    1.0     0.1     +v_A    0
H2b       3 × 10^3        0.01                10^6         20     16.4    1.0     0.1     0       0
H2d       3 × 10^3        0.01                10^6         20     16.4    1.0     0.1     +v_A    -v_A
H3a       10^3            0.01                10^6         6.67   5.46    1.0     0.1     +v_A    0
H3b       10^3            0.01                10^6         6.67   5.46    1.0     0.1     0       0
H4a       4.5 × 10^3      0.01                10^6         30     24.6    1.0     0.1     +v_A    0
H4b       4.5 × 10^3      0.01                10^6         30     24.6    1.0     0.1     0       0

^a For all the models the background magnetic field is B_0 = 5 µG and the injection parameter is ε_B = 0.23.
^b 'W' and 'H' stand for the warm and hot phase of the ISM, respectively.
^c See Equation (10) for the Alfvén parameter.
^d See Equation (13) for the wave dissipation parameter.
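To give a sense of the momentum scales set by the free escape boundary and by radiative cooling, the following short Python sketch evaluates the scalings of Equations (14)-(16). It only restates the quoted scalings; the function names and default arguments are ours, not part of the simulation code used here.

```python
# Minimal sketch of the scalings in Equations (14)-(16); momenta are in units of m_p c.

def p_p_max(B0_muG=5.0, us_kms=3000.0, x_feb_pc=0.3):
    """Size-limited proton maximum momentum set by the free escape boundary, Eq. (14)."""
    return 4.4e4 * (B0_muG / 5.0) * (us_kms / 3000.0) * (x_feb_pc / 0.3)

def p_e_max(B1_muG=30.0, us_kms=3000.0):
    """Loss-limited electron maximum momentum, Eq. (15)."""
    return 2.8e4 * (B1_muG / 30.0) ** -0.5 * (us_kms / 3000.0)

def p_e_br(t_yr=1.0e3, Be2_muG=100.0):
    """Cooling break of the volume-integrated electron spectrum, Eq. (16)."""
    return 1.3e3 * (t_yr / 1.0e3) ** -1 * (Be2_muG / 100.0) ** -2

print(p_p_max())               # ~4.4e4 for B0 = 5 muG, u_s = 3000 km/s, x_FEB = 0.3 pc
print(p_e_max(B1_muG=100.0))   # drops as B1^(-1/2) when the preshock field is amplified
print(p_e_br(t_yr=1.0e4))      # the break moves to lower momenta as the shock ages
```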
Previous studies have shown that the shock sonic Mach number is one of the key parameters governing the evolution and the DSA efficiency (e.g., Kang & Jones 2007; Kang et al. 2009), so here two phases of the ISM are considered: the warm phase with T_0 = 4 × 10^4 K (W models), and the hot phase with T_0 = 10^6 K (H models). The sonic Mach number of each model is given as M_s = 20 (T_0/10^6 K)^{-1/2} u_{3000}, where u_{3000} = u_s/(3000 km s^{-1}). Two values of the gas density, n_H = 0.01 cm^{-3} and 1 cm^{-3}, are considered. The upstream Alfvén speed is then v_{A,0} = B_0/\sqrt{4\pi\rho_0} = (18.3 km s^{-1}) n_H^{-1/2}, so the Alfvénic Mach number is M_{A,0} = u_s/v_{A,0} = 164 \sqrt{n_H}\, u_{3000}.

We consider W1a, H1a, H2a, H3a and H4a models as fiducial cases with canonical values of model parameters: f_A = 1.0 and ω_H = 0.1. In models H1b, H2b, H3b and H4b, Alfvénic drift is turned off (u_{w,1} = 0) for comparison. But we note that these models are not self-consistent with our MFA model, which assumes that Alfvén waves propagate along the amplified magnetic field. In the H1c model, MFA is reduced by setting f_A = 0.5 and ω_H = 0.5. Model H2d is chosen to see the effects of Alfvénic drift in the postshock region.

The physical quantities are normalized in the numerical code and in the plots below by the following characteristic values: u_n = u_s, x_n = R_s = 3 pc, t_n = x_n/u_n = (978 yr) u_{3000}, κ_n = u_n x_n, ρ_n = (2.34 × 10^{-24} g cm^{-3}) · n_H, and P_n = ρ_n u_n^2 = (2.11 × 10^{-7} erg cm^{-3}) · n_H u_{3000}^2.

3. DSA SIMULATION RESULTS

[Fig. 1. Time evolution of the magnetic field strength, CR pressure, gas density, and volume-integrated distribution functions of protons (G_p) and electrons (G_e) for H1a (solid lines) and H1b (dotted lines) models at t/t_n = 0.1, 2.5 and 5. See Table 1 for other model parameters and normalization constants. In the bottom right panel the upper curves are for the proton spectra, while the lower curves are for the electron spectra. Note that both G_p/t and G_e/t are given in arbitrary units and K_{e/p} = 0.1 is adopted here for clarity.]

[Fig. 2. Same as Figure 1 except that H2a (solid lines) and H2b (dotted lines) are shown.]

Figures 1-3 show the spatial profiles of the model magnetic field, CR pressure, gas density, and the volume-integrated distribution functions of protons (G_p = \int g_p(x, p)\,dx) and electrons (G_e = \int g_e(x, p)\,dx) for H1a and H1b (M_{A,0} = 164), H2a and H2b (M_{A,0} = 16.4), and H3a and H3b (M_{A,0} = 5.46) models, respectively. In these simulations, the highest level of refinement is l_{g,max} = 8 and the factor of each refinement is two, so the ratio of the finest grid spacing to the base grid spacing is Δx_8/Δx_0 = 1/256 (Kang et al. 2002). Since Figures 1-3 show the flow structure on the base grid, the precursor profile may appear to be resolved rather poorly here. More accurate values of the precursor density compression (ρ_1) and magnetic field amplification (B_1) can be found in Figure 4 below. Also note that the FEB is located at x_FEB = 0.3 pc and we use K_{e/p} = 0.1 here in order to show the proton and electron spectra together. These figures demonstrate that 1) the shock structure reaches the time-asymptotic state and evolves in a self-similar fashion for t/t_n ≳ 0.1 (Kang & Jones 2007).
2) Also the proton spectrum approaches the steady state for t/tn \u223c > 0.5 due to the FEB, while the electron spectrum continues to cool down in the downstream region. 3) Magnetic \ufb01eld is ampli\ufb01ed by a greater factor for a higher MA,0. 4) Alfv\u00b4 enic drift steepens the CR spectrum, and reduces the CR acceleration e\ufb03ciency and \ufb02ow modi\ufb01cation by CR feedback, resulting in lesser MFA. 5) At low energies the CR spectra are much steeper than the test-particle power-law due to the velocity pro\ufb01le and magnetic \ufb01eld structure in the precursor. In H1a model with Alfv\u00b4 enic drift (solid lines), the gas density immediately upstream of the subshock increases to \u03c11/\u03c10 \u22481.2, while the total compression ratio becomes \u03c12/\u03c10 \u22484.5. So the \ufb02ow structure is moderately modi\ufb01ed by the CR pressure feedback: U1 \u22480.9 and Pc,2/\u03c10u2 s \u22480.12. Since the Alfv\u00b4 en Mach number is high (MA,0 = 164), the self-ampli\ufb01ed magnetic \ufb01eld strength, based on the model in equation (7), increases to B1 \u2248100\u00b5G, which results in the immediate postshock \ufb01elds of B2 \u2248350\u00b5G. Compared to the model without Alfv\u00b4 enic drift (H1b, dotted lines), the CR distribution functions are softer. Although Alfv\u00b4 enic drift steepens the distribution function in H1a model, Gp(p) still exhibits a signi\ufb01cant concave curvature and it is slightly \ufb02atter than the test-particle power-law (E\u22122) at the highest energy end. This is because MA,0 = 164 is too large to induce the signi\ufb01cant enough softening for p \u223cpp,max (see Equation (12)). Note that pp,max is lower in H1a model than that in H1b model, because of weaker MFA. The structures of the integrated electron spectra are complex for p \u223c < pe,max. As shown in Equation (16), the volume integrated electron energy spectrum steepens by one power of the momentum due to radiative cooling. One can see that the break momentum, pbr(t), shifts to lower momenta in time. The peak near pe,max comes from the electron population in the upstream region, which cools much less e\ufb03ciently due to weaker magnetic \ufb01eld there (Edmon et al. 2011). Since MFA is much stronger in H1b model, compared to H1a model, the electron spectrum cools down to lower momentum \f118 H. KANG Fig. 3.\u2014 Same as Figure 1 except that H3a (solid lines) and H3b (dotted lines) are shown. in the downstream region. Comparing H2a/b in Figure 2 with H1a/b in Figure 2, one can see that the degree of shock modi\ufb01cation is similar in the these models. Because of a lower gas density (nH = 0.01 cm\u22123) in H2a/b models, the Alfv\u00b4 en Mach number is smaller (MA,0 = 16.4) and so MFA is much less e\ufb03cient, compared to H1a/b models. In H2a model the ampli\ufb01ed preshock \ufb01eld increases to only B1 \u224810 \u00b5G, while the postshock \ufb01eld reaches B2 \u224835 \u00b5G. Because of much weaker magnetic \ufb01eld, compared to H1a/b model, the electron spectra are affected much less by radiative cooling. Since H3a/b models in Figure 3 have a lower sonic Mach Number (Ms = 6.7), the \ufb02ow structures are almost test-particle like with B1 \u2248B0, \u03c12/\u03c10 \u22484, and Pc,2/\u03c10u2 s \u22480.05 \u22120.13. So the CR acceleration, \ufb02ow modi\ufb01cation, and MFA are all less e\ufb03cient, compared to H1a/b and H2a/b models. 
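As a quick numerical check of how the quoted amplification factors and spectral slopes follow from the adopted prescriptions, the sketch below evaluates Equations (7), (8) and (11). The worked numbers (U_1 ≈ 0.8, M_{A,0} ≈ 150, ρ_2/ρ_1 ≈ 4.2) are the ones quoted alongside those equations; the function names are ours, not from the simulation code.

```python
import math

def b1_over_b0(U1, MA0, omega_H=0.1):
    """Precursor amplification factor at the subshock, from Eq. (7)."""
    return math.sqrt(1.0 + (1.0 - omega_H) * (4.0 / 25.0) * MA0**2
                     * (1.0 - U1**1.25)**2 / U1**1.5)

def b2_over_b1(rho2_over_rho1):
    """Compression of a fully turbulent field across the subshock, Eq. (8)."""
    return math.sqrt(1.0 / 3.0 + (2.0 / 3.0) * rho2_over_rho1**2)

def q_s(sigma_s, MA1):
    """Drift-modified slope at p ~ p_inj, Eq. (11)."""
    x = sigma_s * (1.0 - 1.0 / MA1)
    return 3.0 * x / (x - 1.0)

print(round(b1_over_b0(0.8, 150.0), 1))   # ~16-17, cf. B1/B0 ~ 17 quoted with Eq. (7)
print(round(b2_over_b1(4.2), 2))          # ~3.5, cf. the value quoted with Eq. (8)
for MA1 in (164.0, 16.4, 5.0):            # appreciable steepening only for M_A,1 <~ 10
    print(MA1, round(q_s(4.0, MA1), 2))
```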
In H3a model, especially, the CR spectra are as steep as E\u22122.1 \u2212E\u22122.3 and electrons do not su\ufb00er signi\ufb01cant cooling. Figure 4 shows how various shock properties change in time for di\ufb00erent models: the CR injection fraction, postshock CR pressure, density compression factors and magnetic \ufb01eld strengths. As discussed above, the magnetic \ufb01eld ampli\ufb01cation is more e\ufb03cient in the models with higher MA,0: B1/B0 \u224820 for MA,0 = 164 (H1a, H1c, W1a models), B1/B0 \u22483 for MA,0 = 24.6 (H4a), B1/B0 \u22482 for MA,0 = 16.4 (H2a), B1/B0 \u22481 for MA,0 = 5.46 (H3a). According to previous studies, nonlinear DSA without self-consistent MFA and Alfv\u00b4 enic drift predicts that the DSA e\ufb03ciency depends strongly on the sonic Mach number Ms and the CR pressure asymptotes to Pc,2/\u03c10u2 s \u223c0.5 for Ms \u223c > 20. (Kang & Jones 2007; Kang et al. 2009). However, Figure 4 shows that in the models with MFA and Alfv\u00b4 enic drift the CR acceleration and MFA are reduced in such a manner that the DSA e\ufb03ciency saturates roughly at Pc,2/\u03c10u2 s \u223c0.1 for 20 \u223c < Ms \u223c < 100. We can see that models with a wide range of sonic Mach number, i.e. W1a(Ms = 100), H4a (Ms = 30), H2a and H1a (Ms = 20), all have similar results: \u03c12/\u03c10 \u22484.5, and Pc,2/\u03c10u2 s \u22480.1. Figures 5 and 6 show the volume-integrated distribution function, Gp(p)/(n0ust), for protons and, Ge(p)/(n0ust), for electrons, respectively, for di\ufb00erent models. Again the proton spectrum approaches to the steady state for t/tn \u223c > 0.5, when pp,max(t) satis\ufb01es the condition, \u03ba(pp,max)/us \u223cxFEB. We note that for W1b model only the curve at t/tn = 0.1 is shown, because the simulation was terminated when the sub\fDIFFUSIVE SHOCK ACCELERATION 119 Fig. 4.\u2014 Time evolution of the injection e\ufb03ciency, \u03be, postshock CR pressure, Pc,2, the gas density \u03c11 (\u03c12) immediately upstream (downstream) of the subshock, and magnetic \ufb01eld strengths, B1 and B2, in di\ufb00erent models: H1a (black solid lines), H1b (red dotted), H1c (blue dashed), W1a (green dot-dashed) in the left column, and H2a (black solid lines), H2b (red dotted), H3a (blue dashed), H4a (green dot-dashed) in the right column. Note H1b and H2b models are shown for comparison, but B1 and B2 for those models are not included in the bottom panels. shock disappears because of very e\ufb03cient DSA. These \ufb01gures demonstrate that the CR spectra is steepened by Alfv\u00b4 enic drift, especially at lower energies, and that the degree of softening is greater for smaller MA,0. In H2d model, in which the downstream drift is included (uw,2 = \u2212vA) in addition to the upstream drift, the CR spectra are steepened drastically. In the volume-integrated electron spectrum, the lowenergy break corresponds to the momentum at which the electronic synchrotron/IC loss time equals the shock age. In the models with stronger magnetic \ufb01eld (e.g., H1a and W1a models), this spectral break occurs at a lower pe,br, and the separate peak around pe,max composed of the upstream population becomes more prominent. 4. SUMMARY Using the kinetic simulations of di\ufb00usive shock acceleration at planar shocks, we have calculated the timedependent evolution of the CR proton and electron spectra for the shock parameters relevant for typical young supernova remnants. 
In order to explore how various wave-particle interactions a\ufb00ect the DSA process, we adopted the following phenomenological models: 1) magnetic \ufb01eld ampli\ufb01cation (MFA) induced by CR streaming instability in the precursor, 2) drift of scattering centers with Alfv\u00b4 en speed in the ampli\ufb01ed magnetic \ufb01eld, 3) particle injection at the subshock via thermal leakage injection, 4) Bohm-like di\ufb00usion coe\ufb03cient, 5) wave dissipation and heating of the gas, and 6) escape of highest energy particles through a free escape \f120 H. KANG Fig. 5.\u2014 Volume integrated distribution function of CR protons for di\ufb00erent models: H1a, W1a, H2a, H1c, H3a, H4a, (black solid lines), H1b, W1b, H2b, H2d, H3b, H4b (red dotted lines). For W1b model, only the curve for t/tn = 0.1 is shown, because the simulation was terminated afterward. The curves for H3a/b are multiplied by a factor of 10 to show them in the same scale as other models. See Table 1 for model parameters. boundary. The MFA model assumes that the ampli\ufb01ed magnetic \ufb01eld is isotropized by a variety of turbulent processes and so the Alfv\u00b4 en speed is determined by the local ampli\ufb01ed magnetic \ufb01eld rather than the background \ufb01eld (Caprioli 2012). This model predicts the magnetic \ufb01eld ampli\ufb01cation factor scales with the upstream Alfv\u00b4 enic Mach number as B1/B0 \u221dMA,0, and also increases with the strength of the shock precursor (see Equation (7)). Moreover, we assume that self-generated MHD waves drift away from the shock with respect to the background \ufb02ow, leading to smaller velocity jumps that particles experience scattering across the shock. The ensuing CR distribution function becomes steeper than that calculated without Alfv\u00b4 enic drift, so the CR injection/acceleration e\ufb03ciencies and the \ufb02ow modi\ufb01cation due to CR feed back are reduced. The expected power-law slope depends on the Alfv\u00b4 enic Mach number as given in Equations (11)-(12). With our MFA model that depends on the precursor modi\ufb01cation, the upstream Alfv\u00b4 enic drift a\ufb00ects lower energy particles more strongly, steepening the low energy end of the spectrum more than the high energy end. Hence, for MA,0 \u223c > 10, the CR spectra still retain the concave curvature and they can be slightly \ufb02atter than E\u22122 at the high energy end. For weaker shocks with Ms = 6.7 and MA,0 = 5.5 (H3a model), on the other hand, the Alfv\u00b4 enic drift e\ufb00ects are more substantial, so the energy spectrum becomes as steep as N(E) \u221dE\u22122.1 \u2212E\u22122.3. We can explain how MFA and Alfv\u00b4 enic drift regulate the DSA as follows. As CR particles stream upstream of the shock, magnetic \ufb01eld is ampli\ufb01ed and Alfv\u00b4 en speed in the local B(x) increases in the precursor. Then scattering centers drift with enhanced vA, the CR spectrum is steepened and the CR acceleration e\ufb03ciency is reduced, which in turn restrict the growth of the precursor (see also Caprioli 2012). So the \ufb02ow modi\ufb01cation due to the CR pressure is only moderate with \u03c12/\u03c10 \u22484.5. As a result, the DSA e\ufb03ciency saturates roughly at Pc,2/\u03c10u2 s \u223c0.1 for 20 \u223c < Ms \u223c < 100. For Ms = 20 shocks with us = 3000 km s\u22121, for example, in the models with Alfv\u00b4 enic drift (H1a and H2a), the CR injection fraction is reduced from \u03be \u223c2 \u00d7 10\u22123 \fDIFFUSIVE SHOCK ACCELERATION 121 Fig. 
6.\u2014 Same as Figure 5 except that the volume integrated distribution function of CR electrons are shown. to \u223c2 \u00d7 10\u22124, while the CR pressure decreases from Pc,2/\u03c10u2 s \u223c0.25 to \u223c0.12, compared to the model without Alfv\u00b4 enic drift (H1b and H2b) (see Figure 4). This study demonstrates that detailed nonlinear treatments of wave-particle interactions govern the CR injection/acceleration e\ufb03ciencies and the spectra of CR protons and electrons. Thus it is crucial to understand in a quantitative way how plasma interactions amplify magnetic \ufb01eld and a\ufb00ect the transportation of waves in the shock precursor through detailed plasma simulations such as PIC and hybrid simulations. Moreover, the time-dependent behaviors of self-ampli\ufb01ed magnetic \ufb01eld and CR injection as well as particle escape will determine the spectra of the highest energy particles accelerated at astrophysical shocks. We will present elsewhere the results from more comprehensive DSA simulations for a wide range of sonic and Alfv\u00b4 en Mach numbers. ACKNOWLEDGMENTS This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012-001065)." + }, + { + "url": "http://arxiv.org/abs/1205.1895v1", + "title": "Diffusive Shock Acceleration Simulations of Radio Relics", + "abstract": "Recent radio observations have identified a class of structures, so-called\nradio relics, in clusters of galaxies. The radio emission from these sources is\ninterpreted as synchrotron radiation from GeV electrons gyrating in\nmicroG-level magnetic fields. Radio relics, located mostly in the outskirts of\nclusters, seem to associate with shock waves, especially those developed during\nmergers. In fact, they seem to be good structures to identify and probe such\nshocks in intracluster media (ICMs), provided we understand the electron\nacceleration and re-acceleration at those shocks. In this paper, we describe\ntime-dependent simulations for diffusive shock acceleration at weak shocks that\nare expected to be found in ICMs. Freshly injected as well as pre-existing\npopulations of cosmic-ray (CR) electrons are considered, and energy losses via\nsynchrotron and inverse Compton are included. We then compare the synchrotron\nflux and spectral distributions estimated from the simulations with those in\ntwo well-observed radio relics in CIZA J2242.8+5301 and ZwCl0008.8+5215.\nConsidering that the CR electron injection is rather inefficient at weak shocks\nwith Mach number M <~ a few, the existence of radio relics could indicate the\npre-existing population of low-energy CR electrons in ICMs. The implication of\nour results on the merger shock scenario of radio relics is discussed.", + "authors": "Hyesung Kang, Dongsu Ryu, T. W. Jones", + "published": "2012-05-09", + "updated": "2012-05-09", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE", + "astro-ph.CO" + ], + "main_content": "INTRODUCTION The presence of energetic nonthermal particles, especially electrons, in clusters of galaxies has been inferred from observations of so-called \u201cradio halos\u201d and \u201cradio relics\u201d (see, e.g., Carilli & Taylor 2002; Govoni & Feretti 2004; Ferrari et al. 2008; Br\u00a8 uggen et al. 2011, for reviews). The radio emission from these sources is interpreted as synchrotron radiation of cosmic-ray (CR) electrons. 
The radio halos center roughly in cluster cores and have low surface brightness with steep radio spectrum and low polarization. Radio relics, on the contrary, are isolated structures, typically located in the cluster outskirts but within virial radii. They often exhibit sharp edges, and most of them show strong polarization. In fact, with occasional pairings found in the opposite side of clusters and elongated morphologies, radio relics are commonly thought to reveal shock waves in intracluster media (ICMs) produced during mergers (e.g., En\u00dflin et al. 1998; Roettiger, Burns & Stone 1999; Miniati et al. 2001). Unfortunately, relics are found mostly too far from cluster cores for their X-ray signatures to be easily detected. So only in a few cases, their association with ICM shocks have been established by X-ray observations (e.g., Finoguenov et al. 2010; Akamatsu & Kawahara 2011). More than 40 relics have been identi\ufb01ed in radio observations so far (Nuza et al. 2012, and references therein). Based on the spatial distribution of shocks seen in cluster formation simulations, it is predicted that coming radio surveys will easily identify hundreds more (e.g., Skillman et al. 2011; Vazza et al. 2012; Nuza et al. 2012). The observed synchrotron radiation is expected to come from CR electrons with Lorentz factors \u03b3e \u2273104, spiraling in \u223c\u00b5G magnetic \ufb01elds. The cooling time scale of such CR electrons due to synchrotron emission and inverse Compton (IC) scattering does not much exceed \u223c108 yrs (see equation (2)). Advection or di\ufb00usion over that time would typically be limited to \u2272100 kpc. So, the electrons have very likely been injected, accelerated or re-accelerated close to where they are seen in emission. Shocks, believed to be associated to observed radio relics, are obvious candidates for the acceleration or re-acceleration of the CR electrons. Suprathermal particles are known to be produced as an inevitable consequence of the formation of collisionless shocks in tenuous plasmas (e.g., Garat\u00b4 e & Spitkovsky 2012). If postshock suprathermal particles have su\ufb03cient rigidity to recross the shock transition, they can be further accelerated to become CRs through so-called Di\ufb00usive Shock Acceleration (DSA) (Bell 1978; Drury 1983; Malkov & Drury 2001). Only a very small fraction of in\ufb02owing plasma particles are \u201cinjected\u201d from the thermal pool into the CR population. Yet, in strong shocks, a su\ufb03cient number of CRs reach high energies, so that they extract a substantial fraction of the dissipated energy, allowing DSA to be e\ufb03cient. Shock waves are indeed common in the intergalactic space (e.g., Miniati et al. 2000; \f\u2013 3 \u2013 Ryu et al. 2003). They are induced by the supersonic \ufb02ow motions produced during the hierarchical formation of the large-scale structure (LSS) in the universe. Those shocks are, in fact, the dominant means to dissipate the gravitational energy which is released during the LSS formation. They broadly re\ufb02ect the dynamics of baryonic matter in the LSS of the universe, and, indirectly, dark matter. Simulations suggest that while very strong shocks form in relatively cooler environments in \ufb01laments and outside cluster virial radii, shocks produced by mergers and \ufb02ow motions in hotter ICMs are relatively weak with Mach number M \u2272a few (Ryu et al. 2003; Pfrommer et al. 2006; Kang et al. 2007; Skillman et al. 2008; Hoeft et al. 2008; Vazza et al. 
2009; Br\u00a8 uggen et al. 2011). At weak shocks, however, DSA should be ine\ufb03cient. This is expected from the fact that the particle energy spectrum associated with DSA is steep when the density compression across a shock is small. Also the relative di\ufb00erence between the postshock thermal and \ufb02ow speeds is greater in weaker shocks. Consequently, the injection from thermal to nonthermal particles should be ine\ufb03cient at weak shocks (e.g., Kang et al. 2007). At shocks with M \u2272 a few, much less than \u223c10\u22123 of protons are thought to be injected into CRs and much less than \u223c1 % of the shock ram pressure be converted into the downstream pressure of CR protons (Kang & Ryu 2010). For reference, recent Fermi observations of \u03b3-ray emission from galaxy clusters, searching for \u03b3-ray by-products of p \u2212p collisions, limit the pressure due to CR protons to less than \u223c10 % of the gas thermal pressure there (Abdo et al. 2010; Donnert et al. 2010). IACT (Imaging Atmospheric Cherenkov Technique) observations of TeV \u03b3-ray suggest even a lower limit of \u22721 \u22122 % in core regions of some clusters (Alaksi\u00b4 c et al. 2012; Arlen et al. 2012). Injection and acceleration of electrons are even more problematic at weak shocks. Relativistic electrons and protons of the same energy are accelerated the same in DSA, since they have the same rigidity. But nonrelativistic electrons of a given energy have substantially smaller rigidities than protons, making them much harder to be injected at shocks from the thermal pool. As a consequence, the number of electrons injected and accelerated to the CR population is likely to be signi\ufb01cantly smaller than that of CR protons, and so is the pressure of CR electrons at weak shocks. Hot ICMs, on the other hand, should have gone \ufb01rst through accretion shocks of high Mach numbers around clusters and \ufb01laments and then through weaker shocks inside those nonlinear structures (Ryu et al. 2003; Kang et al. 2007). Hence, it is expected that ICMs contain some CR populations produced through DSA at the structure formation shocks. In addition, in ICMs, nonthermal particles can be produced via turbulent re-acceleration (e.g., Brunetti & Lazarian 2007, 2011). Moreover, secondary CR electrons are also continuously generated through p \u2212p collisions of CR protons with thermal protons of ICMs \f\u2013 4 \u2013 (e.g., Miniati et al. 2001; Pfrommer & En\u00dflin 2004). If radio relics form in media with such \u201cpre-existing\u201d CRs, the problem of ine\ufb03cient injection at weak shocks can be alleviated. In this paper, we study DSA of CR electrons at shocks expected to be found in ICMs, with and without pre-existing CR electrons. Since the shocks are mostly weak, the CR pressure is likely to be a small fraction of the thermal pressure (see Kang & Ryu 2011). So we apply DSA in the test-particle regime. In the time-asymptotic limit without radiative losses, the test-particle DSA theory predicts a steady-state distribution of power-law for downstream CR electrons, fe,2(p) \u221dp\u2212q with q = 3\u03c3/(\u03c3 \u22121), where \u03c3 is the density compression ratio across a shock, when no pre-existing CR is assumed (Drury 1983). If preexisting CR electrons of a power-law distribution, fe,1 \u221dp\u2212s, are assumed, the distribution of re-accelerated electrons approaches fe,2(p) \u221dp\u2212r with r = min(q, s) at large momenta (Kang & Ryu 2011, and also see equation (9)). 
The power-law distributions of fe,2(p) translate into the synchrotron/IC spectra of j\u03bd \u221d\u03bd\u2212\u03b1 with \u03b1 = (q \u22123)/2 or (r \u22123)/2 (e.g., Zirakashvili & Aharonian 2007; Blasi 2010; Kang 2011). These properties provide essential benchmarks for expected spectral properties We perform \u201ctime-dependent\u201d, DSA simulations of CR electrons for plane-parallel shocks, which include the energy losses due to synchrotron and IC processes. Using the simulation data, we calculate the synchrotron emission from CR electrons, and model the synchrotron \ufb02ux and spectral distributions from spherical shocks. We then compare the resulting distributions to those of two well-observed radio relics in clusters CIZA J2242.8+5301 (van Weeren et al. 2010) and ZwCl0008.8+5215 (van Weeren et al. 2011) in details. The relic in CIZA J2242.8+5301 at redshift z = 0.1921 perhaps demonstrates the best evidence for DSA at merger shocks. It is located at a distance of \u223c1.5 Mpc from the cluster center and spans \u223c2 Mpc in length. The relic shows a spectral index gradient towards the cluster center. The spectral index, measured between 2.3 and 0.61 GHz, steepens from \u22120.6 to \u22122.0 across the relic. The relic is strongly polarized at the 50 \u221260 % level, indicating ordered magnetic \ufb01elds aligned with the relic. In the opposite, southern part of the cluster, an accompanying fainter and smaller relic is found. The relic in ZwCl0008.8+5215 at z = 0.1032 is found at a distance of \u223c0.85 Mpc from the cluster center and has a linear extension of \u223c1.4 Mpc. It also shows the steepening of the spectral index towards the cluster center. The spectral index, measured between 1382 and 241 MHz, changes from \u22121.2 to \u22122.0 across the relic. The polarization fraction is less with \u227225 %. It also has an accompanying relic of a linear extension of \u223c290 kpc in the opposite, western side of the cluster. In Section 2 we describe the numerical method and the models for magnetic \ufb01eld, diffusion, electron injection, and pre-existing CR electron population. We present analytic evaluations for some features in the CR electron energy spectrum and synchrotron emission \f\u2013 5 \u2013 spectrum in Section 3. The results of simulations are presented and compared with observations of the previously mentioned radio relics in Section 4. Summary follows in Section 5. 2. DSA SIMULATIONS OF CR ELECTRONS 2.1. Numerical Method We simulate DSA of CR electrons at gasdynamical shocks in one-dimensional planeparallel geometry. Shocks in ICMs, especially merger shocks, are expected to persist over \u2273109 yrs, a substantial fraction of the cluster lifetime (e.g., Skillman et al. 2011). On the other hand, the time scales over which electrons are accelerated and cool are much shorter (see Eq. [2] below). So we assume that the shock structure remains steady. Assuming that the CR feedback to the \ufb02ow is negligible at weak shocks in the test-particle limit, the background \ufb02ow, u, is given by the usual shock jump condition. 
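Since the jump condition fixes the density compression σ, the test-particle slope q = 3σ/(σ−1) and the radio spectral index α = (q−3)/2 quoted above follow directly from the Mach number. A minimal sketch using the standard Rankine-Hugoniot compression for a γ = 5/3 gas (textbook shock physics; this is not code from the present work):

```python
def compression_ratio(M, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density compression for a gasdynamical shock of Mach number M."""
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

def dsa_slope(M):
    """Test-particle DSA momentum slope, q = 3*sigma/(sigma - 1)."""
    sigma = compression_ratio(M)
    return 3.0 * sigma / (sigma - 1.0)

def radio_index(q):
    """Synchrotron spectral index for f(p) ~ p^-q: alpha = (q - 3)/2."""
    return (q - 3.0) / 2.0

for M in (2.0, 4.5):
    q = dsa_slope(M)
    # With pre-existing electrons f_e,1 ~ p^-s, the re-accelerated slope is min(q, s).
    print(M, round(compression_ratio(M), 2), round(q, 2), round(radio_index(q), 2))
# M = 4.5: sigma ~ 3.5, q ~ 4.2, alpha ~ 0.6;  M = 2: sigma ~ 2.3, q ~ 5.3, alpha ~ 1.2
```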
Then, the time-dependent evolution of the CR electron distribution, fe(t, x, p), which is averaged over pitch angles, can be followed by the following di\ufb00usion convection equation, \u2202ge \u2202t + u\u2202ge \u2202x = 1 3 \u2202u \u2202x \u0012\u2202ge \u2202y \u22124ge \u0013 + \u2202 \u2202x \u0014 \u03ba(x, y)\u2202ge \u2202x \u0015 + p \u2202 \u2202y \u0012 be p2ge \u0013 , (1) where ge = p4fe, y = ln(p/mec), me is the electron mass, c is the speed of light, and \u03ba(x, y) is the spatial di\ufb00usion coe\ufb03cient (Skilling 1975). Here, be(p) = (4e4/9m4 ec6)B2 e\ufb00p2 represents the cooling of CR electrons due synchrotron and IC losses in cgs units, where e is the electron charge. The \u201ce\ufb00ective\u201d magnetic \ufb01eld strength, B2 e\ufb00\u2261B2 + B2 CBR, includes the equivalent strength of the cosmic background radiation with BCBR = 3.24 \u00b5G(1 + z)2 at redshift z. The cooling time scale for electrons is given as trad(\u03b3e) = p be(p) = 9.8 \u00d7 107 yrs \u0012 Be\ufb00 5 \u00b5G \u0013\u22122 \u0010 \u03b3e 104 \u0011\u22121 , (2) where \u03b3e is the Lorentz factor of CR electrons. The equation in (1) is solved using the test-particle version of the CRASH (Cosmic-Ray Amr SHock) code (see Kang et al. 2011, for details). \f\u2013 6 \u2013 2.2. Models for Magnetic Field and Di\ufb00usion Here, shocks are assumed to be gasdynamical for simplicity, that is, magnetic \ufb01elds do not modify the background \ufb02ow of the shock. In ICMs, magnetic \ufb01elds of an inferred strength of order \u00b5G (Carilli & Taylor 2002; Govoni & Feretti 2004) are dynamically unimportant, since their energy density is less than \u223c10 % of the thermal energy density (e.g., Ryu et al. 2008). However, magnetic \ufb01elds, especially in the downstream region, are the key that governs DSA and the synchrotron cooling and emission of CR electrons. Theoretical studies have shown that e\ufb03cient magnetic \ufb01eld ampli\ufb01cation via resonant and non-resonant waveparticle interactions is an integral part of DSA at strong shocks (Lucek & Bell 2000; Bell 2004). In addition, magnetic \ufb01elds can be ampli\ufb01ed by turbulent motions behind shocks (Giacalone & Jokipii 2007; Inoue et al. 2009). Yet, these plasma processes are complex and their roles are not yet entirely certain, especially at weak shocks. So here we adopt a simple model in which the magnetic \ufb01eld strength is ampli\ufb01ed by a constant factor of \u03c7 across the shock, that is, B2 = \u03c7B1. Hereafter, we use the subscripts \u20181\u2019, and \u20182\u2019 to label conditions in the preshock and postshock regions, respectively. For \u03ba, we adopt a Bohm-like di\ufb00usion coe\ufb03cient with weaker non-relativistic momentum dependence, \u03ba(x, p) = \u03ba\u2217\u00b7 \u0012 p mec \u0013 , (3) where \u03ba\u2217 1 = mec3/(3eB1) = 1.7 \u00d7 1019 cm2 s\u22121(B1/1 \u00b5G)\u22121 in the preshock region and \u03ba\u2217 2 = \u03ba\u2217 1/\u03c7 in the postshock region. 2.3. Injection of Electrons As pointed in Introduction, the injection of electrons is expected to be much harder than that of protons in the so-called thermal leakage injection model. Because complex plasma interactions among CRs, waves, and the underlying gas \ufb02ow are not fully understood, it is not yet possible to predict from \ufb01rst principles how particles are injected into the \ufb01rst-order Fermi process (e.g., Malkov & Drury 2001; Garat\u00b4 e & Spitkovsky 2012). 
In addition, postshock thermal electrons, which have gyro-radii smaller than those of thermal protons, need to be pre-accelerated to several times the peak momentum of thermal protons, pp,th, before they can re-cross the shock transition layer. Here, pp,th = p 2mpkBT2, where T2 is the postshock gas temperature and kB is the Boltzmann constant. Recently several authors have suggested preacceleration mechanisms based on plasma interactions with \ufb02uctuating magnetic \ufb01elds that are locally quasi-perpendicular to the shock surface (e.g. Burgess 2006; Amano & Hoshino 2009; Guo & Giacalone 2010; Riquelme & Spitkovsky 2011). But the detailed picture of the \f\u2013 7 \u2013 electron injection is not well constrained by plasma physics. Observationally, the ratio of CR electron number to proton number, Ke/p \u223c0.01, is commonly inferred for strong supernova remnant shocks, since about 1% of the Galactic CR \ufb02ux near a GeV is due to electrons (Reynolds 2008). But this ratio is rather uncertain for weak shocks under consideration. So here we adopt a simple model in which the postshock electrons above a certain injection momentum, pinj = Qinjpp,th, are assumed to be injected to the CR population. Here, Qinj is a parameter that depends on the shock Mach number and turbulent magnetic \ufb01eld amplitude in the thermal leakage injection model (Kang & Ryu 2010). The CR electron number density or, equivalently, the distribution function at pinj at the shock location xs, fe(xs, pinj), will be scaled to match the observed \ufb02ux of radio relics (see Sections 3.1 and 4.2). 2.4. Pre-existing CR Electrons We consider the population of pre-existing CR electrons, along with that of freshly injected electrons at the shock. However, the nature of pre-existing CR electrons in ICMs is not well constrained. If they were generated at previous, external and internal shocks, a spectral slope of s \u223c4.0\u22125.3 is expected for M \u22732, close to the acceleration site. However, since their lifetime in equation (2) is much shorter than that of host clusters, it is unlikely that they are directly responsible for the pre-existing electron population we consider here. Any pre-existing CR electron should be locally produced, possibly either through p \u2212p collisions of CR protons with thermal protons or via turbulent re-acceleration of some populations (possibly including p \u2212p secondary electrons), as noted in Introduction. Petrosian & East (2008) have shown that turbulent injection of CR electrons from the thermal pool in ICMs is unlikely. The slope of protons re-accelerated by turbulence should be close to s \u223c4 (see, e.g., Chandran 2005), but that of electrons is strongly modi\ufb01ed by coolings (Brunetti & Lazarian 2007, 2011). The slope of secondary electrons from p \u2212p collisions would be roughly s \u223c 4/3(sp\u22121) (Mannheim & Schlickeiser 1994), where sp is the slope of CR protons, so typically, s \u223c4 \u22126. In summary, pre-existing CR electrons may contain many di\ufb00erent populations with di\ufb00erent degrees of radiative cooling and may not be represented by a single power-law. For simplicity, here we adopt a power-law form, fe,1(p) = fpre \u00b7 \u0012 p pinj \u0013\u2212s , (4) with slope s, as the model spectrum for pre-existing CR electrons. In modeling of speci\ufb01c radio relics, the value of s will be chosen as s = 2\u03b1obs + 3, where \u03b1obs is the observed mean \f\u2013 8 \u2013 spectral index. 
The amplitude, fpre, is set by the ratio of upstream CR electron pressure to gas pressure, R1 \u2261PCRe,1/Pg,1. Here, R1 is a parameter that will be scaled to match the observed \ufb02uxes of radio relics (see Sections 3.1 and 4.2). 3. ANALYTIC EVALUATIONS We \ufb01rst consider some features in the CR electron energy spectrum and synchrotron emission spectrum for plane-parallel shocks, to provide analytic estimations for the simulation results presented in the next section. 3.1. Basic Features in CR Electron Spectrum In the test-particle regime of DSA, the distribution of freshly injected and accelerated electrons at the \u201cshock location\u201d can be approximated, once it reaches equilibrium, by a power-law spectrum with super-exponential cuto\ufb00, fe,2(p) \u2248finj \u00b7 \u0012 p pinj \u0013\u2212q exp \u0012 \u2212p2 p2 eq \u0013 , (5) where q = 3\u03c3/(\u03c3 \u22121) (Kang 2011). In the case that B2 = \u03c3B1, that is, the jump in the magnetic \ufb01eld strength across the shock is assumed to be same as the density jump, \u03c7 = \u03c3, and \u03ba2 = \u03ba1/\u03c3, the cuto\ufb00momentum, which represents the balance between DSA and the radiative cooling, becomes peq = m2 ec2us p 4e3q/27 B1 B2 e\ufb00,1 + B2 e\ufb00,2 !1/2 . (6) The corresponding Lorentz factor for typical merger shock parameters is then \u03b3e,eq \u22482 \u00d7 109 q\u22121/2 \u0010 us 3000 km s\u22121 \u0011 B1 B2 e\ufb00,1 + B2 e\ufb00,2 !1/2 . (7) Hereafter, the magnetic \ufb01eld strength is given in units of \u00b5G. The acceleration time for electrons to reach peq, so the time for the equilibrium to be achieved, is estimated as teq \u2248(2.4 \u00d7 104 yrs) q1/2B\u22121/2 1 (B2 e\ufb00,1 + B2 e\ufb00,2)\u22121/2 \u0010 us 3000 km s\u22121 \u0011\u22121 . (8) This is much shorter than the typical time scale of merger shocks, \u2273109 yrs. For t \u2273teq, the DSA gains balance the radiative losses and the electron spectrum near the shock location asymptotes to a steady-state (Kang 2011). \f\u2013 9 \u2013 With pre-existing CR electrons given in equation (4), the electrons distribution at the shock location can be written as the sum of the pre-existing/re-accelerated and freshly injected/accelerated populations, fe,2(p) \u2248 \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u0014 q (q\u2212s) \u0012 1 \u2212 \u0010 p pinj \u0011\u2212q+s\u0013 fpre \u0010 p pinj \u0011\u2212s + finj \u0010 p pinj \u0011\u2212q\u0015 exp \u0010 \u2212p2 p2 eq \u0011 , when s \u0338= q \u0014 s ln \u0010 p pinj \u0011 fpre \u0010 p pinj \u0011\u2212s + finj \u0010 p pinj \u0011\u2212q\u0015 exp \u0010 \u2212p2 p2 eq \u0011 , when s = q. (9) (Kang & Ryu 2011). The relative importance of pre-existing to freshly injected populations depends on fpre and finj, as well as on the slopes s and q in our model. For the sake of convenience, hereafter we will use the term \u201cinjected\u201d electrons for those injected at the shock and then accelerated by DSA and the term \u201cre-accelerated\u201d electrons for those accelerated from the pre-existing population. We here de\ufb01ne the CR electron number fraction, \u03bee \u2261nCRe,2/ne,2, as the ratio of CR electron number to thermal electron number in the postshock region. Here nCRe,2 includes CR electrons accelerated from both the pre-existing and freshly injected populations. 
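For orientation, the short sketch below evaluates the equilibrium cutoff and the time needed to reach it, Equations (7) and (8), for merger-shock-like parameters. Here B_CBR = 3.24(1+z)^2 µG follows the definition of B_eff given in Section 2.1, and the chosen inputs (q ≈ 4.2, u_s = 3000 km/s, B_1 = 1 µG, B_2 = 3.5 µG, z ≈ 0.19) are illustrative choices, not unique model values.

```python
import math

def b_cbr(z):
    """Equivalent magnetic field of the cosmic background radiation, 3.24 (1+z)^2 muG."""
    return 3.24 * (1.0 + z) ** 2

def gamma_eq(q, us_kms, B1_muG, B2_muG, z):
    """Equilibrium cutoff Lorentz factor, Eq. (7); field strengths in microgauss."""
    beff_sq = B1_muG**2 + B2_muG**2 + 2.0 * b_cbr(z) ** 2   # B_eff,1^2 + B_eff,2^2
    return 2.0e9 / math.sqrt(q) * (us_kms / 3000.0) * math.sqrt(B1_muG / beff_sq)

def t_eq_yr(q, us_kms, B1_muG, B2_muG, z):
    """Time for the electron spectrum to reach the equilibrium cutoff, Eq. (8), in years."""
    beff_sq = B1_muG**2 + B2_muG**2 + 2.0 * b_cbr(z) ** 2
    return 2.4e4 * math.sqrt(q / (B1_muG * beff_sq)) * (3000.0 / us_kms)

print(f"{gamma_eq(4.2, 3000.0, 1.0, 3.5, 0.19):.1e}")  # ~1e8
print(f"{t_eq_yr(4.2, 3000.0, 1.0, 3.5, 0.19):.1e}")   # a few 1e3 yr, << merger shock lifetimes
```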
Considering that the CR proton number fraction is likely to be \u03bep \u227210\u22124 at weak shocks (Kang & Ryu 2010) and Ke/p \u223c0.01, \u03bee \u223c10\u22126 could be regarded as a canonical value. We note that the resulting radio emission is linearly scaled with both \u03bee and the preshock gas density, n1, in the test-particle regime, so the combined parameter, n1\u03bee, can be treated as a free parameter. We here \ufb01x the preshock gas density, n1 = 10\u22124cm\u22123, as a \ufb01ducial parameter, but vary \u03bee to match the observed \ufb02uxes of radio relics. Another measure is the ratio of postshock CR electron pressure to gas pressure, R2 = PCRe,2/Pg,2, which depends on both \u03bee and the slopes q and s. In modeling of speci\ufb01c radio relics in Section 4.2, we will determine the set of values for \u03bee, R2 and R1, that matches the observed level of radio \ufb02ux. If we ignore for the moment the modest in\ufb02uence of continued DSA downstream of the shock, we can follow the electron population that advects downstream by solving the following equation : dge dt + V \u00b7 \u2202ge \u2202y = 0, (10) where d/dt \u2261\u2202/\u2202t + u\u2202/\u2202x and V = \u2212be(p)/p = \u2212Cey. Here, C = (4e4/9m4 ec6)B2 e\ufb00is a constant. This is basically the equation for downward advetion in the space of y = ln(p/mec) due to radiative cooling. The general solution of the equation is ge(p, t) = G(e\u2212y \u2212Ct) = G \u0012 p 1 \u2212t/trad \u0013 , (11) where trad = 1/Cey is the electron cooling time scale. This provides the approximate distribution of CR electrons at the distance d = u2t downstream from the shock, where u2 is the downstream \ufb02ow speed. \f\u2013 10 \u2013 For instance, if the distribution function of the \u201cinjected\u201d electrons at the shock location (d = 0) is the power-law spectrum, ge(p, 0) = ginj(p/pinj)\u2212q+4, the downstream spectrum can be approximated as ge(p, d) = ginj \u0014 p (1 \u2212d/u2trad)pinj \u0015\u2212q+4 . (12) It should be straightforward to apply the same approximation to the full spectrum given in equation (9). In Figures 1 and 2, we compare the distributions described by equation (11) with those from time-dependent DSA simulations, demonstrating that equation (11) provides reasonable approximations to the solutions of full DSA simulations (see Table 1 for speci\ufb01c model parameters). 3.2. Basic Features in Synchrotron Emission Spectrum Since the synchrotron emission from mono-energetic electrons with \u03b3e has a broad peak around \u03bdpeak \u22480.3(3eB/4\u03c0mec)\u03b32 e, for a given observation frequency, \u03bdobs, the greatest contribution comes from electrons of the Lorentz factor, \u03b3e,peak \u22481.26 \u00d7 104 \u0010 \u03bdobs 1GHz \u00111/2 \u0012 B 5 \u00b5G \u0013\u22121/2 (1 + z)1/2. (13) Using equations (2) and (13), the cooling time of the electrons emitting at \u03bdobs can be estimated approximately as trad \u22488.7 \u00d7 108 yrs B1/2 2 B2 e\ufb00,2 ! \u0010 \u03bdobs 1GHz \u0011\u22121/2 (1 + z)\u22121/2. (14) The cooling length behind the shock, u2trad, then becomes Lrad \u2248890kpc \u0010 u2 103 km s\u22121 \u0011 B1/2 2 B2 e\ufb00,2 ! \u0010 \u03bdobs 1GHz \u0011\u22121/2 (1 + z)\u22121/2. (15) Note that B2 e\ufb00,2/B1/2 2 \u223c15 \u221225 for the model parameters considered here. Again, trad is shorter than the typical time scale of merger shocks, \u2273109 yrs. So Lrad should represent the width of radio emitting region at \u03bdobs behind plane-parallel shocks. 
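The scalings in Equations (13)-(15) can be evaluated directly. As a check, the sketch below uses the ZwCl 0008.8+5215-type numbers adopted later in Section 4.2 (z = 0.103, B_2 = 2.4 µG, u_2 = 1100 km/s at ν_obs = 1.38 GHz) and recovers the cooling length quoted there; the function names and the km/s-to-kpc/yr conversion are ours.

```python
import math

def gamma_peak(nu_ghz, B_muG, z):
    """Lorentz factor contributing most of the emission at nu_obs, Eq. (13)."""
    return 1.26e4 * math.sqrt(nu_ghz) * (B_muG / 5.0) ** -0.5 * math.sqrt(1.0 + z)

def t_rad_yr(nu_ghz, B2_muG, z):
    """Cooling time of the electrons emitting at nu_obs, Eq. (14); IC off the CMB included."""
    beff2_sq = B2_muG**2 + (3.24 * (1.0 + z) ** 2) ** 2
    return 8.7e8 * math.sqrt(B2_muG) / beff2_sq / math.sqrt(nu_ghz * (1.0 + z))

def L_rad_kpc(nu_ghz, B2_muG, u2_kms, z):
    """Cooling length u2 * t_rad behind the shock, Eq. (15)."""
    return u2_kms * 1.023e-9 * t_rad_yr(nu_ghz, B2_muG, z)   # 1 km/s = 1.023e-9 kpc/yr

print(round(gamma_peak(1.38, 2.4, 0.103)))          # ~2e4
print(f"{t_rad_yr(1.38, 2.4, 0.103):.1e}")          # ~5e7 yr
print(round(L_rad_kpc(1.38, 2.4, 1100.0, 0.103)))   # ~57-58 kpc, cf. L_rad = 57 kpc in Sec. 4.2
```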
In radio relics, however, the observed width is constrained by both Lrad and the projection angle of spherical shocks (see Section 4.2). The cuto\ufb00energy in the electron spectrum due to the radiative cooling decreases linearly with the distance from the shock location, that is, \u03b3e,cut \u221dd\u22121, as expected from equation \f\u2013 11 \u2013 (2) and shown in Figure 1. At the farthest downstream point, d = u2t, where t is the shock age, the cuto\ufb00energy becomes \u03b3e,br(t) \u22489.82 \u00d7 102 \u0012 t 109 yrs \u0013\u22121 \u0012Be\ufb00,2 5 \u00b5G \u0013\u22122 . (16) If the electron distribution function at the shock location has a power-law form, ne(xs, \u03b3e) \u221d \u03b3\u2212r e , then the volume-integrated electron spectrum downstream steepens by the power-law index of one, i.e., Ne,2 \u221d\u03b3\u2212(r+1) e for \u03b3e > \u03b3e,br. It is because the width of the spatial distribution of electrons with \u03b3e decreases as \u03b3\u22121 e (Zirakashvili & Aharonian 2007; Kang 2011). As a consequence, the \u201cvolume-integrated\u201d synchrotron spectrum from aged electrons has a spectral break, i.e., an increase of the spectral index \u03b1 by +0.5, at \u03bdbr = 0.3 3 4\u03c0 eB2 mec\u03b32 e,br \u22486.1 \u00d7 106Hz \u0012 t 109 yrs \u0013\u22122 \u0012 B2 5 \u00b5G \u0013 \u0012Be\ufb00,2 5 \u00b5G \u0013\u22124 . (17) So the shock age may be estimated from the break frequency \u03bdbr, if the magnetic \ufb01eld strength is known. 4. RESULTS OF DSA SIMULATIONS 4.1. Plane-Parallel Shocks The model parameters of our simulations for plane-parallel shocks are summarized in Table 1. Here, z is the redshift, cs,1 is the preshock sound speed, M is the shock Mach number, u2 is the postshock \ufb02ow speed, s is the power-law slope of pre-existing CR electrons, and B2 is the postshock magnetic \ufb01eld strength. The model name in the \ufb01rst column includes the values of M, B2, and s; for models without pre-existing CRs, \u201cI\u201d (injection only) is speci\ufb01ed. For instance, M4.5B7I stands for the model with M = 4.5, B2 = 7 \u00b5G, and injected CR electrons only (no pre-existing CRs), while M2B2.3S4.2 stands for the model with M = 2.0, B2 = 2.3 \u00b5G, and s = 4.2. For the preshock magnetic \ufb01eld strength, B1 = 1 \u00b5G is adopted for all models, which is close to the typical quoted value in cluster outskirts (see Br\u00a8 uggen et al. 2011, and references therein). Once B1 < BCBR, the IC cooling dominates, and the exact value of B1 is not important in our models. The model parameters are chosen to match the observed properties of radio relics in clusters CIZA J2242.8+5301 and ZwCl 0008.8+5215 (see the next subsection for details). For example, M = 4.5 or s = 4.2 is chosen to match the observed spectral index, \u03b1 = 0.6, of the relic in CIZA J2242.8+5301, and M = 2 or s = 5.4 is chosen to match \u03b1 = 1.2 of the relic in ZwCl 0008.8+5215. For reference, the shock compression ratio is \u03c3 = 3.5 for M = 4.5 and \u03c3 = 2.3 for M = 2. The values of u2 and \f\u2013 12 \u2013 B2 are chosen to match the observed width of the relics, since they determine the cooling length as shown in equation (15). Figure 1 shows the CR electron distribution at di\ufb00erent locations downstream of the shock, after it has reached the steady state, for M4.5B3.5I, M2B7S4.2, M2B2.3I and M2B2.3S5.4 models. For the comparison of di\ufb00erent models, here the postshock CR electron number fraction is set to be \u03bee = 10\u22126, which sets the vertical amplitude. 
The injection-only models exhibit the power-law distributions with cuto\ufb00s due to the cooling, as discussed in the previous section. In M2B7S4.2 model, the electrons accelerated from the injected population are important only at low energies (\u03b3e \u2272102.5) and they dominate in terms of particle number, because the \u201cinjected\u201d spectrum is much softer than the \u201cre-accelerated\u201d spectrum (i.e. q > s). The electrons accelerated from the pre-existing population, on the other hand, dominate at higher energies including \u03b3e \u223c104 and they are most relevant for the synchrotron emission at \u03bd \u223c1 GHz. The slope of the accelerated spectrum at high energies is similar to that of the pre-existing spectrum, which is consistent with equation (9). On the contrary, in M2B2.3S5.4 model with s \u2248q, the \u201cinjected\u201d electrons are negligible even at low energies. This di\ufb00erence comes about, because with similar numbers of pre-existing CRs, the amplitude fpre is larger in M2B2.3S5.4 (with s = 5.4) than in M2B7S4.2 (with s = 4.2). The numbers of injected electrons should be similar in the two models, because the shock Mach number is the same. Note that the re-accelerated spectrum \ufb02attens by a factor of ln(p), as shown in equation (9), because s \u2248q in this model. The left column of Figure 2 shows the spatial pro\ufb01le of the electron distribution function, ge(\u03b3e, x), at two speci\ufb01c energies (\u03b3e) as a function of the downstream distance for the M4.5B7I, M4.5B3.5I and M2B7S4.2 models. For each model the Lorentz factors are calculated for \u03bdobs = 0.61 GHz and 2.3 GHZ according to equation (13). The upper/lower curves represent ge of the lower/higher values of \u03b3e, respectively. The right column of Figure 2 shows the synchrotron emission, j\u03bd(x), at \u03bdobs = 0.61 GHz (upper curves) and 2.3 GHZ (lower curves). The solid lines show ge and j\u03bd calculated from the DSA simulation results, while the dashed lines show the approximate solutions calculated with equation (11). The \ufb01gure demonstrates that the lower energy elections advect further from the shock before cooling than higher energy electrons, so the lower-frequency radio emission has larger widths than the higher-frequency one. According to equation (15), the cooling lengths of the electrons emitting at 0.61 and 2.3 GHz are Lrad \u224840 and 20 kpc, respectively, in the three models. \f\u2013 13 \u2013 4.2. Modeling of Radio Relics As noted above, it should be su\ufb03cient to employ the plane shock approximation to compute the distributions of CR electrons and their emissivities as a function of the distance from the shock surface. In observed radio relics, however, radio emitting shells are likely to be curved with \ufb01nite curvatures along the observer\u2019s line of sight (LoS) as well as in the plane of the sky. So in modeling of radio relics, the curved shell needs to be projected onto the plane of the sky. In that case LoS\u2019s from the observer will transect a range of shock displacements, and this needs to be taken into account when computing the observed brightness distribution of model relics. Following the approach of van Weeren et al. (2010, 2011), we consider a piece of a spherical shell with outer radius Rs, subtended along the LoS from +\u03c8 to \u2212\u03c8 so for the total angle of 2\u03c8. 
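To illustrate the projection described here, the following sketch integrates an emissivity profile along lines of sight through a spherical shell of radius R_s seen over ±ψ. The exponential emissivity decline over a cooling length is only a stand-in for the j_ν(d) profiles computed from the DSA simulations; R_s = 1.5 Mpc and ψ = 10° are the CIZA J2242.8+5301 values used below, and the function names and grids are illustrative.

```python
import numpy as np

def intensity_profile(j_of_d, Rs_kpc=1500.0, psi_deg=10.0, r_max_kpc=300.0,
                      n_r=60, n_z=4000):
    """I_nu(r): line-of-sight integral of an emissivity j(d), where d is the depth
    behind a spherical shock of radius Rs and the shell is seen over +/- psi."""
    z = np.linspace(-Rs_kpc * np.sin(np.radians(psi_deg)),
                    +Rs_kpc * np.sin(np.radians(psi_deg)), n_z)
    r_proj = np.linspace(0.0, r_max_kpc, n_r)      # distance behind the projected edge
    I = np.zeros(n_r)
    for i, r in enumerate(r_proj):
        b = Rs_kpc - r                             # projected distance from the center
        depth = Rs_kpc - np.sqrt(b**2 + z**2)      # displacement behind the shock surface
        j = np.where(depth >= 0.0, j_of_d(np.maximum(depth, 0.0)), 0.0)
        I[i] = np.trapz(j, z)
    return r_proj, I

# Toy emissivity: exponential decline over a ~50 kpc cooling length (assumption).
r, I = intensity_profile(lambda d: np.exp(-d / 50.0))
# The observed flux then follows from the Gaussian-beam scaling S_nu ~ I_nu*pi*theta^2*(1+z)^-3.
print("intensity peaks ~%.0f kpc behind the projected edge" % r[np.argmax(I)])
```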
Then, Rs and the projection angle \u03c8 are the parameters that \ufb01x the shape of the curved shell to be projected onto the plane of the sky. The synchrotron emissivity, j\u03bd (erg cm\u22123 s\u22121 Hz\u22121 str\u22121), at each point behind the curved shock is approximated as that downstream of plane-parallel shocks discussed in the previous subsection. Since we do not consider the polarization of synchrotron emissions here, so, for simplicity, the magnetic \ufb01eld lines are assumed to lie in the plane of the sky; that is, the angle between the magnetic \ufb01eld vectors and the LoS is \ufb01xed at 90\u25e6. The synchrotron intensity is calculated by integrating the emissivity along the LoS, I\u03bd(r) = R j\u03bddl (erg cm\u22122 s\u22121 Hz\u22121 str\u22121), where r is the distance behind the projected shock edge in the plane of the sky. The bound of the path length, l, for given r is determined by Rs and \u03c8. Then, the observed \ufb02ux is estimated, assuming a Gaussian beam with e-width, \u03b8, as S\u03bd(r) \u2248I\u03bd(r)\u03c0\u03b82(1 + z)\u22123, (18) where \u03bd = \u03bdobs(1 + z). Figure 3 shows the pro\ufb01les of the synchrotron \ufb02ux, S\u03bd(r), at \u03bdobs = 0.61 GHz (left column) and the spectral index, \u03b1 = \u2212d ln S\u03bd/d ln \u03bd, estimated with the \ufb02uxes at \u03bdobs = 0.61 and 1.4 GHz (right column) for the M4.5B7I, M4.5B3.5I and M2B7S4.2 models, which are designed to reproduce the radio relic in CIZA J2242.8+5301. The \ufb02ux is calculated with the beam of \u03b82 = \u03b81\u03b82/(4 ln 2), \u03b81\u03b82 = 16.7\u201d \u00d7 12.7\u201d. They are compared with the \u201cdeconvolved\u201d pro\ufb01le of observed \ufb02ux taken form Figure 4 of van Weeren et al. (2010) (\ufb01lled circles). Since the observed \ufb02ux is given in an arbitrary unit in their paper, we scale it so that the peak value of S\u03bd(r) becomes 5 mJy, which is close to the observed value (private communication with R. J. van Weeren). The radius of the spherical shock is set to be Rs = 1.5 Mpc and two values of projection angle, \u03c8 = 10\u25e6and 20\u25e6, are considered. The observed pro\ufb01le is well \ufb01tted by the three models, if \u03c8 = 10\u25e6is taken. In M4.5B7I and M4.5B3.5I, di\ufb00erent values of u2 are assumed to match the observed width (see Table 1). The observed value \f\u2013 14 \u2013 of the spectral index at r = 0, \u03b1 = 0.6, is reproduced either in the injection-only models with M = 4.5 or in the model with pre-existing CRs with the slope s = 4.2, as noted in the previous subsection. For the \ufb01ducial preshock particle density of n1 = 10\u22124 cm\u22123, the values of the postshock electron CR number fraction required to match the peak \ufb02ux of 5 mJy are \u03bee = 7.6 \u00d7 10\u22128, 2.3 \u00d7 10\u22127, and 2.6 \u00d7 10\u22127 for M4.5B7I, M4.5B3.5I, and M2B7S4.2, respectively. In M2B7S4.2 model the ratio of the pressure of pre-existing CR electrons to gas pressure far upstream is R1 \u223c6.7\u00d710\u22125. Those values of \u03bee and R1 are modest enough that they probably are not in con\ufb02ict with the values expected in clusters. Our results demonstrate that if the pre-existing electron population is considered, the radio relic in CIZA J2242.8+5301 can be reproduced even with weak shocks of M \u223c2 or so. We note that R1 is a model parameter that sets the amplitude, fpre, of the upstream population, while the fraction \u03bee is the outcome of DSA of both pre-existing and injected electrons. 
As noted in Figure 1, in M2B7S4.2 model the fraction \u03bee is determined mostly by the \u201cinjected\u201d population at low energies, while the radio emission is regulated mostly by the \u201cre-accelerated\u201d population at \u03b3e \u223c104. So we should obtain the similar radio \ufb02ux even with a much lower injection rate for this model, and the resulting \u03bee could be much smaller than the current value of 2.6 \u00d7 10\u22127. We point that the radio relic in CIZA J2242.8+5301 is subtended in the plane of the sky over the angle of \u223c60 \u221270\u25e6. This means that the surface of the shock responsible for the relic should be highly elongated with the aspect ratio of \u223c(60 \u221270\u25e6)/(2\u03c8) = \u223c3 \u22123.5 when \u03c8 = 10\u25e6is adopted. It would not be trivial, if not impossible, for such structure to be induced in merger events in clusters. Or the relic may actually consist of a number of substructures, which is hinted by the variations in the observed \ufb02ux pro\ufb01le along the arc in the plane of the sky. The left column of Figure 4 shows the synchrotron \ufb02ux pro\ufb01les at \u03bdobs = 1.38 GHz, for M2B2.3I and M2B2.3S5.4 models, which are designed to reproduce the radio relic in ZwCl008.8+5215. The \ufb02ux is calculated with \u03b82 = \u03b81\u03b82/(4 ln 2), \u03b81\u03b82 = 23.5\u201d \u00d7 17.0\u201d. We note that this beam size is \ufb01ne enough that the convolved pro\ufb01les with a Gaussian beam (dotted and long-dashed lines) are very similar to the unconvolved pro\ufb01les (solid lines). The pro\ufb01les are compared with the observed pro\ufb01le given in Figure 16 of van Weeren et al. (2011) (\ufb01lled circles). Again the observed \ufb02ux is given in an arbitrary unit, so it is scaled at 5 mJy at the peak. The right column shows the pro\ufb01les of \u03b1, estimated with \ufb02uxes at \u03bdobs = 0.24 and 1.38 GHz, along with the observed \u03b1 also taken from Figure 16 of van Weeren et al. (2011) (\ufb01lled circles). The shock radius is assumed to be Rs = 1.0 Mpc and two values of projection angle, \u03c8 = 25\u25e6and 30\u25e6, are considered. The two models shown are the same except the existence of pre-existing CR electrons in M2B2.3S5.4 model. In M2B2.3S5.4, the \f\u2013 15 \u2013 \u201cre-accelerated\u201d population dominates over the \u201cinjected\u201d population. Yet, the two models give similar pro\ufb01les of S\u03bd and \u03b1. We see that in our models \u03c8 = 30\u25e6gives good \ufb01ts to the observed pro\ufb01les of S\u03bd and \u03b1, while van Weeren et al. (2011) argued that \u03c8 = 22\u25e6seems to give a reasonable \ufb01t. Note that they adopted u2 = 750 km s\u22121 and B2 = 2 \u00b5G, giving Lrad = 40 kpc, while in our models u2 = 1100 km s\u22121 and B2 = 2.4 \u00b5G, giving Lrad = 57 kpc. For the assumed value of n1 = 10\u22124 cm\u22123, the postshock CR electron number fraction required to match the peak \ufb02ux of 5 mJy is \u03bee = 2.1 \u00d7 10\u22124 for M2B2.3I, which is six times larger than \u03bee = 3.3 \u00d7 10\u22125 for M2B2.3S5.4. This is because the spectral shapes of CR electron spectrum below \u03b3e \u2272102.5 are di\ufb00erent in the two models (see Fig. 1 and discussion in the previous subsection). The number fraction of CR electrons for M2B2.3I seems too large, considering that the postshock proton CR number fraction is likely to be \u03bep \u227210\u22124 for M = 2 (Kang & Ryu 2010). 
In M2B2.3S5.4, on the other hand, the ratio of upstream CR electrons pressure to gas pressure is R1 \u223c1.2 \u00d7 10\u22123. This seems to be marginal, that is, not inconsistent with expected values, considering that the ratio of CR proton pressure to gas pressure is \u227210\u22122 \u221210\u22121 in ICMs as noted in Introduction. But we should point that the values of \u03bee and R1 in these two models are dominated by low-energy CR electrons with \u03b3e \u2272103 (see Figure 1), which do not contribute much to the synchrotron radiation observed in radio relics. So if the \u201cinjected\u201d population in M2B2.3I consists of electrons with \u03b3e \u2273103 only, the required values of \u03bee could be reduced by a factor of \u223c10, easing down the constraint. Since we do not understand fully the plasma interactions involved in the pre-acceleration and injection of electrons at the shock, the detailed spectral shape of those low energy electrons are very uncertain. The top panels of Figure 5 show the pro\ufb01les of the intensity, I\u03bd(r) = R j\u03bddl, at 6 cm (5 GHz), 20 cm (1.5 GHz), and 91 cm (0.33 GHz) in arbitrary units as a function of r for the M4.5B3.5I, M2B7S4.2 and M2B2.3I models. Here, the projection angle is set to be \u03c8 = 30\u25e6. Since the emissivity j\u03bd decreases downstream of the shock, while the path length increases with r, the pro\ufb01les of I\u03bd exhibit non-monotonic behaviors. For example, the pro\ufb01les at 6 cm show a slightly concave turnover before it decreases abruptly at r \u2248200 kpc. The middle panels show the spectral indices, \u03b16 20 (solid lines) calculated between 6 and 20 cm and \u03b120 91 (dashed lines) calculated between 20 and 91 cm, when the projection angle is set to be \u03c8 = 10\u25e6, 20\u25e6and 30\u25e6. The general trend is the increase of \u03b16 20 and \u03b120 91 as we move away from the projected shock edge at r = 0, re\ufb02ecting the e\ufb00ects of radiative cooling. Also \u03b16 20 > \u03b120 91, that is, the slope is steeper at higher frequencies. The bottom panels show the color-color diagram of \u03b120 91 versus \u03b16 20. The rightmost point (\u03b16 20 = \u03b120 91 = \u03b1s) corresponds to the projected shock edge. Away from the edge, the loci move towards the lower left direction. In both middle and bottom panels, the spectral slopes also show a slightly \f\u2013 16 \u2013 concave turnover for large projection angles of \u03c8 = 20\u25e6and 30\u25e6. Recently, van Weeren et al. (2012) reported the color-color diagram for the so-called \u201cToothbrush\u201d relic in cluster 1RXS J0603.3+4214, which shows a spectral behavior that is consistent with the cooled electron population downstream of the shock. 5. SUMMARY In an e\ufb00ort to re\ufb01ne our understandings of radio relics in clusters of galaxies, we have performed time-dependent, DSA simulations of CR electrons and calculated the synchrotron emission from CR electrons for plane-parallel shocks. The energy losses due to synchrotron and IC have been explicitly included. Weak shocks expected to be found in ICMs have been considered. Both the cases with and without pre-existing CR electrons have been considered. The relevant physics of DSA and cooling is well represented by plane-parallel shocks, since the time scales over which electrons are accelerated and cool are much shorter than the lifetime of merger shocks in clusters and the radio emission is con\ufb01ned to a region of small width behind the shock front. 
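For the color-color diagram discussed above, the indices between pairs of wavelengths follow from the same two-point estimate; a minimal helper is sketched below, with the single power law at the projected shock edge (where alpha_6_20 = alpha_20_91 = alpha_s) used as a check. The value alpha_s = 0.6 is illustrative.

```python
import numpy as np

C_CM_GHZ = 29.9792458            # speed of light in cm*GHz, so nu[GHz] = C / lambda[cm]

def alpha_between(I_a, I_b, lam_a_cm, lam_b_cm):
    """Spectral index between two wavelengths from intensities I_a and I_b."""
    nu_a, nu_b = C_CM_GHZ / lam_a_cm, C_CM_GHZ / lam_b_cm
    return -np.log(I_a / I_b) / np.log(nu_a / nu_b)

# At the projected shock edge the spectrum is a single power law with slope
# alpha_s, so both colors coincide there (the rightmost point of the diagram).
alpha_s = 0.6
I6, I20, I91 = [(C_CM_GHZ / lam)**(-alpha_s) for lam in (6.0, 20.0, 91.0)]
print(alpha_between(I6, I20, 6.0, 20.0),
      alpha_between(I20, I91, 20.0, 91.0))       # both -> 0.6 at the edge

# Behind the edge, cooling steepens the high-frequency end first, so the loci
# move toward the lower left with alpha_6_20 > alpha_20_91.
```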
We then have modeled the synchrotron \ufb02ux and spectral distributions from spherical shocks by approximating them with plane-parallel shocks and projecting to the plane of the sky for the angle from +\u03c8 to \u2212\u03c8 along the LoS. For the speci\ufb01c models which are designed to reproduce radio relics in clusters CIZA J2242.8+5301 and ZwCl0008.8+5215, we have compared the resulting distributions with observed ones in details. The main results are summarized as follows: 1) The CR electron spectrum becomes steady, after the DSA gains balance the radiative losses. The spectrum at the shock location is well approximated by a distribution with superexponential cuto\ufb00at peq, fe,2(p) \u221dexp(\u2212p2/p2 eq). The full expressions of fe,2(p) and peq are given in equations (9) and (6). 2) The spectrum of the downstream CR electrons that have cooled for the advection time, t = d/u2, can be approximated with ge(p, d) = G [p/(1 \u2212d/u2trad)] at the distance d from the shock location. Here, G is the functional form of the spectrum at the shock location of d = 0. The synchrotron emission from this analytic formula provides a reasonable approximation to that calculated using DSA simulation results (see Figure 2). 3) Both the models of M = 4.5 shock without pre-existing CR electrons and M = 2 shock with pre-existing CR electrons of fe,1 \u221dp\u22124.2 may explain the observed properties of the radio relic in CIZA J2242.8+5301. The postshock electron CR number fraction of \u03bee \u223c10\u22127 in the injection-only model or the ratio of upstream CR electrons pressure to gas \f\u2013 17 \u2013 pressure of R1 \u223cseveral \u00d7 10\u22125 in the model with pre-existing CRs are required to explain the observed radio \ufb02ux of several mJy. Those values of \u03bee and R1 are modest enough to be accommodated in typical clusters. But the surface of the shock responsible for the relic should be highly elongated with the aspect ratio of \u223c3 \u22123.5. It would not be trivial for such structure to be induced in merger events in clusters. 4) The radio relic in ZwCl0008.8+5215 may be explained by the models of M = 2 shock with or without pre-existing CR electrons. However, in the injection-only model, \u03bee \u227310\u22124, required to explain the observed radio \ufb02ux of several mJy, is probably too large for the weak shock of M = 2. On the other hand, in the model with pre-existing CRs, R1 \u223c10\u22123, required to explain the observed \ufb02ux, seems to be marginal, that is, not inconsistent with expected values in clusters. In the model, then, the origin of such pre-existing electron population is an important topic, but beyond the scope of the present paper. 5) The color-color diagram of \u03b120 91 vs \u03b16 20 has been presented behind the projected shock edge. It includes an important information about the evolutionary properties of the postshock electrons. Due to the e\ufb00ect of the projection with limited subtended angle along the LoS for spherical shocks, the diagram behaves di\ufb00erently for di\ufb00erent projection angles. So it may provide an independent way to estimate the projection angle, which is a key parameter in modeling of radio relics. HK was supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (20110002433). DR was supported by the National Research Foundation of Korea through grant 2007-0093860. 
TWJ was supported by NASA grant NNX09AH78G, NSF grant AST-0908668 and by the Minnesota Supercomputing Institute for Advanced Computational Research. We thanks R. J. van Weeren and L. Rudnick for discussions." + }, + { + "url": "http://arxiv.org/abs/1102.3109v1", + "title": "Energy Spectrum Of Nonthermal Electrons Accelerated At A Plane Shock", + "abstract": "We calculate the energy spectra of cosmic ray (CR) protons and electrons at a\nplane shock with quasi-parallel magnetic fields, using time-dependent,\ndiffusive shock acceleration (DSA) simulations, including energy losses via\nsynchrotron emission and Inverse Compton (IC) scattering. A thermal leakage\ninjection model and a Bohm type diffusion coefficient are adopted. The electron\nspectrum at the shock becomes steady after the DSA energy gains balance the\nsynchrotron/IC losses, and it cuts off at the equilibrium momentum p_{eq}. In\nthe postshock region the cutoff momentum of the electron spectrum decreases\nwith the distance from the shock due to the energy losses and the thickness of\nthe spatial distribution of electrons scales as p^{-1}. Thus the slope of the\ndownstream integrated spectrum steepens by one power of p for p_{br} 1 due to the balance between DSA and cooling. On the contrary, high energy protons di\ufb00use much further both downstream and upstream as pmax \u221dt increases with time. The top panels of Figure 4 show the evolution of gp(p) and ge(p) at the shock (right panel) and at a upstream position (middle panel) at t = 3.7 \u00d7 102, 1.1 \u00d7 103, 1.8 \u00d7 103 yrs. Because t \u226bteq = 71.7 yrs, so the electron spectrum has already reached the steadystate, as can be seen in the \ufb01gure. In the right panel we also plot the downstream integrated electron spectrum, Ge,2 = R 0 \u2212\u221ege(p)dx, the upstream integrated electron spectrum, Ge,1 = R +\u221e 0 ge(p)dx. As discussed right after equation (16), Ge,2 (dashed lines) becomes a broken power-law which steepens from p\u2212q to p\u2212(q+1) above p > pbr(t) with an exponential cuto\ufb00at the higher momentum peq \u22482.4\u00d7104. We also see that the brake momentum decrease with time as pbr \u221dt\u22121. Note that the upstream integrated spectrum Ge,1 (dotted lines) has reached the steady-state, while, for p < pbr, the amplitude of the downstream spectrum Ge,2 increase linearly with time as the \ufb02ow advects downstream. Such a trend was seen in Figure 3 as well. Total volume integrated spectrum (solid lines) shows a small bump near the cuto\ufb00momentum due to the upstream contribution, which in turn will determine the exact shape of the cuto\ufb00of X-ray synchrotron emission. 4.2 CR Modi\ufb01ed Case With the injection parameter \u01ebB = 0.25, the CR injection and acceleration is e\ufb03cient enough to modify signi\ufb01cantly the shock structure. The CR injection \f8 KANG Fig. 5.\u2014 Proton and electron distribution functions, gp(x, p) and ge(x, p), in the phase-space at t/to = 0.4, 1, 3 for the CR modi\ufb01ed case shown in the bottom panels of Fig. 4. At each contour level, the value of g increases by a factor of 10. Note that the spatial span of the proton distribution shown here is [-4,+4], while that of the electron distribution is [-2,+2]. fraction becomes \u03be \u22485 \u00d7 10\u22124, and the postshock CR pressure is Pcr,2/\u03c10u2 s \u22480.29. So the postshock gas density, \u03c12/\u03c10 \u22485.6, is larger than \u03c3 = 4 for the gasdynamic shock. 
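Before the CR-modified case is examined further, a minimal sketch of the test-particle behavior just described: the volume-integrated electron distribution is modeled as a power law p^(-q) that steepens to p^(-(q+1)) above the break momentum, with a cutoff near p_eq ~ 2.4e4. The values of q and p_br and the exact cutoff shape are illustrative stand-ins for the simulated spectrum.

```python
import numpy as np

def F_e2(p, q=4.0, p_br=1.0e3, p_eq=2.4e4):
    """Toy downstream-integrated electron distribution (p in units of m_p c):
    ~ p^-q below the break, steepening to p^-(q+1) for p_br < p < p_eq, with a
    cutoff near the equilibrium momentum p_eq.  The break momentum decreases
    with shock age as p_br ~ 1/t.  q, p_br and the cutoff shape are illustrative."""
    low = p**(-q)
    high = p_br * p**(-(q + 1.0))                # matched at p = p_br
    return np.where(p < p_br, low, high) * np.exp(-(p / p_eq)**2)

# Local slope -dlnF/dlnp shows the steepening by one power of p across p_br.
lnp = np.log(np.logspace(0.0, 5.0, 501))
slope = -np.gradient(np.log(F_e2(np.exp(lnp))), lnp)
print(slope[::100])                              # ~q, ..., ~q+1, then the cutoff
```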
In such CR modi\ufb01ed shocks, the pressure from CRs di\ufb00using upstream compresses and decelerates the gas smoothly before it enters the subshock, creating a shock precursor (Kang & Jones 2007). With the assumed momentum-dependent di\ufb00usion, \u03ba(p), the particles of di\ufb00erent momenta, p, experience di\ufb00erent compressions, depending on their di\ufb00usion length, ld(p) = \u03ba(p)/us. The particles just above pinj sample mostly the compression across the subshock (\u03c3s = \u03c11/\u03c10), while those near pmax experience the total compression across the entire shock structure (\u03c3t = \u03c12/\u03c10). This leads to the particle distribution function that behaves as f(p) \u221dp\u22123\u03c3s/(\u03c3s\u22121) for p \u223cpinj, but \ufb02attens gradually to f(p) \u221dp\u22123\u03c3t/(\u03c3t\u22121) toward p \u223cpmax (Kang et al. 2009). The bottom panels of Figure 4 show the evolution of gp(p), ge(p) at the shock position (left panel) and at a upstream location (middle panel) and the volume integrated Ge(p) (right panel) as discussed before. The Alfv\u00b4 enic drift with vA = 0.44cs (MA = 45) are considered. Because of the development of a smooth precursor and the weaker subshock, both gp and ge at the shock are softer than the test-particle power-law spectra at lower momenta p/(mpc) < 102, while they are harder at higher momenta with a cuto\ufb00at pmax (proton spectrum) or peq (electron spectrum). Thus the CR spectrum exhibits the well-known concave curvature between the lowest and the highest momenta. Such concavity is re\ufb02ected in the volume integrated spectrum as well, so Ge is no longer a simple broken power-law as in the test-particle case. With the greater velocity jump (\u03c3t = 5.6), the acceleration is more e\ufb03cient and so there are more highest energy particles in the upstream region, compared to the test-particle case. As a result, the upstream integrated spectrum, Ge,1 (dotted lines) has a more pronounced peak at peq, compared to the test-particle case. This introduces an additional curvature in the total Ge spectrum. In fact, Ge,1 dominates over Ge,2 near peq, so the upstream contribution should determine the spectral shape of X-ray synchrotron emission. Thus the spectral slope in radio and the detail shape in X-ray of the observed synchrotron \ufb02ux can provide a measure of \fELECTRON SPECTRUM AT PLANE SHOCKS 9 nonlinear DSA feedback. Finally, Figure 5 shows the phase-space distribution of gp(x, p) and ge(x, p) for the CR modi\ufb01ed model with high injection rate. Because the postshock magnetic \ufb01eld strength, B2 = \u03c3tB0 = 5.6B0, is stronger, electrons cool down to lower energies, compared to the test-particle case shown in Figure 3. Enhanced cooling also reduces the thickness of the electron spatial distribution downstream of the shock. Again, we can see that at the highest energies of p/mpc > 104, the upstream electron components is more important than the downstream component. 5. SUMMARY Using the kinetic simulations of di\ufb00usive shock acceleration at a plane shock, we calculate the timedependent evolution of the CR proton and electron spectra, including electronic synchrotron/IC energy losses. Both protons and electrons are injected at the shock via thermal leakage injection and accelerated by DSA, while electrons are treated as test-particles. We adopt a momentum-dependent, Bohm-type di\ufb00usion coe\ufb03cient and assume that the magnetic \ufb01eld strength scales with the gas density. 
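The bracketing of the concave spectrum by the two compressions quoted above can be checked with the standard test-particle slope formula; in the sketch below the subshock value is an assumed illustration, while the total compression of ~5.6 is the value quoted for this model.

```python
def dsa_slope(sigma):
    """Test-particle DSA slope of f(p) ~ p^-slope for a velocity jump sigma."""
    return 3.0 * sigma / (sigma - 1.0)

sigma_sub, sigma_tot = 3.2, 5.6      # illustrative subshock / quoted total compression
q_low = dsa_slope(sigma_sub)         # felt by particles just above p_inj
q_high = dsa_slope(sigma_tot)        # felt by particles near p_max
print(q_low, q_high)                 # ~4.36 (softer) vs ~3.65 (harder): the spectrum
                                     # flattens toward p_max, i.e. the concave curvature
```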
The proton spectrum at the shock, gp(xs, p), and the volume-integrated proton spectrum, Gp(p) extends to pmax in equation (9), which increases linearly with time. On the other hand, the electron spectrum at the shock, ge(xs, p), approaches to the time-asymptotic spectrum for the shock age t > teq in equation (16) . In that regime, our time-dependent results with a Bohmtype di\ufb00usion are qualitatively consistent with the analytic solutions for a stead-state plane shock, which were previously presented by several authors such as Heavens & Meisenheimer (1987) with momentumindependent \u03ba and Zirakashvili & Aharonian (2007) with momentum-dependent \u03ba(p). So we will re-iterate some of the major \ufb01ndings discussed by those authors and add new insights obtained from our nonlinear DSA simulations. 1) First of all, we re-derive two characteristic momenta: the cuto\ufb00momentum, peq in equation (15) (for the Bohm-type di\ufb00usion coe\ufb03cient) and the break momentum, pbr in equation (21). Note that peq is a timeasymptotic quantity that is achieved when the DSA energy gain balances the synchrotron/IC energy losses, while pbr is a time-dependent quantity that is determined by \u2019aging\u2019 of electrons due to synchrotron/IC cooling downstream of the shock. 2) The time-asymptotic electron distribution function at the shock, fe(xs, p), has a Gaussian cuto\ufb00as exp(\u2212p2/p2 eq), which agrees well with the analytic form suggested by Zirakashvili & Aharonian (2007). 3) Behind the shock synchrotron/IC cooling dominates over DSA, so the electron spectrum, fe(x, p), cuts o\ufb00at progressively lower pcut(d) \u2248(u2/DB2 e,2)d\u22121, which decrease with the distance, d = xs \u2212x, from the shock and is smaller than peq. This cuto\ufb00momentum is determined by the cooling rate DB2 e,2, independent of DSA. 3) The electron cooling can be represented by the advection of the distribution function ge(p) = fe(p)p4 in y = ln(p) space with the advection speed, V = \u2212DB2 ep (see Eq. [3]). This causes the electron spectrum, fe(x, p), cuts o\ufb00more sharply as the distance downstream from the shock increases. 4) Because the synchrotron/IC cooling time decreases with momentum as trad \u221dp\u22121, thickness of the electron distribution is inversely proportional to the momentum, i.e., \u2206x(p) = u2 \u00b7 trad \u221dp\u22121. Then the electron spectrum integrated over to the downstream region steepens as Fe,2(p) \u221dp\u2212(q+1) for pbr(t) < p < peq, when the spectrum at the shock is fe(xs, p) \u221dp\u2212q. The break momentum decreases with the shock age as pbr \u221dt\u22121 (see Eq. [21]). 5) Only highest energy electrons di\ufb00use upstream to the distance of d \u223c\u03ba(peq)/us, so the upstream integrated spectrum has a much harder spectrum than the downstream integrated spectrum and it peaks at peq. 6) For a CR modi\ufb01ed shock, both proton and electron spectra exhibit the well-known concave curvatures. Thus the volume integrated spectrum, Fe(p), cannot be represented by the canonical broken powerlaw spectrum. In this regime, the radio synchrotron index, \u03b1, could be steeper than 0.5 even for a high sonic Mach number. Also in the case of small Alfv\u00b4 enic Mach number (i.e., large B0 and small \u03c10), the spectral slope could be even steeper due to the Alfv\u00b4 enic drift e\ufb00ect. Moreover, detail analysis of the X-ray synchrotron emission near the cuto\ufb00frequency may provide some information about the e\ufb00ect of nonlinear DSA at shocks. 
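Summary items 2)-4) can be condensed into a few lines of code: the downstream spectrum at distance d is the shock spectrum evaluated at the "uncooled" momentum p/(1 - d/(u2 t_rad(p))), with t_rad proportional to 1/p. The sketch below uses normalized units (u2 t_rad(p = m_p c) = 1) and illustrative values of q and p_eq; it is a toy rendering of the analytic formula, not the simulation output.

```python
import numpy as np

def g_shock(p, q=4.5, p_eq=2.4e4):
    """g = f p^4 at the shock: power law with a super-exponential cutoff."""
    return p**(4.0 - q) * np.exp(-(p / p_eq)**2)

def g_downstream(p, d, q=4.5, p_eq=2.4e4):
    """g_e(p, d) = G[p / (1 - d/(u2 t_rad(p)))] with u2*t_rad(p=1) = 1 in code
    units, so that t_rad ~ 1/p and the argument becomes p / (1 - d*p)."""
    x = 1.0 - d * p
    p0 = np.where(x > 0.0, p / np.maximum(x, 1.0e-30), 1.0)
    return np.where(x > 0.0, g_shock(p0, q, p_eq), 0.0)

# The cutoff momentum at distance d is p_cut(d) ~ 1/d in these units, so the
# thickness of the layer containing electrons of momentum p scales as 1/p.
for d in (1.0e-5, 1.0e-4, 1.0e-3):
    p = np.logspace(1, 5, 5)
    print(d, g_downstream(p, d))
```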
Spectral characteristics of the synchrotron emission from a CR modi\ufb01ed shock will be presented elsewhere. ACKNOWLEDGMENTS This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0016425)." + }, + { + "url": "http://arxiv.org/abs/1102.2561v1", + "title": "Re-acceleration of Nonthermal Particles at Weak Cosmological Shock Waves", + "abstract": "We examine diffusive shock acceleration (DSA) of the pre-exisiting as well as\nfreshly injected populations of nonthermal, cosmic-ray (CR) particles at weak\ncosmological shocks. Assuming simple models for thermal leakage injection and\nAlfv\\'enic drift, we derive analytic, time-dependent solutions for the two\npopulations of CRs accelerated in the test-particle regime. We then compare\nthem with the results from kinetic DSA simulations for shock waves that are\nexpected to form in intracluster media and cluster outskirts in the course of\nlarge-scale structure formation. We show that the test-particle solutions\nprovide a good approximation for the pressure and spectrum of CRs accelerated\nat these weak shocks. Since the injection is extremely inefficient at weak\nshocks, the pre-existing CR population dominates over the injected population.\nIf the pressure due to pre-existing CR protons is about 5 % of the gas thermal\npressure in the upstream flow, the downstream CR pressure can absorb typically\na few to 10 % of the shock ram pressure at shocks with the Mach number $M \\la\n3$. Yet, the re-acceleration of CR electrons can result in a substantial\nsynchrotron emission behind the shock. The enhancement in synchrotron radiation\nacross the shock is estimated to be about a few to several for $M \\sim 1.5$ and\n$10^2-10^3$ for $M \\sim 3$, depending on the detail model parameters. The\nimplication of our findings for observed bright radio relics is discussed.", + "authors": "Hyesung Kang, Dongsu Ryu", + "published": "2011-02-13", + "updated": "2011-02-13", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "INTRODUCTION Cosmological shock waves result from supersonic \ufb02ow motions induced by hierarchical clustering during the large-scale structure formation in the Universe (Miniati et al. 2000; Ryu et al. 2003). According to studies based on cosmological hydrodynamic simulations, the shocks formed by merger of subclumps, infall of matter and internal \ufb02ow motion in intracluster media (ICMs) and cluster outskirts are relatively weak with Mach number M \u2272a few (Ryu et al. 2003; Pfrommer et al. 2006; Kang et al. 2007; Skillman et al. 2008; Hoeft et al. 2008; Vazza et al. 2009). Indeed, observations of X-ray shocks (e.g., Markevitch et al. 2002, 2005; Markevitch & Vikhlinin 2007) and radio relics (e.g., Bagchi et al. 2006; Finoguenov et al. 2010; van Weeren et al. 2010) indicate that the estimated Mach number of observed shocks in cluster environments is consistent with such theoretical predictions. Suprathermal particles are known to be produced as an inevitable consequence of the formation of collisionless shocks in tenuous plasmas and they can be further accelerated to become cosmic rays (CRs) through interactions with resonantly scattering Alfv\u00b4 en waves in the converging \ufb02ow across a shock (Bell 1978; Drury 1983; Malkov & Drury 2001). 
Detailed nonlinear treatments of di\ufb00usive shock acceleration (DSA) have predicted that at strong shocks a signi\ufb01cant fraction of the shock kinetic energy is transferred to CRs, inducing highly nonlinear back-reactions from CRs to the underlying \ufb02ow (e.g., Amato & Blasi 2006; Vladimirov et al. 2006; Kang & Jones 2007). Multi-band observations of nonthermal radio to \u03b3-ray emissions have con\ufb01rmed the acceleration of CR electrons and protons up to 100 TeV at young supernova remnants (e.g. Parizot et al. 2006; Reynolds 2008; Abdo et al. 2010). The presence of nonthermal particles, especially electrons, in clusters of galaxies, has been inferred from observations of synchrotron emission from radio halos and relics (see, e.g., Carilli & Taylor 2002; Govoni & Feretti 2004, for review). Since the matter in ICMs and cluster outskirts should have gone \ufb01rst through accretion shocks of high Mach number around nonlinear structures and then through weaker shocks due to mergers and \ufb02ow motion (Ryu et al. 2003; Kang et al. 2007), DSA should be responsible for at least a part of the CR production. Nonthermal particles can be also produced via turbulent acceleration (see, e.g., Cassano & Brunetti 2005; Brunetti & Lazarian 2007). Recent Fermi observations of \u03b3-ray emission from galaxy clusters, however, limit that the pressure due to CR protons cannot exceed \u223c10 % of the gas thermal pressure (Abdo et al. 2010; Donnert et al. 2010). At weak shocks with M \u2272a few, DSA is known to be rather ine\ufb03cient and the CR pressure remains dynamically insigni\ufb01cant, partly because the injection from thermal to nonthermal particles is ine\ufb03cient (e.g., Kang et al. 2002). In such test-particle regime, the downstream CR spectrum takes the power-law form of f2(p) \u221dp\u2212q, where the spectral slope, q, depends on the velocity jump across the shock (Drury 1983). Recently, Kang & Ryu \f\u2013 3 \u2013 (2010) suggested analytic, time-dependent solutions for the test-particle CR spectrum, using results from DSA simulations in which particles are injected via thermal leakage process and accelerated to ever increasing maximum momentum, pmax(t). They found that at weak shocks expected to form in ICMs and cluster outskirts, indeed, much less than \u223c10\u22123 of particles are injected into CRs and much less than \u223c1% of the shock ram pressure is converted into the downstream pressure of CR protons, so the particle acceleration is virtually negligible. However, the recent discovery of very bright radio relics associated with weak shocks of M \u2272a few (e.g., Bagchi et al. 2006; Finoguenov et al. 2010; van Weeren et al. 2010) suggests that, contrary to the expectation, DSA should operate at weak shocks in cluster environments. One way to explain this is to presume that the relics form in media with pre-existing CRs which were produced by DSA at previous shocks and/or by turbulent acceleration. The existence of pre-exiting CRs alleviates the problem of ine\ufb03cient injection at weak shocks. In this paper, we examine the DSA at weak cosmological shocks in the presence of pre-existing CRs. First, the properties of weak shocks in ICMs and cluster outskirts are brie\ufb02y reviewed in Section 2. Analytic, time-dependent solutions for the acceleration of the pre-existing and freshly injected populations of CRs in the test-particle regime is described in Section 3, while the numerical solutions from kinetic DSA simulations are presented in Section 4. 
The synchrotron radiation from CR electrons accelerated at these shocks is discussed in Section 5. Finally, a brief summary is given in Section 6. 2. SHOCK WAVES IN ICMS AND CLUSTER OUTSKIRTS Shock waves in the large-scale structure of the universe have been studied in details using various hydrodynamic simulations for the cold dark matter cosmology with cosmological constant (\u039bCDM) (Ryu et al. 2003; Pfrommer et al. 2006; Kang et al. 2007; Skillman et al. 2008; Hoeft et al. 2008; Vazza et al. 2009). It was found that shocks with Mach number typically up to M \u223c103 and speed up to us \u223ca few \u00d71000 km s\u22121 at the present universe (z = 0). In ICMs and cluster outskirts, however, shocks are expected to have lower Mach number, because they form in the hot gas of kT \u2273keV. To examine the characteristics of shocks in ICMs and cluster outskirts, we analyze the shocks with the preshock gas temperature of T1 > 107 K. The cosmic web is \ufb01lled with ionized plasmas, the intergalactic medium (Cen & Ostriker 1999; Kang et al. 2005). The hot gas with T > 107 K is found mostly in ICMs and cluster outskirts, and the Warm Hot Intergalactic Medium (WHIM) with 105 K < T < 107 K is distributed mostly in \ufb01laments. \f\u2013 4 \u2013 The di\ufb00use gas with T < 105 K resides mainly in sheetlike structures and voids. The shocks were found in a simulation of the WMAP1-normalized \u039bCDM cosmology employed the following parameters: \u2126b = 0.048, \u2126m = 0.31, \u2126\u039b = 0.69, h \u2261H0/(100 km/s/Mpc) = 0.69, \u03c38 = 0.89, and n = 0.97. The simulation was performed using a PM/Eulerian hydrodynamic cosmology code (Ryu et al. 1993). Detailed descriptions for numerical set-up and input physical ingredients can be found in Cen & Ostriker (2006). The procedure to identify shocks was described in details in Ryu et al. (2003). Figure 1 shows the surface area of shocks with T1 > 107 K per Mach number interval in the entire simulation volume, normalized by the volume. Here, S is given in units of (h\u22121Mpc)\u22121. The quantity S provides a measure of shock frequency or the inverse of the mean comoving distance between shock surfaces. To avoid confusion from complex \ufb02ow patterns and shock surface topologies associated with very weak shocks, only those portions of shock surfaces with M \u22651.5 are shown. We also calculated the incident shock kinetic energy \ufb02ux, F\u03c6 = (1/2)\u03c11u3 s, where \u03c11 is the preshock gas density, and then the kinetic energy \ufb02ux through shock surfaces per Mach number interval, normalized by the simulation volume, dF\u03c6(M)/dM. Figure 1 shows dF\u03c6(M)/dM, too. As expected, the Mach number of the shocks formed in ICMs and cluster outskirts is small, typically M \u22723. The frequency increases to weakest possible shocks with M \u223c1. The kinetic energy \ufb02ux through shock surfaces is larger for weaker shocks; that is, weaker shocks process more shock energy, con\ufb01rming the energetic dominance of weak shocks in cluster environments. 3. 
ANALYTIC TEST-PARTICLE SPECTRUM In the kinetic DSA approach, the following di\ufb00usion-convection equation for the pitchangle-averaged distribution function of CRs, f(x, p, t), is solved along with suitably modi\ufb01ed gasdynamic equations: \u2202f \u2202t + (u + uw)\u2202f \u2202x = p 3 \u2202(u + uw) \u2202x \u2202f \u2202p + \u2202 \u2202x \u0014 \u03ba(x, p)\u2202f \u2202x \u0015 , (1) where \u03ba(x, p) is the spatial di\ufb00usion coe\ufb03cient and uw is the drift speed of the local Alfv\u00b4 enic wave turbulence with respect to the plasma (Skilling 1975). The scattering by Alfv\u00b4 en waves tends to isotropize the CR distribution in the wave frame, which may drift upstream at the Alfv\u00b4 en speed, vA, with respect to the bulk plasma. So the wave speed is set to be uw = \u2212vA upstream of shock, while uw = 0 downstream. In the test-particle regime where the feedback due to the CR pressure is negligible, the downstream CR distribution can be described with a power-law spectrum, f2(p) \u221dp\u2212q, and \f\u2013 5 \u2013 the slope is given by q = 3(u1 \u2212vA) u1 \u2212vA \u2212u2 = 3\u03c3(1 \u2212M\u22121 A ) (\u03c3 \u22121 \u2212\u03c3M\u22121 A ), (2) where u1 and u2 are the upstream and downstream \ufb02ow speeds, respectively, in the shock rest frame, \u03c3 = u1/u2 = \u03c12/\u03c11 is the shock compression ratio, and MA = u1/vA is the upstream Alfv\u00b4 en Mach number with vA = B1/\u221a4\u03c0\u03c11 (Drury 1983; Kang & Ryu 2010). The test-particle power-law slope q can be calculated as a function of shock Mach number M with \u03c3 = [(\u03b3g + 1)M2]/[(\u03b3g \u22121)M2 + 2], which becomes 4M2/(M2 + 3) for a gas adiabatic index \u03b3g = 5/3, and MA = M/\u03b4. Here, \u03b4 \u2261vA/cs is the Alfv\u00b4 en speed parameter, where cs is the upstream sound speed. The maximum momentum of CR protons achieved by the shock age of t can be estimated as pmax(t) \u2248mpc \u0014(1 \u2212M\u22121 A )(\u03c3 \u22121 \u2212\u03c3M\u22121 A ) 3\u03c3(2 \u2212M\u22121 A ) \u0015 u2 s \u03ba\u2217t, (3) where us = u1 is the shock speed (Drury 1983; Kang & Ryu 2010). Here, a Bohm-type di\ufb00usion coe\ufb03cient, \u03ba(p) = \u03ba\u2217 \u0012 p mpc \u0013 \u0012\u03c10 \u03c1 \u0013 , (4) is adopted, where \u03ba\u2217= mpc3/(3eB0) = 3.13 \u00d7 1022(B0/1 \u00b5G)\u22121cm2s\u22121, B0 and \u03c10 are magnetic \ufb01eld strength and the gas density far upstream. In CR-modi\ufb01ed shocks where CRs are dynamically non-negligible, in general, the upstream \ufb02ow is decelerated in the precursor before it enters the gas subshock. So we use the subscripts \u201c0\u201d, \u201c1\u201d, and \u201c2\u201d to denote the conditions far upstream, immediate upstream and downstream of shock, respectively. Of course, in the test-particle limit, the distinction between far and immediate upstream quantities disappears, e.g., \u03c10 = \u03c11. In the limit of large M (\u03c3 \u22484) and large MA (\u03b4 \u22480), the maximum energy of CR protons can be approximated by Emax,p \u2248u2 st 8\u03ba\u2217mpc2 \u22481010 GeV \u0010 us 103kms\u22121 \u00112 \u0012 t 109yrs \u0013 \u0012 B0 1 \u00b5G \u0013 . (5) The CR proton spectrum limited by the shock age is expected to have a cuto\ufb00at around \u223cpmax(t) (see Section 3.3 for further discussion). 3.1. Pre-existing Population As noted in Introduction, it seems natural to assume that ICMs and cluster outskirts contain pre-existing CRs. 
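Before the pre-existing population is specified, equations (2)-(4) can be evaluated directly. The sketch below does so for a T0 = 1e7 K preshock gas (sound speed of about 474 km/s, as in the text) and the Bohm coefficient kappa* = 3.13e22 (B0/microG)^-1 cm^2/s; the particular Mach number, shock age, and delta values in the example are illustrative.

```python
import numpy as np

def compression(M, gamma_g=5.0/3.0):
    """Compression ratio sigma(M) of an adiabatic gas shock."""
    return (gamma_g + 1.0) * M**2 / ((gamma_g - 1.0) * M**2 + 2.0)

def tp_slope(M, delta=0.0):
    """Test-particle slope q of Eq. (2), with Alfvenic drift delta = v_A/c_s."""
    sigma = compression(M)
    MA = M / delta if delta > 0.0 else np.inf
    return 3.0 * sigma * (1.0 - 1.0 / MA) / (sigma - 1.0 - sigma / MA)

def p_max_over_mpc(M, t_yr, us_kms, B0_muG=1.0, delta=0.0):
    """Maximum proton momentum of Eq. (3), in units of m_p c (Bohm diffusion)."""
    sigma = compression(M)
    MA = M / delta if delta > 0.0 else np.inf
    kappa_star = 3.13e22 / B0_muG              # cm^2/s, = m_p c^3 / (3 e B0)
    us, t = us_kms * 1.0e5, t_yr * 3.156e7     # cgs
    geom = (1.0 - 1.0/MA) * (sigma - 1.0 - sigma/MA) / (3.0 * sigma * (2.0 - 1.0/MA))
    return geom * us**2 * t / kappa_star

# M = 3 shock in T0 = 1e7 K gas (c_s ~ 474 km/s): Alfvenic drift softens the slope.
for delta in (0.0, 0.42):
    print(delta, tp_slope(3.0, delta))         # 4.5 without drift, ~4.9 with
print(p_max_over_mpc(3.0, t_yr=1.0e8, us_kms=3.0 * 474.0))
```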
But their nature is not well constrained, except that Pc \u22720.1Pg, \f\u2013 6 \u2013 i.e., the pressure of CR protons is less that \u223c10 % of the gas thermal pressure (e.g., Abdo et al. 2010; Donnert et al. 2010). With pre-existing CRs of spectrum f0(p) upstream of shock, the steady-state, test-particle solution of Equation (1) for the downstream CR distribution can be written as f2(p) = qp\u2212q Z p pinj p\u2032q\u22121f0(p\u2032)dp\u2032 + finj \u0012 p pinj \u0013\u2212q , (6) where q is the test-particle power-law slope given in Equation (2) (Drury 1983). Here, pinj is the lowest momentum boundary above which particles can cross the shock, i.e., the injection momentum (see the next subsection). By this de\ufb01nition of pinj, the CR distribution function, f0 = 0 and f2 = 0 for p < pinj. The \ufb01rst term in the right-hand-side of Equation (6) represents the re-accelerated population of pre-existing CRs, while the second term represents the population of CRs freshly injected at the shock and will be discussed in the next subsection. We adopt a power-law form, f0(p) = fpre \u00b7 (p/pinj)\u2212s, with the slope s = 4 \u22125, as the model spectrum for pre-existing CR protons. If pre-existing CRs were generated at previous shocks, the slope of s = 4 \u22125 is achieved for M \u2265 \u221a 5 with \u03b4 = 0 (see Equation (2)). On the other hand, if they are mainly the outcome of turbulent acceleration, the slope should be close to s \u223c4 (see, e.g., Chandran 2005). Then, the spectrum of re-accelerated CRs is obtained by direct integration: f reac 2 (p) = \u001a [q/(q \u2212s)] [1 \u2212(p/pinj)\u2212q+s] f0(p), if q \u0338= s q ln(p/pinj)f0(p), if q = s. (7) If q \u0338= s, for p \u226bpinj, f reac 2 (p) = q |q \u2212s|fpre \u0012 p pinj \u0013\u2212r , (8) where r = min(q, s). That is, if the spectral slope of pre-existing CRs is softer than the test-particle slope (s > q), the re-accelerated CR spectrum gets \ufb02attened to p\u2212q by DSA; in the opposite case (s < q), the re-accelerated CR spectrum is simply ampli\ufb01ed by the factor of q/(q \u2212s) and retains the same slope as the slope of pre-existing CRs. Figure 2 shows the re-accelerated CR distribution given in Equation (7) for a M = 3 shock in the presence of the pre-existing power-law CR spectrum with the slope s = 4 and 4.5 (right panel) and s = 5 (left panel). The Alfv\u00b4 enic drift is ignored (\u03b4 = 0), so the test-particle slope is q = 4.5. Here, we adopted the following parameters: the upstream gas temperature T0 = 107 K and the injection parameter \u01ebB = 0.25, resulting in pinj = 8.0 \u00d7 10\u22123mpc (see the next subsection for details of our injection model). The \ufb01gure illustrates that for p \u226bpinj, the CR ampli\ufb01cation factor, f2(p)/f0(p), approaches a constant, q/(q \u2212s) = 9, in the case of s = 4, increases as ln(p/pinj) in the case of \f\u2013 7 \u2013 q = s = 4.5, and scales as (p/pinj)s\u2212q in the case of s = 5. So, for instance, the factor becomes f2/f0 = 32 and 310 at p/mpc = 10 for s = 4.5 and 5, respectively. We point that these values of the CR ampli\ufb01cation factor are substantially larger than those expected for the adiabatic compression across the shock. With pre-existing CRs of f0 \u221dp\u2212s, the ampli\ufb01cation factor due to the adiabatic compression is given by f adb 2 /f0 = \u03c3s/3 (9) in the test-particle regime. 
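The amplification factors quoted above follow directly from equation (7). A minimal sketch, using p_inj = 8.0e-3 m_p c and q = 4.5 from the text, reproduces the factors of roughly 9, 32, and 310 at p = 10 m_p c and compares them with the adiabatic factor sigma^(s/3) of equation (9).

```python
import numpy as np

def f_reacc_over_f0(p, q, s, p_inj=8.0e-3):
    """f2_reacc(p)/f0(p) from Eq. (7): re-acceleration of a pre-existing power
    law f0 ~ p^-s at a shock with test-particle slope q."""
    x = p / p_inj
    if np.isclose(q, s):
        return q * np.log(x)
    return (q / (q - s)) * (1.0 - x**(-(q - s)))

q = 4.5                               # M = 3, no Alfvenic drift
for s in (4.0, 4.5, 5.0):
    amp = f_reacc_over_f0(10.0, q, s)           # amplification at p = 10 m_p c
    adiabatic = 3.0**(s / 3.0)                  # sigma^(s/3) for sigma = 3 (Eq. 9)
    print(s, round(float(amp), 1), round(adiabatic, 1))
# -> roughly 9, 32, 310 for s = 4, 4.5, 5, versus only 4.3, 5.2, 6.2 for pure
#    adiabatic compression, as quoted in the text.
```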
So the adiabatic ampli\ufb01cation factor is f adb 2 /f0 = 4.3, 5.2, and 6.2 and for s = 4, 4.5 and 5, respectively, at a Mach 3 shock. Note that the adiabatic compression does not change the slope of the CR spectrum. The left panel of Figure 2 also shows the time evolution of the CR distribution at the shock location, fs(p, t), from a DSA simulation for the same set of parameters (see Section 4 for details of DSA simulations). The CR injection was turned o\ufb00for this particular simulation in order to compare the analytic and numerical solutions only for pre-existing CRs. This demonstrates that the time-dependent solution asymptotes to the steady-state solution in Equation (7). 3.2. Injected Population Because complex plasma interactions among CRs, waves, and the underlying gas \ufb02ow are not fully understood yet, it is not possible to make a precise quantitative prediction for the injection process from \ufb01rst principles (e.g., Malkov & Drury 2001). Here, we adopt a phenomenological injection model that can emulate the thermal leakage process, through which particles above a certain injection momentum pinj cross the shock and get injected to the CR population (Kang et al. 2002; Kang & Ryu 2010). Then, the CR distribution function at pinj is anchored to the downstream Maxwellian distribution as finj = f(pinj) = n2 \u03c01.5 p\u22123 th exp \u0000\u2212Q2 inj \u0001 , (10) where n2 is the downstream proton number density. Here, pinj and Qinj are de\ufb01ned as Qinj(M) \u2261pinj pth \u22481.17mpu2 pth \u0012 1 + 1.07 \u01ebB \u0013 \u0012M 3 \u00130.1 , (11) where pth = p 2mpkBT2 is the thermal peak momentum of the downstream gas with temperature T2 and kB is the Boltzmann constant. We note that the functional form of Qinj was adopted to represent an \u201ce\ufb00ective\u201d injection momentum, since particles in the suprathermal \f\u2013 8 \u2013 tail can cross the shock with a smoothly-varying probability distribution (see Kang et al. 2002). One free parameter that controls the leakage process is the injection parameter, \u01ebB = B0/B\u22a5, which is the ratio of the general magnetic \ufb01eld along the shock normal, B0, to the amplitude of the downstream, magnetohydrodynamic (MHD) wave turbulence, B\u22a5. Although plasma hybrid simulations and theories both suggested that 0.25 \u2272\u01ebB \u22720.35 (Malkov & V\u00a8 olk 1998), the physical range of this parameter remains to be rather uncertain due to lack of full understanding of relevant plasma interactions. The second term in Equation (6) is \ufb01xed by q, pinj, and finj. The fraction of particles injected into the CR population can be estimated analytically as well: \u03be \u2261nCR n2 = 4 \u221a\u03c0Q3 inj exp \u0000\u2212Q2 inj \u0001 1 q \u22123, (12) which is \ufb01xed only by Qinj and q. The injection fraction depends strongly on \u01ebB (through Qinj) for weak shocks with M \u22725 (see also Kang & Ryu 2010). For example, it varies from 5 \u00d7 10\u22125 to 10\u22123 for \u01ebB = 0.25 \u22120.3 for shocks with M = 3. 3.3. Cosmic-Ray Spectrum for Weak Shocks Kang & Ryu (2010) demonstrated that the time-dependent, test-particle solutions of the downstream CR distribution can be represented by the steady-state, test-particle solutions with an exponential cuto\ufb00(Caprioli et al. 2009), if the cuto\ufb00momentum is set as p\u2217\u2248 1.2 pmax(t) with pmax(t) in Equation (3). Here, we suggest that the same cuto\ufb00would be applied to the spectrum of re-accelerated CRs. 
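Returning briefly to the injection model, equations (10)-(12) can be evaluated once the downstream state is fixed by the standard Rankine-Hugoniot relations, which are spelled out below as assumptions (with mean molecular weight mu ~ 0.6 so that c_s is about 474 km/s at T0 = 1e7 K). For M = 3 the sketch recovers the quoted range, xi ~ 5e-5 to 1e-3 for eps_B = 0.25-0.3.

```python
import numpy as np

K_B, M_P = 1.380649e-16, 1.672622e-24         # Boltzmann constant, proton mass (cgs)
GAMMA = 5.0 / 3.0

def xi_injection(M, eps_B, T0=1.0e7, delta=0.0):
    """Thermal-leakage injection fraction xi = n_CR/n_2 from Eqs. (11)-(12)."""
    cs = np.sqrt(GAMMA * K_B * T0 / (0.6 * M_P))             # mu ~ 0.6 -> cs ~ 474 km/s
    us = M * cs
    sigma = (GAMMA + 1.0) * M**2 / ((GAMMA - 1.0) * M**2 + 2.0)
    u2 = us / sigma                                          # downstream flow speed (shock frame)
    T2 = T0 * (2.0 * GAMMA * M**2 - (GAMMA - 1.0)) * ((GAMMA - 1.0) * M**2 + 2.0) \
            / ((GAMMA + 1.0)**2 * M**2)                      # Rankine-Hugoniot temperature jump
    p_th = np.sqrt(2.0 * M_P * K_B * T2)                     # downstream thermal peak momentum
    Q = 1.17 * (M_P * u2 / p_th) * (1.0 + 1.07 / eps_B) * (M / 3.0)**0.1   # Eq. (11)
    MA = M / delta if delta > 0.0 else np.inf
    q = 3.0 * sigma * (1.0 - 1.0 / MA) / (sigma - 1.0 - sigma / MA)        # Eq. (2)
    return 4.0 / np.sqrt(np.pi) * Q**3 * np.exp(-Q**2) / (q - 3.0)         # Eq. (12)

for eps_B in (0.25, 0.30):
    print(eps_B, xi_injection(3.0, eps_B))    # ~5e-5 and ~1e-3, the range quoted for M = 3
```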
Then, the CR distribution at the shock location, xs, originated from both the pre-existing and freshly injected populations can be approximated by fs(p, t) \u2261f2(xs, p, t) \u2248 \" f reac 2 (p) + finj \u00b7 \u0012 p pinj \u0013\u2212q# \u00b7 exp [\u2212qC(z)] , (13) where f reac 2 (p) is given in Equation (7) and z = p/p\u2217. The function C(z) is de\ufb01ned as C(z) = Z z zinj dz\u2032 z\u2032 1 exp(1/z\u2032) \u22121, (14) where zinj = pinj/p\u2217(Kang & Ryu 2010). Of course, for p > p\u2217, the acceleration is limited by the shock age and so pre-existing CRs will be simply advected downstream, resulting in fs(p) \u2248f0(p). These particles, however, do not make any signi\ufb01cant contribution to the downstream CR pressure, if the pre-existing power-law spectrum has the slope s > 4 (see below). \f\u2013 9 \u2013 4. COMPARISON WITH NUMERICAL SOLUTIONS 4.1. Set-up for DSA Simulations We carried out kinetic DSA simulations in order to test the time-dependent features of the test-particle solution in Equation (13). Also for shocks with typically M \u2273a few, the evolution of CR-modi\ufb01ed shocks should be followed by DSA simulations, because the nonlinear feedback of CRs becomes important (Kang & Ryu 2010). We used the CRASH (Cosmic-Ray Acceleration SHock) code for quasi-parallel shock, in which the di\ufb00usion-convection equation (1) is solved along with the gasdynamic equation modi\ufb01ed for the e\ufb00ects of the CR pressure (Kang et al. 2002). We considered shocks with a wide range of Mach number, M = 1.5\u22125, propagating into typical ICMs and cluster outskirts of T0 = 107 K; the shock speed is us = M\u00b7474 km s\u22121. The di\ufb00usion in Equation (4) was used. In the code units, the di\ufb00usion coe\ufb03cient is normalized with \u03bao = 103\u03ba\u2217for numerical simulations. Then, the length and time scales are given as lo = \u03bao/us and to = \u03bao/u2 s, respectively. Since the \ufb02ow structure and Pc pro\ufb01le evolve self-similarly, a speci\ufb01c physical value of \u03bao matters only in the determination of pmax at a given simulation time. For instance, pmax/mpc \u2248103 is achieved by the termination time of t/to = 10 in our simulations. Simulations start with purely gasdynamic shocks initially at rest at xs = 0, and the gas adiabatic index is \u03b3g = 5/3. As for the pre-existing CRs, we adopted f0(p) = fpre(p/pinj)\u2212s for their spectrum. The amplitude, fpre, is set by the ratio of the upstream CR to gas pressure, R \u2261Pc,0/Pg,0, and we consider R = 0.01 \u22120.1. We note that with the same value of R, the amplitude fpre is larger for softer pre-existing spectrum, i.e., larger s. To examine the e\ufb00ects of Alfv\u00b4 enic drift, in addition to the models with \u03b4 = 0, we consider \u03b4 = 0.42 as a \ufb01ducial value, which corresponds to EB \u223c0.1Eg, i.e., the magnetic \ufb01eld energy density of \u223c10 % of the gas thermal energy density. Finally, we consider \u01ebB = 0.25 \u22120.3 for the injection parameter. 4.2. CR Proton Spectrum and CR Pressure Figure 3 shows the CR pressure pro\ufb01le and the CR distribution at the shock location, fs, from DSA simulations for a Mach 3 shock. In the cases with pre-existing CRs in (b) and (c), the steady-state solution without injection given in Equation (7) (dot-dashed line) is also shown for comparison. As CRs are accelerated to ever high energies (pmax \u221dt), the scale length of the CR pressure increases linearly with time, ld(pmax) \u221dust (Kang et al. 2009). 
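As a brief aside on the cutoff factor in equations (13)-(14): C(z) can be evaluated by simple quadrature, and its limits make the behavior transparent; the integrand is exponentially small for z << 1, leaving the power law untouched, and tends to unity for z >> 1, which gives an approximately exponential cutoff above p*. The values of q and z_inj below are illustrative.

```python
import numpy as np

def C_of_z(z, z_inj, n=4000):
    """C(z) = int_{z_inj}^{z} dz'/z' * 1/(exp(1/z') - 1)   (Eq. 14)."""
    zp = np.logspace(np.log10(z_inj), np.log10(z), n)
    x = np.minimum(1.0 / zp, 700.0)            # cap the exponent to avoid overflow
    integrand = 1.0 / (zp * (np.exp(x) - 1.0))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zp)))

# For z = p/p* << 1 the integrand is negligible, so exp(-q C) ~ 1; for z >> 1
# the integrand -> 1, C(z) grows ~ z, and exp(-q C) acts as an exponential cutoff.
q, z_inj = 4.5, 1.0e-5
for z in (0.01, 0.3, 1.0, 3.0):
    print(z, np.exp(-q * C_of_z(z, z_inj)))
```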
Left panels demonstrate that the CR pressure pro\ufb01le evolves in a self-similar fashion, \f\u2013 10 \u2013 depending approximately only on the similarity variable, x/(ust). Right panels indicate that fs can be well approximated with the form in Equation (13), i.e., the acceleration of pre-existing and injected CRs along with an exponential cuto\ufb00at pmax(t). Comparing the cases in (a) and (b), we see that with the same injection parameter, the presence of pre-existing CRs results in higher downstream CR pressure, and that the re-accelerated pre-existing population dominates over the injected population. The presence of pre-existing CRs acts e\ufb00ectively as a higher injection rate than the thermal leakage alone, leading to the greatly enhanced CR acceleration e\ufb03ciency. For the case with \u01ebB = 0.3 in (c), the injection rate is much higher than that of the case with \u01ebB = 0.25, yet the injected population makes a non-negligible contribution only near pinj. In Figure 4, we compare the spectrum of re-accelerated CRs from the steady-state solutions without injection (left panels) and the CR spectrum at the shock location from the time-dependent solutions of DSA simulations at t/to = 10 (right panels), in order to demonstrate the relative importance of the acceleration of the pre-existing and the injected populations. Di\ufb00erent values of M and s are considered, but R = 0.05, \u03b4 = 0.42, and \u01ebB = 0.25 are \ufb01xed. As noted before, with the same R, the amplitude fpre of the preexisting CR spectrum is larger for larger s, so the re-acceleration of pre-existing population is relatively more important. The \ufb01gure indicates that for most cases considered, the reaccelerated pre-existing population dominates over the injected population for the considered range of Mach number. Only for the cases with s = 4 and M \u22733, the freshly injected population makes a noticeable contribution. Figure 5 shows the downstream CR pressure, Pc,2, relative to the shock ram pressure, \u03c10u2 s, and to the downstream gas thermal pressure, Pg,2, as a function of shock Mach number M for di\ufb00erent values of R, s, and \u03b4. Again, \u01ebB = 0.25 in all the cases. As shown in the top panels, without pre-existing CRs, both Pc,2/\u03c10u2 s and Pc,2/Pg,2 steeply increase with M, because both the injection and acceleration e\ufb03ciencies depend strongly on M. For shocks with M \u22735, Pc,2/(\u03c10u2 s) \u22730.1 and the nonlinear feedback begins to be noticeable. The feedback reduces the CR injection and saturates the CR acceleration, so Pc,2 from DSA simulations becomes smaller than the analytic estimates in the test-particle limit (see also Kang & Ryu 2010). Also the top panels compare the models with \u03b4 = 0 and \u03b4 = 0.42, demonstrating that the Alfv\u00b4 enic drift softens the accelerated spectrum and reduces the CR pressure. In (b) panels, the cases with di\ufb00erent upstream CR pressure fractions are compared: Pc,2 increases almost linearly with R at shocks with M \u22723 in the test-particle regime, while the CR acceleration begins to show the saturation e\ufb00ect for M \u22734. With pre-existing CRs, both Pc,2/\u03c10u2 s and Pc,2/Pg,2 are substantially larger, compared to the case with R = 0, \f\u2013 11 \u2013 especially for M \u22723, con\ufb01rming the dominance of the re-accelerated pre-existing population over the injected population at weak shocks. 
In (c) panels, the cases with di\ufb00erent pre-existing slopes are compared; with softer spectrum (larger s), the amplitude fpre is larger and the CR acceleration is more e\ufb03cient, as described above with Figure 4. In (d) panels, the same cases as in (c) panels except \u03b4 = 0 are shown, demonstrating again the e\ufb00ects of Alfv\u00b4 enic drift. These results indicate that at shocks with M \u22723 in ICMs and cluster outskirts, the downstream CR pressure is typically a few to 10 % of either the shock ram pressure or the downstream gas thermal pressure. Even in the cases where the pre-existing CR population takes up to 10 % of the gas thermal pressure in the upstream \ufb02ow, Pc,2/Pg,2 \u22720.1 in the downstream \ufb02ow. This is consistent with the Fermi upper limit (Abdo et al. 2010; Donnert et al. 2010). 5. CR ELECTRONS AND SYNCHROTRON RADIATION Since DSA operates on relativistic particles of the same rigidity (R = pc/Ze) in the same way, both electrons and protons are expected to be accelerated at shocks. However, electrons lose energy, mainly by synchrotron emission and Inverse Compton (IC) scattering, and the injection of postshock thermal electrons is believed to be much less e\ufb03cient, compared to protons. The maximum energy of CR electrons accelerated at shocks can be estimated by the condition that the momentum gain per cycle by DSA is equal to the synchrotron/IC loss per cycle, i.e., \u27e8\u2206p\u27e9DSA + \u27e8\u2206p\u27e9rad=0 (see Webb et al. 1984; Zirakashvili & Aharonian 2007). With the assumed Bohm-type di\ufb00usion coe\ufb03cient, the electron spectrum has a cuto\ufb00at pcut \u2248 m2 ec2 p 4e3/27 us \u221aq s B0 B2 0,e\ufb00+ B2 2,e\ufb00 (in cgs units) \u2248 340TeV c \u0012 us 103 km s\u22121\u22121 \u0013 s (B0/1 \u00b5G) q [(B0,e\ufb00/1 \u00b5G)2 + (B2,e\ufb00/1 \u00b5G)2], (15) where Be\ufb00= (B2 + B2 CMB)1/2 with BCMB = 3.24 \u00d7 10\u22126 G is the e\ufb00ective magnetic \ufb01eld strength for synchrotron and IC coolings upstream and downstream of shock, and \u03b4 = 0 was assumed. Note that the electron cuto\ufb00energy is a time-asymptotic quantity that depends only on the shock speed and the magnetic \ufb01eld strength, independent of the shock age. For a Mach 3 shock and B0 = 1 \u00b5G, for example, the shock jump condition gives \u03c3 = 3, \f\u2013 12 \u2013 q = 4.5 (with \u03b4 = 0) and B2 = 3\u00b5G (assuming B \u221d\u03c1), resulting in the cuto\ufb00Lorentz factor, \u03b3e,cut = pcut/mec \u22485.6 \u00d7 107 (us/1000 km s\u22121). Thus, we may model the downstream electron spectrum as fe,2(p) \u2248Ke/p fp,2(p) exp \u0012 \u2212p2 p2 cut \u0013 , (16) where fp,2(p) is the downstream proton spectrum (Zirakashvili & Aharonian 2007). The electron-to-proton number ratio, Ke/p, is not yet constrained precisely by plasma physics (see, e.g., Reynolds 2008). Although Ke/p \u223c10\u22122 is inferred for the Galactic CRs (Schlickeiser 2002), a much smaller value, Ke/p \u227210\u22124, is preferred for young supernova remnants (Morlino et al. 2009). However, Ke/p for the pre-existing population in ICMs and cluster outskirts could be quite di\ufb00erent from these estimates. Next, from the electron spectrum in Equation(16), we consider the synchrotron emission. 
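Before the emission itself, the cutoff Lorentz factor of equation (15) can be evaluated in the numerical form given above (the 340 TeV/c normalization, with a CMB-equivalent field of 3.24 microG); the sketch recovers gamma_e,cut ~ 5.6e7 for the Mach 3 example with B0 = 1 microG, B2 = 3 microG, q = 4.5, and u_s = 1000 km/s.

```python
import numpy as np

MEC2_TEV = 0.511e-6      # electron rest energy in TeV
B_CMB = 3.24             # CMB-equivalent field in microgauss (z ~ 0)

def gamma_cut(us_kms, B0_muG, B2_muG, q):
    """Cutoff Lorentz factor of Eq. (15): DSA gains balanced against
    synchrotron + inverse-Compton losses, Bohm diffusion assumed."""
    B0_eff2 = B0_muG**2 + B_CMB**2
    B2_eff2 = B2_muG**2 + B_CMB**2
    p_cut_TeV = 340.0 * (us_kms / 1.0e3) * np.sqrt(B0_muG / (q * (B0_eff2 + B2_eff2)))
    return p_cut_TeV / MEC2_TEV

# Mach 3 example from the text: B0 = 1 muG, B2 = 3 muG, q = 4.5, us = 1000 km/s
print(gamma_cut(1000.0, 1.0, 3.0, 4.5))    # ~5.6e7, as quoted
```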
The averaged rate of synchrotron emission at photon frequency \u03bd from a single relativistic electron with Lorentz factor \u03b3e can be written as \u27e8P\u03bd(\u03b3e)\u27e9= 4 3c\u03c3T \u03b22UB\u03b32 e\u03c6\u03bd(\u03b3e), (17) where \u03b2 is the particle speed in units of c, \u03c3T is the Thomson cross section, and UB is the magnetic energy density (see, e.g., Shu 1991). The frequency distribution function, \u03c6\u03bd(\u03b3e), which satis\ufb01es the normalization R \u03c6\u03bd(\u03b3)d\u03bd = 1, peaks at \u03bdpeak \u2248\u03b32 e\u03bdL = 280 \u0012 B 1 \u00b5G \u0013 \u0010 \u03b3e 104 \u00112 MHz, (18) where \u03bdL = eB/mec is the Larmor frequency. If we approximate that the synchrotron radiation is emitted mostly at \u03bd = \u03bdpeak (i.e., \u03c6\u03bd(\u03b3) is replaced by a delta function centered at \u03bd = \u03bdpeak), the synchrotron volume emissivity from the CR electron number density, ne(\u03b3e)d\u03b3e = fe(p)p2dp, becomes J(\u03bd) \u22482 3c\u03c3T\u03b22UB \u03b3e \u03bdL ne(\u03b3e), (19) with \u03b3e corresponding to the given \u03bdpeak = \u03bd in Equation (18). So the ratio of the downstream to upstream synchrotron emissivity at a given frequency \u03bd can be written as J2(\u03bd) J0(\u03bd) \u2248B2 B0 \u03b33 e,2fe,2(\u03b3e,2) \u03b33 e,0fe,0(\u03b3e,0), (20) where \u03b3e,0 and \u03b3e,2 are the Lorenz factor that corresponds to the given \u03bdpeak = \u03bd in Equation (18) for upstream \ufb01eld B0 and downstream \ufb01eld B2, respectively. \f\u2013 13 \u2013 For power-law spectra, the ratio J2(\u03bd)/J0(\u03bd) can be written in a more intuitive form. If the ratio Ke/p of the pre-existing population is comparable to or greater than that of the injected population, pre-existing electrons are more important than injected electrons at weak shocks of M \u22723, as pointed out in the previous section. Then, the downstream electron spectrum fe,2 can be approximated by the distribution function in Equation (7) with a Gaussian cuto\ufb00, exp(\u2212p2/p2 cut). Again adopting fe,0(\u03b3e) \u221d\u03b3\u2212s e for pre-existing CR electrons, the downstream spectrum is fe,2(\u03b3e) \u221d\u03b3\u2212r e , (unless q = s) for \u03b3e < \u03b3e,cut \u2261pcut/mec. Then, the ratio of the downstream to upstream synchrotron emissivity at \u03bd becomes J2(\u03bd) J0(\u03bd) \u2248 B(r\u22121)/2 2,\u00b5G B(s\u22121)/2 0,\u00b5G ! \u0014fe,2(\u03b3e) fe,0(\u03b3e) \u0015 \u03b3e=104 \u0010 \u03bd 280 MHz \u0011(s\u2212r)/2 \u2248 \u03c3w(r\u22121)/2B\u2212(s\u2212r)/2 0,\u00b5G \u0014fe,2(\u03b3e) fe,0(\u03b3e) \u0015 \u03b3e=104 \u0010 \u03bd 280 MHz \u0011(s\u2212r)/2 , (21) where B0,\u00b5G and B2,\u00b5G are the upstream and downstream magnetic \ufb01eld strengths in units of \u00b5G. In the second step, we assumed that B2/B0 = (\u03c12/\u03c10)w = \u03c3w, where w = 1 corresponds to B \u221d\u03c1 implied by the di\ufb00usion model in Equation (4), Figure 6 shows fe,0(\u03b3e)/fe,2(\u03b3e) at \u03b3e = 104, and (J2/J1)280 \u2261J2(\u03bd)/J0(\u03bd) at \u03bd = 280 MHz for B0 = 1\u00b5G and w = 1 for the cases considered in Figure 5. Here, we assume that Ke/p is the same for both the pre-existing and injected populations. Since the electron cuto\ufb00 momentum is \u03b3cut \u223c108 for the shock parameters considered here, the choice of \u03b3e = 104 and \u03bd = 280 MHz (see Equation(18)) as the representative values should be safe. 
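A minimal sketch of equations (18) and (20): invert the peak-frequency relation to obtain gamma_e for each field strength, then form the emissivity ratio for power-law electron spectra. The downstream amplification of ~30 at fixed gamma_e and the field scaling B2 = 3 B0 (w = 1) are illustrative choices, in the spirit of the equation (7) examples above, not fitted values.

```python
import numpy as np

def gamma_at_peak(nu_MHz, B_muG):
    """Lorentz factor whose synchrotron peak falls at nu (inverting Eq. 18)."""
    return 1.0e4 * np.sqrt((nu_MHz / 280.0) / B_muG)

def emissivity_ratio(nu_MHz, B0, B2, f2_over_f0_at_1e4, s, r):
    """J2(nu)/J0(nu) from Eq. (20) for power-law spectra: f0 ~ gamma^-s
    upstream, f2 ~ gamma^-r downstream, normalized by their ratio at
    gamma_e = 1e4 (the convention used in Eq. 21)."""
    g0 = gamma_at_peak(nu_MHz, B0)
    g2 = gamma_at_peak(nu_MHz, B2)
    f0 = (g0 / 1.0e4)**(-s)
    f2 = f2_over_f0_at_1e4 * (g2 / 1.0e4)**(-r)
    return (B2 / B0) * (g2**3 * f2) / (g0**3 * f0)

# Illustrative M ~ 3 case: s = r = 4.5, B2 = 3*B0 (w = 1), and a downstream
# amplification f2/f0 ~ 30 at gamma_e = 1e4.
print(emissivity_ratio(280.0, 1.0, 3.0, 30.0, 4.5, 4.5))   # a few hundred
```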
As shown in Figure 5, for M \u22723, the downstream CR proton pressure can absorb typically only a few to 10% of the shock ram pressure even for R = 0.05. Yet, the acceleration of CR electrons can result in a substantial enhancement in synchrotron radiation across the shock. Our estimation indicates that the enhancement factor, (J2/J1)280, can be up to several at shocks with M \u223c1.5, up to several 10s for M \u223c2, and up to several 100s for M \u223c3. This is partly due to the large enhancement of the electron population across the shock, fe,2/fe,0, which is typically an order of magnitude smaller than the ratio (J2/J0)280. Additional enhancement comes from the ampli\ufb01cation of magnetic \ufb01elds across the shock, B2/B0. We note that for the compression of a uniform magnetic \ufb01eld, B \u221d\u03c12/3, that is, w = 2/3. With this scaling, (J2/J0)280 should be a bit smaller than that in Figure 6. However, it is also quite plausible that the downstream magnetic \ufb01eld is stronger than that expected for simple compression. It has been suggested that at shocks, especially at strong shocks, the downstream magnetic \ufb01eld is ampli\ufb01ed by plasma instabilities (see, e.g., Lucek & Bell 2000; Bell 2004), although the existence of such instabilities has not been fully explored for weak shocks. Moreover, the magnetic \ufb01eld can be further ampli\ufb01ed by the turbulence that is induced through cascade of the vorticity generated behind shocks (Giacalone & Jokipii 2007; \f\u2013 14 \u2013 Ryu et al. 2008). In such cases, the ratio (J2/J0)280 could be larger than that in Figure 6. In that sense, our estimate for the synchrotron enhancement factor may be considered as conservative one. We also note that with s \u2265r in Equation (21), J2(\u03bd)/J0(\u03bd) is larger at higher frequencies, but smaller with larger B0. The above enhancement in synchrotron emission across the shock can be compared to the enhancement in Bremsstrahlung X-ray. The Bremsstrahlung X-ray emissivity is given as JX \u221d\u03c12\u221a T, so the ratio of the downstream to upstream emissivity can be written as JX,2 JX,0 = \u03c32 r T2 T0 = \u0012 4M2 M2 + 3 \u00133/2 \u00125M2 \u22121 4 \u00131/2 , (22) in the limit where the CR pressure does not modify the shock structure. The enhancement in Bremsstrahlung X-ray emission, JX,2/JX,0, is 3.6, 7.5, and 17 for M = 1.5, 2, and 3, respectively. These values are substantially smaller than (J2/J0)280 shown in Figure 6. This implies that shocks in ICMs and cluster outskirts may appear as radio relics, but not be detected in X-ray, for instance, as in the case of CIZA J2242.8+5301 (van Weeren et al. 2010). Since the synchrotron/IC cooling time scales as trad = 2.45 \u00d7 1013 yrs \u03b3e \u0012Be\ufb00,2 1 \u00b5G \u0013\u22122 (23) (Webb et al. 1984), behind the shock the width of the distribution of CR electrons with \u03b3e becomes d \u2248u2trad(\u03b3e) \u221d\u03b3\u22121 e . For instance, electrons radiating synchrotron at \u03bd \u223c1 GHz have mostly the Lorentz factor of \u03b3e \u2248104 in the magnetic \ufb01eld of B2 \u223ca few \u00b5G. So the width of the synchrotron emitting region behind the shock is d \u2248u2trad(\u03b3e = 104) \u223c 100 kpc (u2/103 km s\u22121) as long as the shock age t > trad(\u03b3e = 104) \u223c108 yrs. This is indeed of order the width of bright radio relics such as CIZA J2242.8+5301 (van Weeren et al. 2010). 
Moreover, from the fact that d \u221d\u03b3\u22121 e , we can identify another feature in the integrated synchrotron spectrum. The volume integrated electron spectrum, Fe,2(\u03b3e) \u221dfe,2(\u03b3e) \u00b7 d \u221d\u03b3\u2212(r+1) e , steepens by one power of \u03b3e above the break Lorentz factor, \u03b3e,br \u22482.45 \u00d7 105 (108 yrs/t) (Be\ufb00,2/1 \u00b5G)\u22122, where t is the shock age. Note that the break Lorentz factor is basically derived from the condition, t = trad in Equation (23) and so independent of the shock speed. Hence, if fe,2(\u03b3e) \u221d\u03b3\u2212r e , in observations of unresolved sources, the integrated synchrotron emission, S\u03bd \u221d\u03bd\u2212\u03b1, has the spectral slope \u03b1 = (r \u22123)/2 for \u03bd < \u03bdbr and \u03b1 = (r \u22122)/2 for \u03bdbr \u2272\u03bd \u2272\u03bdcut. Here, the two characteristic frequencies, \u03bdbr and \u03bdcut, \f\u2013 15 \u2013 correspond to the peak frequency in Equation (18) for \u03b3e,br and \u03b3e,cut, respectively. So the spectral slope of the integrated spectrum just below the cuto\ufb00frequency is steeper by 0.5 than that of the resolved spectrum. 6. SUMMARY Cosmological shocks are expected to be present in the large-scale structure of the universe. They form typically with Mach number up to 103 and speed up to a few 1000 km s\u22121 at the present universe. Shocks in ICMs and cluster outskirts with relatively high X-ray luminosity, in particular, have the best chance to be detected, so they have started to be observed as X-ray shocks and radio relics (see Introduction for references). Those shocks are mostly weak with small Mach number of M \u22723, because they form in the hot gas of kT \u2273 keV. In this paper, we have studied DSA at weak cosmological shocks. Since the test-particle solutions could provide a simple yet reasonable description for weak shocks, we \ufb01rst suggested analytic solutions which describe the time-dependent DSA in the test-particle regime, including both the pre-existing and injected CR populations. We adopted a thermal leakage injection model to emulate the acceleration of suprathermal particles into the CR population, along with a simple transport model in which Alfv\u00b4 en waves self-excited by the CR streaming instability drift relative to the bulk plasma upstream of the gas subshock. We then performed kinetic DSA simulations and compared the analytic and numerical solutions for wide ranges of model parameters relevant for shocks in ICMs and cluster outskirts: the shock Mach number M = 1.5 \u22125, the slope of the pre-existing CR spectrum s = 4 \u22125, the ratio of the upstream CR to gas pressure R = 0.01 \u22120.1, the injection parameter \u01ebB = 0.25 \u22120.3, and the Alfv\u00b4 enic speed parameter \u03b4 = 0 \u22120.42. The upstream gas was assumed to be fully ionized with T0 = 107 K. The main results can be summarized as follows: 1) For weak shocks with M \u22723, the test-particle solutions given in Equation (13) should provide a good approximation for the time-dependent CR spectrum at the shock location. We note that the test-particle slope, q, in Equation (2) and the maximum momentum, pmax(t), in Equation (3) may include the Alfv\u00b4 enic drift e\ufb00ect. 2) For the injection parameter considered here, \u01ebB = 0.25 \u22120.3, the injection fraction is rather low, typically \u03be \u223c5 \u00d7 10\u22125 to 10\u22123 for M \u22723. The pre-existing CR population provides more particles for DSA than the freshly injected population. 
Hence, the pre-existing \f\u2013 16 \u2013 population dominates over the injected population. If there exist no CRs upstream (R = 0), the downstream CR pressure absorbs typically much less than \u223c1 % of the shock ram pressure for M \u22723. With pre-existing CRs that accounts for 5 % of the gas thermal pressure in the upstream \ufb02ow, the CR acceleration e\ufb03ciency increases to a few to 10 % for those weak shocks. 3) For the pre-exisiting population, the enhancement of the distribution function across the shock, f2(p)/f1(p), at a given momentum is substantially larger than that expected from the simple adiabatic compression. Hence, with ampli\ufb01ed magnetic \ufb01elds downstream, the re-acceleration of pre-existing CR electrons can result in a substantial synchrotron radiation behind the shock. We estimated that the enhancement in synchrotron radiation across the shock, J2(\u03bd)/J0(\u03bd), is about a few to several for M \u223c1.5, while it could reach to 102 \u2212103 for M \u223c3, depending on the detail model parameters. This is substantially larger than the enhancement in X-ray emission. 4) Unlike protons, relativistic electrons lose energy by synchrotron emission and IC scattering behind the shock, resulting in a \ufb01nite width of synchrotron emitting region. In ICMs and cluster outskirts with \u00b5G \ufb01elds, the radio synchrotron emission at \u03bd \u223c1GHz originate mostly from the relativistic electrons with \u03b3e \u223c104, which cool in a time scale of trad \u223c108 yrs. So the width of the \u223c1 GHz synchrotron emitting region is d \u2248u2trad \u223c 100 kpc (us/1000 km s\u22121) for a shock of age t > trad. Finally, although the CRASH numerical code and our thermal leakage model are developed for quasi-parallel shocks, the main conclusions in this paper should be valid for quasi-perpendicular shocks as well. It is recognized that the injection may be less e\ufb03cient and the self-excited waves are absent at perpendicular shocks. However, both of these problems are alleviated in the presence of pre-existing CRs and turbulence (Giacalone 2005; Zank et al. 2006). So the di\ufb00usion approximation should be valid and the re-acceleration of pre-existing CRs are similar at both kinds of shocks. Then, we expect our results can be applied to, for instance, CIZA J2242.8+5301, the radio relic whose magnetic \ufb01eld direction inferred from the polarization observation is perpendicular to the shock normal. The authors would like to thank T. W. Jones and J. Cho for discussions. HK was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0016425). DR was supported by the National Research Foundation of Korea through grant 20070093860. \f\u2013 17 \u2013" + }, + { + "url": "http://arxiv.org/abs/1008.0429v1", + "title": "Diffusive Shock Acceleration in Test-Particle Regime", + "abstract": "We examine the test-particle solution for diffusive shock acceleration, based\non simple models for thermal leakage injection and Alfv'enic drift. The\ncritical injection rate, \\xi_c, above which the cosmic ray (CR) pressure\nbecomes dynamically significant, depends mainly on the sonic shock Mach number,\nM, and preshock gas temperature, T_1. In the hot-phase interstellar medium\n(ISM) and intracluster medium, \\xi_c < 10^{-3} for shocks with M < 5, while\n\\xi_c ~ 10^{-4}(T_1/10^6 K)^{1/2} for shocks with M > 10. 
For T_1=10^6 K, for\nexample, the test-particle solution would be valid if the injection momentum,\np_{inj} > 3.8 p_{th}. This leads to the postshock CR pressure less than 10% of\nthe shock ram pressure. If the Alfv'en speed is comparable to the sound speed\nin the preshock flow, as in the hot-phase ISM, the power-law slope of CR\nspectrum can be significantly softer than the canonical test-particle slope.\nThen the CR spectrum at the shock can be approximated by the revised\ntest-particle power-law with an exponential cutoff at the highest accelerated\nmomentum, p_{max}(t). An analytic form of the exponential cutoff is also\nsuggested.", + "authors": "Hyesung Kang, Dongsu Ryu", + "published": "2010-08-03", + "updated": "2010-08-03", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "Introduction Suprathermal particles are produced as an inevitable consequence of the formation of collisionless shocks in tenuous astrophysical plasmas and they can be further accelerated to very high energies through interactions with resonantly scattering Alfv\u00b4 en waves in the converging \ufb02ow across a shock (Bell 1978; Drury 1983; Blandford & Eichler 1987; Malkov & Drury 2001). The most attractive feature of the di\ufb00usive shock acceleration (DSA) 1Department of Earth Sciences, Pusan National University, Pusan 609-735, Korea: kang@uju.es.pusan.ac.kr 2Department of Astronomy and Space Science, Chungnam National University, Daejeon 305-764, Korea: ryu@canopus.cnu.ac.kr \f\u2013 2 \u2013 theory is the simple prediction of the power-law momentum distribution of cosmic rays (CRs), f(p) \u221dp\u22123\u03c3/(\u03c3\u22121) (where \u03c3 is the shock compression ratio) in the test particle regime. For strong, adiabatic gas shocks, this gives a power-law index of 4, which is reasonably close to the observed, \u2018universal\u2019 index of the CR spectra in many environments. The nonthermal particle injection and ensuing acceleration at shocks depend mainly upon the shock Mach number, \ufb01eld obliquity angle, and the strength of the Alfv\u00b4 en turbulence responsible for scattering. At quasi-parallel shocks, the shock Mach number is the primary parameter that determines the CR acceleration e\ufb03ciency, while the injection fraction, \u03be (the ratio of CR particles to the total particles passed through the shock), is the secondary parameter. Detailed nonlinear treatments of DSA predict that at strong shocks, with a small fraction of \u03be > 10\u22124, a signi\ufb01cant fraction of the shock kinetic energy is transferred to CRs and there are highly nonlinear back-reactions from CRs to the underlying \ufb02ow (Berezhko & V\u00a8 olk 1997; Kang & Jones 2007). Indeed, multi-band observations of nonthermal radio to \u03b3-ray emissions from several supernova remnants (SNRs) have been successfully explained by e\ufb03cient DSA features such as high degree of shock compression and ampli\ufb01cation of magnetic \ufb01elds in the precursor (e.g. Reynods 2008; Berezhko et al. 2009; Morlino et al. 2009). It has been recognized, however, that the CR spectrum at sources, N(E), predicted for shocks strongly modi\ufb01ed by CR feedback may be too \ufb02at to be consistent with the observed \ufb02ux of CR nuclei at Earth, J(E). Recently Ave et al. (2009) analyzed the spectrum of CR nuclei up to \u223c1014 eV measured by TRACER instruments and found that the CR spectra at Earth can be \ufb01tted by a single power law of J(E) \u221dE\u22122.67. 
Assuming an energydependent propagation path length (\u039b \u221dE\u22120.6), they suggested that a soft source spectrum, N(E) \u221dE\u2212s with s \u223c2.3 \u22122.4, is preferred by the observed data. This is much softer than the CR spectrum that the nonlinear DSA predicts for strong SNRs, which are believed to be the main accelerators for Galactic CRs up to the knee energy around 1015.5eV. Thus, in order to reconcile the DSA prediction with the observed J(E), the bulk of Galactic CRs should originate from SNRs in which the CR acceleration e\ufb03ciency is 10 % or so (i.e., roughly in the test-particle regime). Such ine\ufb03cient acceleration could be possible for SNRs in the hot phase of the interstellar medium (ISM) (i.e., low shock Mach number shocks) and for the inject fraction smaller than 10\u22124 (Kang 2010). The scattering by Alfv\u00b4 en waves tends to isotropize the CR distribution in the wave frame, which may drift upstream at Alfv\u00b4 en speed with respect to the bulk plasma (Skilling 1975). This Alfv\u00b4 enic drift in the upstream region reduces the velocity jump that the particles experience across the shock, which in turn softens the CR spectrum beyond the canonical test-particle slope (s = 2 for strong shocks) (Kang 2010; Caprioli et al. 2010). \f\u2013 3 \u2013 Moreover, the Alfv\u00b4 enic drift in ampli\ufb01ed magnetic \ufb01elds both upstream and downstream can drastically soften the accelerated particle spectrum even in nonlinear modi\ufb01ed shocks (Zirakashvili and Ptuskin 2008; Ptuskin et al. 2010). At collisionless shocks suprathermal particles moving faster than the postshock thermal distribution may swim through the MHD waves and leak upstream across the shocks and get injected into the CR population (Malkov 1998; Gieseler et al. 2000; Kang et al. 2002). But it is not yet possible to make precise quantitative predictions for the injection process from \ufb01rst principles, because complex plasma interactions among CRs, waves, and the underlying gas \ufb02ow are not fully understood yet (e.g., Malkov & Drury 2001). Until plasma simulations such as hybrid or particle-in-cell simulations reach the stage where the full problem can be treated with practical computational resources, in the studies of DSA we have to adopt a phenomenological injection scheme that can emulate the injection process. In this paper, we will examine the relation between the thermal leakage injection model described in Kang et al. (2002) and the time-dependent test-particle solutions for DSA. The basic models are described in \u00a72, while the analytic expression for the CR spectrum in the test-particle limit is suggested in \u00a73. Finally, a brief summary will be given in \u00a74. 2. BASIC MODELS In the kinetic DSA approach, the following di\ufb00usion-convection equation for the pitchangle-averaged distribution function, f(x, p, t), is solved along with suitably modi\ufb01ed gasdynamic equations: \u2202f \u2202t + (u + uw)\u2202f \u2202x = p 3 \u2202(u + uw) \u2202x \u2202f \u2202p + \u2202 \u2202x \u0014 \u03ba(x, p)\u2202f \u2202x \u0015 , (1) where \u03ba(x, p) is the spatial di\ufb00usion coe\ufb03cient and uw is the drift speed of the local Alfv\u00b4 enic wave turbulence with respect to the plasma (Skilling 1975). We consider only the proton CR component. 2.1. Alfv\u00b4 enic Drift E\ufb00ect Since the Alfv\u00b4 en waves upstream of the subshock are expected to be established by the streaming instability, the wave speed is set there to be uw = \u2212vA. 
Downstream, it is likely that the Alfv\u00b4 enic turbulence is nearly isotropic, hence uw = 0 there. As a result, the velocity jump across the shock is reduced, and the slope of test-particle plower-law spectrum should \f\u2013 4 \u2013 be revised as qtp = 3(u1 \u2212vA) u1 \u2212vA \u2212u2 = 3\u03c3(1 \u2212M\u22121 A ) (\u03c3 \u22121 \u2212\u03c3M\u22121 A ), (2) where u1 and u2 are the upstream and downstream speed, respectively, in the shock rest frame, \u03c3 = u1/u2 = \u03c12/\u03c11 is the shock compression ratio, and vA and MA = u1/vA are the Alfv\u00b4 en speed upstream and Alfv\u00b4 en Mach number. Hereafter, we use the subscripts \u20191\u2019, and \u20192\u2019 to denote conditions upstream and downstream of the shock, respectively. Thus the CR spectrum would be softer than the canonical power-law spectrum with the slope, 3\u03c3/(\u03c3 \u22121), unless MA \u226b1. The left panel of Figure 1 shows the revised test-particle slope qtp as a function of the sonic Mach number, M, for di\ufb00erent Alfv\u00b4 en speeds, vA = \u03b4 \u00b7 cs (where cs is the upstream sound speed). In the hot-phase ISM of T \u2248106K with the hydrogen number density nH \u2248 0.003 cm\u22123 and the magnetic \ufb01eld strength B \u22485\u00b5G, the sound speed is cs \u2248150 km s\u22121 and the Alfv\u00b4 en speed is vA \u2248170 km s\u22121. So \u03b4 \u22481 is a representative value. If \u03b4 \u2248(PB/Pg)1/2 \u2248 1, the Alfv\u00b4 en drift e\ufb00ect is signi\ufb01cant for Alfv\u00b4 en Mach number, MA \u2248M \u227230. Consequently, this e\ufb00ect reduces the CR acceleration e\ufb03ciency. Of course, it is not important for strong shocks with us \u226bcs \u223cvA (i.e., MA \u227330). 2.2. Thermal Leakage Injection Model Since the velocity distribution of suprathermal particles is not isotropic in the shock frame, the di\ufb00usion-convection equation cannot directly follow the injection from the nondi\ufb00usive thermal pool into the di\ufb00usive CR population. Here we adopt the thermal leakage injection model that was originally formulated by Gieseler et al. (2000) based on the calculations of Malkov (1998). In this model particles above a certain injection momentum pinj cross the shock and get injected to the CR population. We adopt a smooth \u201ctransparency function\u201d, \u03c4esc(\u01ebB, v), that expresses the probability of suprathermal particles at a given velocity, v, leaking upstream through the postshock MHD waves. One free parameter controls this function; \u01ebB = B0/B\u22a5, the ratio of the general magnetic \ufb01eld along the shock normal, B0, to the amplitude of the postshock MHD wave turbulence, B\u22a5. Although plasma hybrid simulations and theories both suggested that 0.25 \u2272\u01ebB \u22720.35 (Malkov & V\u00a8 olk 1998), the physical range of this parameter remains to be rather uncertain due to lack of full understanding of relevant plasma interactions. Since \u03c4esc increases gradually from zero to one in the thermal tail distribution, the \u201ce\ufb00ective\u201d injection momentum can be approximated by pinj \u22481.17mpu2(1 + 1.07 \u01ebB ) \u2261Qinj(M, \u01ebB)pth (3) \f\u2013 5 \u2013 where pth = p 2mpkBT2 is the thermal peak momentum of the immediate postshock gas with temperature T2 and kB is the Boltzmann constant (Kang et al. 2002). 
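As a quick numerical illustration of Equations (2) and (3), the sketch below evaluates the Alfvénic-drift-modified slope q_tp and the injection momentum ratio Q_inj = p_inj/p_th for a few Mach numbers. The mean molecular weight µ = 0.6 (which reproduces c_s ≈ 150 km s^-1 at T_1 = 10^6 K) and the standard Rankine-Hugoniot jump for T_2 are our own assumptions, and the function names are hypothetical.

```python
import numpy as np

MP = 1.6726e-24   # proton mass [g]
KB = 1.3807e-16   # Boltzmann constant [erg/K]

def q_tp(M, delta):
    """Revised test-particle slope with Alfvenic drift, Eq. (2); delta = v_A/c_s."""
    sigma = 4.0 * M**2 / (M**2 + 3.0)     # shock compression ratio
    inv_MA = delta / M                    # 1/M_A = v_A / u_s
    return 3.0 * sigma * (1.0 - inv_MA) / (sigma - 1.0 - sigma * inv_MA)

def Q_inj(M, eps_B, T1=1.0e6, mu=0.6):
    """Injection momentum in units of p_th from Eq. (3):
    p_inj ~ 1.17 m_p u_2 (1 + 1.07/eps_B), with p_th = sqrt(2 m_p k_B T_2)."""
    cs = np.sqrt(5.0 / 3.0 * KB * T1 / (mu * MP))    # upstream sound speed [cm/s]
    sigma = 4.0 * M**2 / (M**2 + 3.0)
    u2 = M * cs / sigma                               # downstream flow speed [cm/s]
    T2 = T1 * (5.0 * M**2 - 1.0) * (M**2 + 3.0) / (16.0 * M**2)
    p_th = np.sqrt(2.0 * MP * KB * T2)
    return 1.17 * MP * u2 * (1.0 + 1.07 / eps_B) / p_th

for M in (3.0, 5.0, 30.0):
    print(f"M = {M:4.1f}: q_tp(delta=1) = {q_tp(M, 1.0):.2f}, "
          f"Q_inj(eps_B=0.21) = {Q_inj(M, 0.21):.2f}")
# Q_inj grows toward weaker shocks and approaches ~3.8 for strong shocks,
# consistent with the behavior described for the right panel of Figure 1.
```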
The right panel of Figure 1 shows the value of Qinj as a function of M for three values of \u01ebB = 0.21, 0.23, and 0.27, which represents \u201cine\ufb03cient\u201d, \u201cmoderately e\ufb03cient\u201d, \u201ce\ufb03cient\u201d injection cases, respectively (see Fig. 4 below). At weaker shocks the compression is smaller and so the ratio u2/u1 is larger. For stronger turbulence (larger B\u22a5, smaller \u01ebB) it is harder for particles to swim across the shock. So for both of these cases, pinj has to be larger. Hence the value of Qinj(M, \u01ebB) is larger for weaker shocks and for smaller \u01ebB, which leads to a lower injection fraction. In our thermal leakage injection model, the CR distribution function at pinj is then anchored to the postshock Maxwellian distribution as, finj = f(pinj) = n2 \u03c01.5 p\u22123 th exp(\u2212Q2 inj), (4) where n2 is the postshock proton number density and the distribution function is de\ufb01ned in general as R 4\u03c0p2f(p)dp = n. For the test-particle power-law spectrum, the value of Qinj determines the amplitude of the subsequent suprathermal power-law distribution as f(p) = finj \u00b7 (p/pinj)\u2212qtp. Then the CR injection fraction can be de\ufb01ned as \u03be \u2261nCR n2 = 4 \u221a\u03c0Q3 inj exp(\u2212Q2 inj) 1 qtp(M) \u22123, (5) which depends only on the ratio Qinj and the slope qtp, but not on the postshock temperature T2. For Qinj = 3.8, for example, \u03be = 6.6 \u00d7 10\u22125/(qtp \u22123), which becomes \u03be = 6.6 \u00d7 10\u22125 for strong shocks with qtp = 4.0. 2.3. Bohm-type Di\ufb00usion Model In modeling DSA, it is commonly assumed that the particles are resonantly scattered by self-generated waves, so the Bohm di\ufb00usion model can represent a saturated wave spectrum (i.e., the mean scattering length, \u03bb = rg, where rg is the gyro-radius). Here, we adopt a Bohm-type di\ufb00usion coe\ufb03cient that includes a weaker non-relativistic momentum dependence, \u03ba(x, p) = \u03ba\u2217\u00b7 ( p mpc)\u03b1 \u0014\u03c1(x) \u03c11 \u0015\u2212m , (6) where the coe\ufb03cient \u03ba\u2217= mpc3/(3eB0) depends on the upstream mean \ufb01eld strength. The case with m = 1 approximately accounts for the compressive ampli\ufb01cation of Alfv\u00b4 en waves. \f\u2013 6 \u2013 The mean acceleration time for a particle to reach pmax from pinj in the test-particle limit of DSA theory can be approximated by tacc = 3 u1 \u2212vA \u2212u2 Z pmax pinj \u0012 \u03ba1 u1 \u2212vA + \u03ba2 u2 \u0013 dp p , (7) if we assume the bulk drift of waves with vA in the upstream region (e.g., Drury 1983). Then the maximum momentum can be estimated by setting t = tacc as pmax(t)\u03b1 \u2248\u03b1(1 \u2212M\u22121 A )(\u03c3 \u22121 \u2212\u03c3M\u22121 A ) 3\u03c3[1 + (1 \u2212M\u22121 A )\u03c31\u2212m] u2 s \u03ba\u2217t = fc u2 s \u03ba\u2217t, (8) where us = u1 is the shock speed (Kang et al. 2009). For the case of m = 1, the typical value of the parameter, fc = \u03b1(1 \u2212M\u22121 A )(\u03c3 \u22121 \u2212\u03c3M\u22121 A )/{3\u03c3[1 + (1 \u2212M\u22121 A )\u03c31\u2212m]}, is \u223c1/8 in the limit of MA \u226b1 and M \u226b1. 3. TEST-PARTICLE SPECTRUM If the injection is ine\ufb03cient, especially at weak shocks, the CR pressure remains dynamically insigni\ufb01cant and the test-particle solution is valid. Caprioli et al.(2009) (CBA09 hereafter) derived the analytic solution for a steady-state, test-particle shock with a freeescape boundary (FEB) at a distance xFEB upstream of the shock (i.e., f(x > xFEB) = 0). 
For a di\ufb00usion coe\ufb03cient that depends on the momentum as \u03ba(p) = \u03ba\u2217(p/mpc)\u03b1, the CR distribution at the shock location, xs, is given by ftp(xs, p) = f0 \u00b7 exp \" \u2212qtp Z z zinj dz\u2032 z\u2032 1 1 \u2212exp(\u22121/z\u2032\u03b1) # , (9) where z = p/p\u2217, zinj = pinj/p\u2217, f0 = finj, and p\u2217/mpc = (xFEBus/\u03ba\u2217)1/\u03b1 is the cuto\ufb00momentum set by the FEB. This expression can be re-written as, ftp(xs, p) = finj \u00b7 ( p pinj )\u2212qtp \u00b7 exp [\u2212qtpC(z)] , (10) where the function C(z) is given by C(z) = Z z zinj dz\u2032 z\u2032 1 exp(1/z\u2032\u03b1) \u22121. (11) We show the function C(z) for \u03b1 = 0.5 and 1 in the left panel of Figure 2. For z \u226a1, C(z) is small and so exp [\u2212qtpC(z)] = 1, as expected. For z \u226b1, C(z) \u2248z\u03b1 = (p/p\u2217)\u03b1. But this \f\u2013 7 \u2013 regime (p \u226bp\u2217) is not really relevant, because the resulting ftp(xs, p) is extremely small. We are more interested in the exponential cuto\ufb00where p \u223cp\u2217. Figure 2 shows that C(z) increases much faster than z\u03b1 near z \u223c1. In fact, at z \u223c1, approximately C(z) \u22480.29z2 for \u03b1 = 1 and C(z) \u22480.58z for \u03b1 = 1/2. Thus equation (10) can be approximated by ftp(xs, p) \u2248finj \u00b7 ( p pinj )\u2212qtp \u00b7 exp \u0014 \u22120.29qtp \u03b1 ( p p\u2217)2\u03b1 \u0015 . (12) Kang et al. (2009) showed that the shock structure and the CR spectrum of timedependent, CR modi\ufb01ed shocks with ever increasing pmax(t) are similar to those of steadystate shocks with particles escaping through the upper momentum boundary, i.e., f(p > pub) = 0, if compared when pmax(t) = pub (see their Figs. 10-11). They also showed that the exponential cuto\ufb00in the form of exp[\u2212k(p/pmax)2\u03b1] matches well the DSA simulation results for CR modi\ufb01ed shocks. In the same spirit, we suggest that equation (10) could represent the CR spectrum at the shock location for time-dependent, test-particle shocks without particle escape, in which the cuto\ufb00momentum is determined by the shock age as in equation (8), i.e., p\u2217\u223cpmax(t). The distribution function f(x, pmax) in the upstream region decreases roughly as exp[\u2212x/ld(pmax)], where the di\ufb00usion length for pmax is ld(pmax) = \u03ba(pmax) us = fcust. (13) CBA09 spectrum in equation (10) was derived from the FEB condition of f(x > xFEB, p) = 0 for steady-state shocks, while f(x, p) \u21920 only at x \u2192\u221e(upstream in\ufb01nity) for timeevolving shocks without particle escape. So we presume that the cuto\ufb00momentum can be found by setting the location of FEB at xFEB = \u03b6 \u00b7ld(pmax), where \u03b6 \u223c1. From the condition that p\u2217/mpc = (\u03b6ld(pmax)us/\u03ba\u2217)1/\u03b1, we \ufb01nd p\u2217= \u03b6 \u00b7 pmax. The right panel of Figure 2 shows the test-particle solution from a time-dependent DSA simulation, in which the dynamical feedback of the CR pressure was turned o\ufb00. Contrary to CBA09 case, no FEB is enforced in this simulation, so the shock does not approach to a steady state, but instead evolves in time. As the CRs are accelerated to ever high energies (pmax \u221dt), the scale length of the CR pressure increases linearly with time, ld(pmax) \u221dust. So the shock structure evolves in a self-similar fashion, depending only on the similarity variable, x/(ust) (see Kang & Jones 2007). 
By setting p\u2217= 1.2pmax(t) (i.e., \u03b6 = 1.2) and also by adopting the value of finj from the DSA simulation result, we calculated ftp(xs, p) according to equation (10). As can be seen in the \ufb01gure, the agreement between the numerical DSA results and the analytic approximation is excellent. Thus we take equation (10) as the test-particle spectrum from DSA, where qtp, pinj, finj, and p\u2217\u22481.2pmax(t) are given by equations (2), (3), (4), and (8), respectively. \f\u2013 8 \u2013 Figure 3 shows some examples of the test-particle spectrum given in equation (10). We consider the shocks propagating into the hot-phase of the ISM of T1 = 106K or a typical intracluster medium (ICM) of T1 = 107K. The shock speed is given by us = M \u00b7 cs, where the sound speed is cs = 150 km s\u22121(T1/106K)1/2. For all the cases, we assume a constant cuto\ufb00momentum, p\u2217= 106GeV/c, which is close to the knee energy in the Galactic cosmic ray spectrum. For typical hot-phase ISM, \u03b4 = vA/cs \u22481 as mentioned before. For typical ICM, nH \u224810\u22123cm\u22123 and B \u22481 \u22125\u00b5G, so \u03b4 \u22480.5 is taken here. For typical test-particle limit solutions, we adopt \u01ebB = 0.21 to specify pinj given in equation (3), which determines the anchoring point where the test-particle power-law begins. This choice of \u01ebB results in the injection rate \u03be \u227210\u22124 and the postshock CR pressure Pc,2/(\u03c11u2 s) \u22720.1. As can be seen in Figure 3, for stronger (faster) shocks, the postshock gas is hotter, the amplitude finj is higher and the power-law spectrum is harder. Then the CR pressure at the shock position can be calculated by Pc(xs) = 4\u03c0 3 c Z \u221e pinj ftp(xs, p) p4dp p p2 + (mpc)2. (14) For strong shocks with qtp = 4, with the test-particle spectrum in equation (10), Pc \u221d finjp4 inj ln(p\u2217/mpc). Then, with a constant cuto\ufb00p\u2217, Pc \u221dexp(\u2212Q2 inj)Q4 injpth. So for a \ufb01xed value of Qinj (or \ufb01xed injection fraction \u03be), Pc \u221dpth \u221dus. Figure 4 shows the fraction of injected particles and the postshock CR pressure calculated by adopting the test-particle spectrum given in equation (10). The same p\u2217= 106GeV/c is chosen as in Figure 3. The quantities, ncr,2 and Pc,2 do not depend sensitively on the assumed value of p\u2217for weak shocks, since the power-slope qtp is greater than 4. But for strong shocks (M \u227330) where qtp \u22484 (see Fig. 1), the CR pressure increases logarithmically as Pc \u221dln(p\u2217/mpc). Several values of T1, \u01ebB (or Qinj), and \u03b4 = vA/cs are considered. In general, for \ufb01xed values of \u01ebB (or Qinj) and \u03b4, the ratio Pc,2/(\u03c11u2 s) increase strongly with the shock Mach number for shocks with M \u227210, because of the strong dependence of \u03be (or Qinj) on M for weaker shocks. But for shocks with M > 10, \u03be becomes independent of M and so Pc \u221dus, as discussed above. So the CR pressure relative to the shock ram pressure, Pc,2/(\u03c11u2 s) \u221du\u22121 s , that is, it becomes smaller at faster shocks. Of course, in the nonlinear DSA regime, the ratio Pc,2/(\u03c11u2 s) increases with the shock Mach number and saturates at about 1/2 (Kang et al. 2009). The top panels of Figure 4 show how the CR pressure depends on \u01ebB and \u03b4. For a given Mach number, the CR pressure increases strongly with \u01ebB, because of the exp(\u2212Q2 inj) factor. 
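To show how the ingredients listed above combine, here is a minimal sketch that evaluates the test-particle spectrum of Equation (10) with the approximate cutoff of Equation (12), together with the injection fraction of Equation (5). The normalization f_inj = 1 and the cutoff p* = 1.2 × 10^6 (in units of m_p c) are placeholder assumptions chosen only to illustrate the shape.

```python
import numpy as np

def f_tp(p, p_inj, p_star, q_tp, f_inj, alpha=1.0):
    """Test-particle spectrum at the shock: Eq. (10) with the approximate cutoff of
    Eq. (12), f ~ f_inj (p/p_inj)^-q_tp exp[-(0.29 q_tp/alpha)(p/p_star)^(2 alpha)]."""
    return (f_inj * (p / p_inj) ** (-q_tp)
            * np.exp(-(0.29 * q_tp / alpha) * (p / p_star) ** (2.0 * alpha)))

def xi_injection(Q_inj, q_tp):
    """Injected particle fraction, Eq. (5)."""
    return 4.0 / np.sqrt(np.pi) * Q_inj**3 * np.exp(-Q_inj**2) / (q_tp - 3.0)

# Strong-shock illustration with the values quoted in the text: Q_inj = 3.8, q_tp = 4.
print(f"xi = {xi_injection(3.8, 4.0):.2e}")   # ~6.6e-5, as noted after Eq. (5)

# Placeholder spectrum: p in units of m_p c, arbitrary normalization f_inj = 1.
p = np.logspace(-2, 7, 200)
spectrum = f_tp(p, p_inj=1.0e-2, p_star=1.2e6, q_tp=4.0, f_inj=1.0, alpha=1.0)
```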
Obviously, the CR pressure becomes smaller for larger \u03b4 because of softer power-law spectra at weaker shocks with M \u227230. For \u01ebB = 0.21 and \u03b4 = 1, \u03be \u227210\u22124 and Pc,2/(\u03c11u2 s) \u22720.1, so the test-particle solution would provide a good approximation. For \u01ebB = 0.23, on the other \f\u2013 9 \u2013 hand, the injection fraction becomes \u03be \u224810\u22124 \u221210\u22123, and the test-particle solution is no longer valid for M \u22735. For weak cosmological shocks with M \u22723, typically found in the hot ICM (e.g., Ryu et al. 2003; Kang et al. 2007), even for a rather large value of \u01ebB = 0.27, the injection fraction is smaller than 10\u22123 and Pc,2/\u03c11u2 s < 0.01 So we can safely adopt the test-particle solution for those weak shocks, unless there are abundant pre-existing CRs in the preshock \ufb02ow. The middle panels show the cases with the same Qinj, independent of M. For these cases, T1 = 106K, \u03b4 = 1, and pmax = 106GeV/c. With the same Qinj, the injection fraction is almost independent of M except for weak shocks with M \u22725. For Qinj = 3.8, Pc,2/\u03c11u2 s \u22720.1 for all shocks. One can see that Qinj \u22483.8 is the critical value, above which the injection fraction becomes \u03be \u227210\u22124 and the ratio Pc,2/(\u03c11u2 s) \u22720.1. Hence, if pinj \u22733.8pth, the CR injection fraction is small enough to guarantee the validity of test-particle solution. But once again one should note that Pc \u221dln p\u2217for strong shocks. The bottom panels show the cases in which the preshock temperature is T1 = 105\u2212107K. Since the ratio Pc,2/(\u03c11u2 s) \u221d\u03beu\u22121 s and \u03be does not depend on T1, Pc,2/(\u03c11u2 s) \u221d\u03beT \u22121/2 1 for a given Mach number, M = us/cs. So we chose \u01ebB \u22480.20 \u22120.22 for di\ufb00erent T1, which results in \u03be \u223c10\u22124(T1/106K)1/2. This gives the similar value of Pc,2/(\u03c11u2 s) \u223c0.1 for three values of T1. For these shocks, the test-particle solution would be valid. When Pc,2/(\u03c11u2 s) > 0.1, the nonlinear feedback of the di\ufb00usive CR pressure becomes important and the evolution of CR modi\ufb01ed shocks should be followed by DSA simulations. Figure 5 compares the evolution of a slightly modi\ufb01ed M = 5 shock (\u01ebB = 0.27) with that of a test-particle shock (\u01ebB = 0.2). In the CR modi\ufb01ed shock, the upstream \ufb02ow is decelerated in the precursor before it enters the gas subshock. So the quantities at far upstream, immediately upstream and downstream of the subshock are subscripted with \u20190\u2019, \u20191\u2019, and \u20192\u2019, respectively. For the test-particle shock, \u03c11 = \u03c10 and T1 = T0. Here T0 = 106K and vA/cs = 0.42. The simulations start with a purely gasdynamic shock at rest at x = 0, initialized according to Rankine-Hugoniot relations with u0 = \u22121, \u03c10 = 1 and a gas adiabatic index, \u03b3g = 5/3. There are no pre-existing CRs. The test-particle spectrum given in equation (10) with p\u2217= 1.2pmax at t/t0 = 10 is also shown for comparison (dot-dashed lines) in the bottom panels. In the test-particle shock with \u01ebB = 0.2, both Pc,2/(\u03c10u2 s) \u22480.005 and f(xs) from the DSA simulation agree well with the test-particle solution given in equation (10), as expected. If we were to take the test-particle spectrum with \u01ebB = 0.27, we would obtain \u03be = 1.74 \u00d7 10\u22123 and Pc,2/(\u03c11u2 s) = 1.17, which is unphysical. 
In the CR modi\ufb01ed solution from the DSA simulation, however, \u03be \u22483.6 \u00d7 10\u22124 and Pc,2/(\u03c11u2 s) \u22480.1. The postshock \f\u2013 10 \u2013 temperature T2 is reduced about 17 % in the CR modi\ufb01ed solution (due to higher \u03c12 and lower pg,2), compared to that in the test particle solution. But u2 and so pinj remain about the same. As a result, the amplitude finj is lower than that of the test-particle spectrum (see the bottom right panel of Fig. 5) and so the injection rate is reduced in the CR modi\ufb01ed solution. The distribution function f(xs, p) from the DSA simulation is slightly steeper for p/mpc < 10 and slightly \ufb02atter for p/mpc > 10 than the test-particle power-law, because the \ufb02ow velocity is slightly modi\ufb01ed. This demonstrates that the DSA saturates in the limit of e\ufb03cient injection through the modi\ufb01cation of the shock structure (i.e., a precursor plus a weak gas subshock), which in turn reduces the injection rate. Thus the ratio Pc,2/(\u03c11u2 s) approaches to \u223c1/2 for strongly modi\ufb01ed CR shocks (Kang & Jones 2007). Finally, we \ufb01nd that the volume integrated spectrum contained in the simulation volume can be obtained simply from F(p) = R f(x, p)dx \u2248ftp(xs, p)u2t. This provides the total CR spectrum accelerated by the age t. 4. SUMMARY Although the nonlinear di\ufb00usive shock acceleration (DSA) involves rather complex plasma and MHD processes, the test-particle solution may unveil some simple yet essential pictures of the theory. In this study, we suggest an analytic form for the CR spectrum from DSA in the test-particle regime, based on simple models for thermal leakage injection and Alfv\u00b4 enic drift of self-generated resonant waves. If the particle di\ufb00usion is speci\ufb01ed (e.g., Bohm di\ufb00usion), the shock Mach number is the primary parameter that determines the e\ufb03ciency of di\ufb00usive shock acceleration. For a given shock Mach number, the fraction of injected CR particles becomes the next key factor. Since the postshock thermal velocity distribution at the injection momentum determines the amplitude of the power-law distribution in the thermal leakage injection model, the ratio Qinj = pinj/pth is the key parameter that controls the CR injection fraction and in turn determines the CR acceleration e\ufb03ciency. On the other hand, as a result of the drift of Alfv\u00b4 en waves in the precursor, the power-law slope should be revised as in equation (2), which leads to the CR spectrum much steeper than the canonical test-particle power-law. This e\ufb00ect is negligible for shocks with the Alfv\u00b4 enic Mach number, MA \u227330. For shocks with the sonic Mach number M \u227310, depending on the preshock temperature T1, the injection fraction, \u03be \u2272\u03bec \u224810\u22124(T1/106K)1/2 would lead to the downstream CR pressure, Pc,2/(\u03c11u2 s) \u22720.1. The exact values depend on other parameters such as vA. In that case, the CR spectrum at the shock location can be described by the test-particle \f\u2013 11 \u2013 power-law given in equation (10), in which the amplitude, finj, is \ufb01xed by the postshock thermal distribution at pinj given in equation (4). For supernova remnants in the hot-phase of the ISM with T1 = 106K, for example, the CR injection fraction becomes less than 10\u22124, if Qinj \u22733.8 (or \u01ebB \u22720.21). For weaker shocks with M < 5, the test-particle solution is valid even for larger injection fraction, so \u03bec < 10\u22123. 
We have shown that the CR spectrum at the shock location in time-dependent, testparticle shocks without particle escape could be approximated by the analytic solution given in equation (10), which was derived for steady-state, test-particle shocks by Caprioli et al. (2009), with the cuto\ufb00momentum set as p\u2217\u22481.2pmax(t). If the CR injection is ine\ufb03cient, which should be true for weak shocks with M \u22725 found in the intracluster medium, the test-particle solution presented in this paper should provide a good approximation. Figure 4 should provide guidance to assess if a shock with speci\ufb01c properties can be treated with the test-particle solution. With the injection rate greater than \u03bec, especially for shocks with M > 5, the spectrum deviates from the test-particle form due to the modi\ufb01ed \ufb02ow structure caused by the di\ufb00usive CR pressure. In fact, the DSA e\ufb03ciency saturates in strongly modi\ufb01ed CR shocks, because the postshock temperature gets lower and so the injection rate is reduced. Based on the results of the DSA simulations, Kang et al. (2009) suggested that CR-modi\ufb01ed shocks evolve self-similarly once the total pressure is dominated by relativistic particles, and that the CR spectrum at the subshock can be approximated by the sum of two power laws with the slopes determined by the subshock and total compression ratios with an exponential cuto\ufb00 at pmax(t). The authors would like to thank T. W. Jones for helpful comments on the paper. HK was supported by National Research Foundation of Korea through grant 2009-0075060. DR was supported by National Research Foundation of Korea through grant KRF-2007-341-C00020." + }, + { + "url": "http://arxiv.org/abs/1003.4386v1", + "title": "Cosmic Ray Spectrum in Supernova Remnant Shocks", + "abstract": "We perform kinetic simulations of diffusive shock acceleration (DSA) in Type\nIa supernova remnants (SNRs) expanding into a uniform interstellar medium\n(ISM). Bohm-like diffusion assumed, and simple models for Alfvenic drift and\ndissipation are adopted. Phenomenological models for thermal leakage injection\nare considered as well. We find that the preshock gas temperature is the\nprimary parameter that governs the cosmic ray (CR) acceleration efficiency and\nenergy spectrum, while the CR injection rate is a secondary parameter. For SNRs\nin the warm ISM, if the injection fraction is larger than 10^{-4}, the DSA is\nefficient enough to convert more than 20 % of the SN explosion energy into CRs\nand the accelerated CR spectrum exhibits a concave curvature flattening to\nE^{-1.6}. Such a flat source spectrum near the knee energy, however, may not be\nreconciled with the CR spectrum observed at Earth. On the other hand, SNRs in\nthe hot ISM, with an injection fraction smaller than 10^{-4}, are inefficient\naccelerators with less than 10 % of the explosion energy getting converted to\nCRs. Also the shock structure is almost test-particle like and the ensuing CR\nspectrum can be steeper than E^{-2}. 
With amplified magnetic field strength of\norder of 30 microG, Alfven waves generated by the streaming instability may\ndrift upstream fast enough to make the modified test-particle power-law as\nsteep as E^{-2.3}, which is more consistent with the observed CR spectrum.", + "authors": "Hyesung Kang", + "published": "2010-03-23", + "updated": "2010-03-23", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION It is believed that most of the Galactic cosmic rays (CRs) are accelerated in the blast waves driven by supernova (SN) explosions (e.g., Blandford & Eichler 1987, Reynolds 2008 and references therein). If about 10 % of Galactic SN luminosity, LSN \u22481042erg s\u22121, is transfered to the CR component, the di\ufb00usive shock acceleration (DSA) at supernova remnants (SNRs) can provide the CR luminosity, LCR \u22481041erg s\u22121 that escapes from the Galaxy. Several time-dependent, kinetic simulations of the CR acceleration at SNRs have shown that an order of 10 % of the SN explosion energy can be converted to CRs, when a fraction \u223c10\u22124 of incoming thermal particles are injected into the CR population at the subshock (e.g., Berezhko, & V\u00a8 olk 1997; Berezhko et al 2003; Kang 2006). X-ray observations of young SNRs such as SN1006 and RCW86 indicate the presence of 10-100 TeV electrons emitting nonthermal synchrotron emission immediately inside the outer SNR shock (Koyama et al. 1995; Bamba et al. 2006, Helder et al. 2009). They provide clear evidence for the e\ufb03cient acceleration of the CR electrons at SNR shocks. Moreover, HESS gammaray telescope detected TeV emission from several SNRs such as RXJ1713.7-3946, Cas A, Vela Junior, and RCW86, which may indicate possible detection of \u03c00 \u03b3rays produced by nuclear collisions of hadronic CRs with the surrounding gas (Aharonian et al. 2004, 2009; Berezhko & V\u00a8 olk 2006; Berezhko et al. 2009; Morlino et al. 2009, Abdo et al. 2010). It is still challenging to discern whether such emission could provide direct evidence for the acceleration of hadronic CRs, since \u03b3-ray emission could be produced by inverse Compton scattering of the background radiation by X-ray emitting relativistic electrons. More recently, however, Fermi LAT has observed in GeV range several SNRs interacting with molecular clouds, providing some very convincing evidence of \u03c00 decay \u03b3-rays (Abdo et al. 2009, 2010). In DSA theory, a small fraction of incoming thermal particles can be injected into the CR population, and accelerated to very high energies through their interactions with resonantly scattering Alfv\u00b4 en waves in the converging \ufb02ows across the SN shock (e.g., Drury et al. 2001). Hence the strength of the turbulent magnetic \ufb01eld is one of the most important ingredients, which govern the acceleration rate and in turn the maximum energy of the accelerated particles. If the magnetic \ufb01eld strength upstream of SNRs is similar to the mean interstellar medium (ISM) \ufb01eld of BISM \u223c5\u00b5G, the maximum energy of CR ions of charge Z is estimated to be Emax \u223c1014Z eV (Lagage & Cesarsky 1983). However, high-resolution X-ray observations of several young SNRs exhibit very thin rims, indicating the presence of magnetic \ufb01elds as strong as a few 100\u00b5G downstream of the shock (e.g., Bamba et al. 2003, Parizot et al. 2006). 
Moreover, theoretical studies have shown that e\ufb03cient magnetic \ufb01eld ampli\ufb01cation via resonant and non-resonant wave-particle interactions is an integral part of DSA (Lucek & Bell 2000, Bell 2004). If there exist such ampli\ufb01ed magnetic \ufb01elds in the upstream region of SNRs, CR ions might gain energies up to Emax \u223c1015.5Z eV, which may explain \u2013 1 \u2013 \f2 KANG the all-particle CR spectrum up to the second knee at \u223c1017 eV with rigidity-dependent energy cuto\ufb00s. A self-consistent treatment of the magnetic \ufb01eld ampli\ufb01cation has been implemented in several previous studies of nonlinear DSA (e.g., Amato & and Blasi 2006, Vladimirov et al. 2008). In Kang 2006 (Paper I, hereafter), we calculated the CR acceleration at typical remnants from Type Ia supernovae expanding into a uniform interstellar medium (ISM). With the upstream magnetic \ufb01elds of B0 = 30\u00b5G ampli\ufb01ed by the CR streaming instability, it was shown that the particle energy can reach up to 1016Z eV at young SNRs of several thousand years old, which is much higher than what Lagage & Cesarsky predicted. But the CR injection and acceleration e\ufb03ciencies are reduced somewhat due to faster Alfv\u00b4 en wave speed. With the particle injection fraction \u223c10\u22124 \u221210\u22123, the DSA at SNRs is very e\ufb03cient, so that up to 40-50 % of the explosion energy can be transferred to the CR component. We also found that, for the SNRs in the warm ISM (T0 = 104K), the accelerated CR energy spectrum should exhibit a concave curvature with the power-law slope, \u03b1 (where N(E) \u221dE\u2212\u03b1) \ufb02attening from 2 to 1.6 at E > 0.1 TeV. In fact, the concavity in the CR energy spectrum is characteristic of strong (M > 10) CR modi\ufb01ed shocks when the injection fraction is greater than 10\u22124. (e.g., Malkov & Drury 2001, Berezhko & V\u00a8 olk 1997, Blasi et al. 2005) Recently, Ave et al. (2009) have analyzed the spectrum of CR nuclei up to \u223c1014 eV measured by TRACER instrument and found that the CR spectra at Earth can be \ufb01tted by a single power law of J(E) \u221dE\u22122.67. Assuming an energy-dependent propagation path length (\u039b \u221dE\u22120.6), they suggested that a soft source spectrum, N(E) with \u03b1 \u223c2.3\u22122.4 is preferred by the observed data. However, the DSA predicts that \u03b1 = 2.0 for strong shocks in the test-particle limit and even smaller values for CR modi\ufb01ed shocks in the e\ufb03cient acceleration regime as shown in Paper I. Thus in order to reconcile the DSA prediction with the TRACER data the CR acceleration e\ufb03ciency at typical SNRs should be minimal and perhaps no more than 10 % of the explosion energy transferred to CRs (i.e., test-particle limit). Moreover, recent Fermi-LAT observations of Cas A, which is only 330 years old and has just entered the Sedov phase, indicate that only about 2% of the explosion energy has been transfered to CR electrons and protons, and that the soft proton spectrum with E\u22122.3 is preferred to \ufb01t the observed gamma-ray spectrum (Abdo et al. 2010). According to Paper I, such ine\ufb03cient acceleration is possible only for SNRs in the hot phase of the ISM and for the injected particle fraction smaller than 10\u22124. One way to soften the CR spectrum beyond the canonical testparticle slope (\u03b1 > 2) is to include the Alfv\u00b4 enic drift in the precursor, which reduces the velocity jump across the shock. 
Zirakashvili & Ptuskin (2008) showed that the Alfv\u00b4 enic drift in the ampli\ufb01ed magnetic \ufb01elds both upstream and downstream can drastically soften the accelerated particle spectrum. We will explore this issue using our numerical simulations below. Caprioli et al. (2009) took a di\ufb00erent approach to reconcile the concave CR spectrum predicted by nonlinear DSA theory with the softer spectrum inferred from observed J(E). They suggested that the CR spectrum at Earth is the sum of the time integrated \ufb02ux of the particles that escape from upstream during the ST stage and the \ufb02ux of particles con\ufb01ned in the remnant and escaping at later times. They considered several cases and found the injected spectrum could be softer than the concave instantaneous spectrum at the shock. The main uncertainties in their calculations are related with speci\ufb01c recipes for the particle escape. It is not well understood at the present time how the particles escape through a free escape boundary (xesc) located at a certain distance upstream of the shock or through a maximum momentum boundary due to lack of (selfgenerated) resonant scatterings above an escape momentum. The escape or release of CRs accelerated in SNRs to the ISM remains largely unknown and needs to be investigated further. One of the key aspects of the DSA model is the injection process through which suprathermal particles in the Maxwellian tail get accelerated and injected into the Fermi process. However, the CR injection and consequently the acceleration e\ufb03ciency still remain uncertain, because complex interplay among CRs, waves, and the underlying gas \ufb02ow (i.e., self-excitation of waves, resonant scatterings of particles by waves, and non-linear feedback to the gas \ufb02ow) is all modeldependent and not understood completely. In this paper, we adopted two di\ufb00erent injection recipes based on thermal leakage process, which were considered previously by us and others. Then we have explored the CR acceleration at SNR shocks in the different temperature phases (i.e., di\ufb00erent shock Mach numbers) and with di\ufb00erent injection rates. Details of the numerical simulations and model parameters are described in \u00a7II. The simulation results are presented and discussed in \u00a7III, followed by a summary in \u00a7IV. II. NUMERICAL METHOD (a) Spherical CRASH code Here we consider the CR acceleration at a quasiparallel shock where the magnetic \ufb01eld lines are parallel to the shock normal. So we solve the standard gasdynamic equations with CR pressure terms added in the Eulerian formulation for one dimensional spherical symmetric geometry. The basic gasdynamic equations and details of the spherical CRASH (Cosmic-Ray Amr SHock) code can be found in Paper I and Kang & Jones (2006). In the kinetic equation approach to numerical study of DSA, the following di\ufb00usion-convection equation for the particle momentum distribution, f(p), is solved \fCosmic Ray Spectrum in SNRs 3 along with suitably modi\ufb01ed gasdynamic equations (e.g., Kang & Jones 2006): \u2202g \u2202t + (u + uw)\u2202g \u2202r = 1 3r2 \u2202 \u2202r \u0002 r2(u + uw) \u0003 \u0012\u2202g \u2202y \u22124g \u0013 + 1 r2 \u2202 \u2202r \u0014 r2\u03ba(r, y)\u2202g \u2202r \u0015 , (1) where g = p4f, with f(p, r, t) the pitch angle averaged CR distribution, and y = ln(p), and \u03ba(r, y) is the di\ufb00usion coe\ufb03cient parallel to the \ufb01eld lines . So the proton number density is given by nCR,p = 4\u03c0 R f(p)p2dp. 
For simplicity we express the particle momentum, p in units of mpc and consider only the proton component. The velocity uw represents the e\ufb00ective relative motion of scattering centers with respect to the bulk \ufb02ow velocity, u. The mean wave speed is set to the Alfv\u00b4 en speed, i.e., uw = vA = B/\u221a4\u03c0\u03c1 in the upstream region. This term re\ufb02ects the fact that the scattering by Alfv\u00b4 en waves tends to isotropize the CR distribution in the wave frame rather than the gas frame. In the postshock region, uw = 0 is assumed, since the Alfv\u00b4 enic turbulence in that region is probably relatively balanced. This reduces the velocity di\ufb00erence between upstream and downstream scattering centers compared to the bulk \ufb02ow, leading to less e\ufb03cient DSA. This in turn a\ufb00ects the CR spectrum, and so the \u2018modi\ufb01ed\u2019 test-particle slope can be estimated as qtp = 3(u0 \u2212vA) u0 \u2212vA \u2212u2 (2) where f(p) \u221dp\u2212qtp is assumed (e.g., Kang et al. 2009). Hereafter we use the subscripts \u20190\u2019, \u20191\u2019, and \u20192\u2019 to denote conditions far upstream of the shock, immediately upstream of the gas subshock and immediately downstream of the subshock, respectively. Thus the drift of Alfv\u00b4 en waves in the upstream region tends to soften the CR spectrum from the canonical test-particle spectrum of f(p) \u221dp\u22124 if the Alfv\u00b4 en Mach number (MA = us/vA) is small. We note \u03b1 = q \u22122 for relativistic energies. For example, for a strong shock with u2 = u0/4 in the test particle limit, we can obtain the observed value of \u03b1 = 2.3 if vA = 0.173u0. Gas heating due to Alfv\u00b4 en wave dissipation in the upstream region is represented by the term W(r, t) = \u2212\u03c9HvA \u2202Pc \u2202r , (3) where Pc = (4\u03c0mpc2/3) R g(p)dp/ p p2 + 1 is the CR pressure. This term is derived from a simple model in which Alfv\u00b4 en waves are ampli\ufb01ed by streaming CRs and dissipated locally as heat in the precursor region (e.g., Jones 1993). As was previously shown in SNR simulations (e.g., Berezhko & V\u00a8 olk 1997, Kang & Jones 2006), precursor heating by wave dissipation reduces the subshock Mach number thereby reducing DSA ef\ufb01ciency. The parameter \u03c9H is introduced to control the degree of wave dissipation. We set \u03c9H = 1 for all models unless stated otherwise. Accurate solutions to the CR di\ufb00usion-convection equation require a computational grid spacing signi\ufb01cantly smaller than the particle di\ufb00usion length, \u2206x \u226a xd(p) = \u03ba(p)/us. With Bohm-like di\ufb00usion coe\ufb03cient, \u03ba(p) \u221dp, a wide range of length scales must be resolved in order to follow the CR acceleration from the injection energy (typically pinj \u223c10\u22122) to highly relativistic energy (p \u226b1). This constitutes an extremely challenging numerical task, requiring rather extensive computational resources. In order to overcome this dif\ufb01culty, we have developed CRASH code in 1D planeparallel geometry (Kang et al. 2001) and in 1D spherical symmetric geometry (Kang & Jones 2006) by combining Adaptive Mesh Re\ufb01nement technique and subgrid shock tracking technique. Moreover, we solve the \ufb02uid and di\ufb00usion-convection equations in a frame comoving with the outer spherical shock in order to implement the shock tracking technique e\ufb00ectively in an expanding spherical geometry. 
In the comoving grid, the shock remains at the same location, so the compression rate is applied consistently to the CR distribution at the subshock, resulting in much more accurate and efficient low-energy CR acceleration.

(b) Injection Recipes for Thermal Leakage

The injection rate with which suprathermal particles are injected into CRs at the subshock depends in general upon the shock Mach number, the field obliquity angle, and the strength of the Alfvén turbulence responsible for scattering. In thermal leakage injection models, suprathermal particles well into the exponential tail of the postshock Maxwellian distribution leak upstream across a quasi-parallel shock (Malkov & Völk 1998; Malkov & Drury 2001). Currently, however, these microphysical issues are poorly understood, and any quantitative prediction of the macroscopic injection rate requires an extensive understanding of complex plasma interactions. Thus this process has been handled numerically by adopting phenomenological injection schemes in which the particles above a certain injection momentum p_inj cross the shock and get injected into the CR population. There exist two types of such injection models considered previously by several authors. In a simpler form, p_inj represents the momentum boundary between the thermal and CR populations, and so the particles are injected at this momentum (e.g., Kang & Jones 1995, Berezhko & Völk 1997, Blasi et al. 2005). The injection momentum is then expressed as
p_{inj} = R_{inj} p_{th},  (4)
where R_inj is a constant, p_th = \sqrt{2 k_B T_2 m_p} is the thermal peak momentum of the Maxwellian distribution of the immediate postshock gas with temperature T_2, and k_B is the Boltzmann constant. The CR distribution at p_inj is then fixed by the Maxwellian distribution,
f(p_{inj}) = n_2 \left( \frac{m_p}{2\pi k_B T_2} \right)^{3/2} \exp(-R_{inj}^2),  (5)
where n_2 is the postshock proton number density. Thus the constant parameter R_inj controls the injection rate in this model. Here we refer to this as 'injection recipe B' and consider the cases of R_inj = 3.6 and 3.8.

Table 1. Model Parameters
Model^a   n_H (ISM) (cm^-3)   T_0 (K)      E_0 (10^51 ergs)   B_µ (µG)   r_o (pc)   t_o (years)   u_o (10^4 km s^-1)   P_o (10^-6 erg cm^-3)
WA/WB     0.3                 3.3 × 10^4   1.                 30         3.19       255.          1.22                 1.05
MA/MB     0.03                10^5         1.                 30         6.87       549.          1.22                 1.05 × 10^-1
HA/HB     0.003               10^6         1.                 30         14.8       1182.         1.22                 1.05 × 10^-2
^a 'W', 'M', and 'H' stand for the warm, intermediate, and hot phases of the ISM, respectively, while 'A' and 'B' stand for the injection recipes A and B, respectively, described in II (b).

In Kang et al. (2002), on the other hand, a smooth "transparency function", τ_esc(ε_B, υ), is adopted rather than the step-like filter function of injection recipe B. This function expresses the probability of suprathermal particles at a given velocity, υ, leaking upstream through the postshock MHD waves. One free parameter controls this function: ε_B = B_0/B_⊥, the inverse of the ratio of the amplitude of the postshock MHD wave turbulence, B_⊥, to the general magnetic field aligned with the shock normal, B_0 (Malkov & Völk 1998). In this model, the leakage probability is τ_esc > 0 above p_1 ≈ m_p u_2 (1 + 1.07/ε_B) ∝ p_th, and the "effective" injection momentum is a few times p_1. So the injection momentum can be expressed as
p_{inj} = Q_{inj}(M_s, ε_B) p_{th}.
(6) Note that the ratio Qinj is a function of the subshock Mach number, Ms, as well as the parameter \u01ebB, while the constant ratio Rinj is independent of Ms. The value of Qinj is larger (and so the injection rate is smaller) for weaker subshocks and for smaller \u01ebB (see Kang et al. 2002). In an evolving CR modi\ufb01ed shock, the subshock weakens as the precursor develops due to nonlinear feedback of the CR pressure and so the injection rate decreases in time. We refer this as \u2018injection recipe A\u2019 and consider 0.2 \u2264\u01ebB \u22640.3 here. In Paper I we only considered the gas with protons (i.e., mean molecular weight \u00b5 = 1), but here we assume fully ionized plasma with cosmic abundance (\u00b5 = 0.61). As a result, for given gas pressure and density, the temperature is lower and so slightly larger \u01ebB is needed to obtain the similar level of injection as in Paper I. Note that \u01ebB = 0.16 \u22120.2 in Paper I. The e\ufb03ciency of the particle injection is quanti\ufb01ed by the fraction of particles swept through the shock that have been injected into the CR distribution: \u03be(t) = R 4\u03c0r2dr R 4\u03c0f(p, r, t)p2dp R 4\u03c0r2 sn0usdt , (7) where n0 is the proton number density far upstream and rs is the shock radius. Recent observations of nonthermal radiation from several SNRs indicate that the injection fraction is about \u03be \u223c10\u22124 (e.g., Berezhko et al. 2009, Morlino et al. 2009). In our simulations, initially there is no pre-existing CRs and so all CR particles are freshly injected at the shock. (c) A Bohm-like Di\ufb00usion Model Self-excitation of Alfv\u00b4 en waves by the CR streaming instability in the upstream region is an integral part of the DSA (Bell 1978; Lucek & Bell 2004). The particles are resonantly scattered by those waves, di\ufb00use across the shock, and get injected into the Fermi \ufb01rst-order process. These complex interactions are represented by the di\ufb00usion coe\ufb03cient, which is expressed in terms of a mean scattering length, \u03bb, and the particle speed, \u03c5, as \u03ba(x, p) = \u03bb\u03c5/3. The Bohm di\ufb00usion model is commonly used to represent a saturated wave spectrum (i.e., \u03bb = rg, where rg is the gyro-radius), \u03baB(p) = \u03banp2/ (p2 + 1)1/2. Here \u03ban = mc3/(3eB) = 3.13 \u00d7 1022cm2s\u22121B\u22121 \u00b5 , and B\u00b5 is the magnetic \ufb01eld strength in units of microgauss. As in Paper I, we adopt a Bohmlike di\ufb00usion coe\ufb03cient that includes a weaker nonrelativistic momentum dependence, \u03ba(r, p) = \u03ban \u00b7 p \u03c10 \u03c1(r). (8) Since we do not follow explicitly the ampli\ufb01cation of magnetic \ufb01elds due to streaming CRs, we simply assume that the \ufb01eld strength scales with compression and so the di\ufb00usion coe\ufb03cient scales inversely with density. III. Simulations of Sedov-Taylor Blast Waves (a) SNR Model Parameters As in Paper I, we consider a Type Ia supernova explosion with the ejecta mass, Mej = 1.4M\u2299, expand\fCosmic Ray Spectrum in SNRs 5 Fig. 1.\u2014 Time evolution of SNR model MA with \u01ebB = 0.25 (upper panels) and SNR model MB with Rinj = 3.6 (lower panels) at t/to = 1., 3., 6., 10. and 15. In the right panels, heavy lines are for the CR pressure, while thin lines are for the gas pressure. The model parameters are Mej = 1.4M\u2299, Eo = 1051 ergs, nH = 0.03cm\u22123, T0 = 105K, and B0 = 30\u00b5G. See Table 1 for the normalization constants. ing into a uniform ISM. 
All models have the explosion energy, Eo = 1051 ergs. Previous studies have shown that the shock Mach number is the key parameter determining the evolution and the DSA e\ufb03ciency, although other processes such as the particle injection, the Alfv\u00b4 enic drift and dissipation do play certain roles (e.g. Kang & Jones 2002, 2007). So here three phases of the ISM are considered: the warm phase with nH = 0.3cm\u22123 and T0 = 3 \u00d7 104K, the hot phase with nH = 0.003 cm\u22123 and T0 = 106K, and the intermediate phase with nH = 0.03 cm\u22123 and T0 = 105K. The pressure of the background gas is PISM \u223c10\u221212 erg cm\u22123. Model parameters are summarized in Table 1. For example, \u2018WA\u2019 model stands for the warm phase and the injection recipe A, while \u2018MB\u2019 model stands for the intermediate phase and the injection recipe B. Recent X-ray observations of young SNRs indicate a magnetic \ufb01eld strength much greater than the mean ISM \ufb01eld of 5\u00b5G (e.g., Berezhko et al. 2003; V\u00a8 olk et al. 2005). Thus, to represent this e\ufb00ect we take the upstream \ufb01eld strength, B0 = 30\u00b5G. The strength of magnetic \ufb01eld determines the size of di\ufb00usion coe\ufb03cient, \u03ban, and the drift speed of Alfv\u00b4 en waves relative to the bulk \ufb02ow. The Alfv\u00b4 en speed is given by vA = vA,0(\u03c1/\u03c10)\u22121/2 where vA,0 = (1.8 kms\u22121)B\u00b5/\u221anH. So in the hot phase of the ISM (HA/HB models), vA,0 = 986 kms\u22121 and vA,0/us \u22480.175 at t = to, The physical quantities are normalized, both in the numerical code and in the plots below, by the following constants: ro = \u00123Mej 4\u03c0\u03c1o \u00131/3 , to = \u0012\u03c1or5 o Eo \u00131/2 , \f6 KANG Fig. 2.\u2014 Immediate pre-subshock density, \u03c11, post-subshock density, \u03c12, post-subshock CR and gas pressure in units of the ram pressure of the unmodi\ufb01ed Sedov-Taylor solution, \u03c10U 2 ST \u221d(t/to)\u22126/5, the CR injection parameter, \u03be, and subshock Mach number, Ms are plotted for models WA (left panels), MA (middle panels), and HA (right panels). See Table 1 for the model parameters. The injection recipe A with \u01ebB = 0.25 is adopted. For WA and HA models, the dashed lines are for the runs with a reduced wave heating parameter, wH = 0.5. uo = ro/to, \u03c1o = (2.34 \u00d7 10\u221224gcm\u22123)nH, Po = \u03c1ou2 o. These values are also given in Table 1 for reference. Note that these physical scales depend only on nH, since Mej and Eo are the same for all models. For a SNR propagating into a uniform ISM, the highest momentum, pmax, is achieved at the beginning of the Sedov-Taylor (ST hereafter) stage and the transfer of explosion energy to the CR component occurs mostly during the early ST stage (e.g., Berezhko et al. 1997). In order to take account of the CR acceleration from free expansion stage through ST stage, we begin the calculations with the ST similarity solution at t/to = 0.5 and terminated them at t/to = 15. See Paper I for further discussion on this issue. IV. RESULTS (a) Remnant Evolution Fig. 1 shows the evolution of SNRs in the intermediate temperature phase with \u01ebB = 0.23 (injection recipe A) and with Rinj = 3.6 (injection recipe B). The spatial pro\ufb01le and the evolution of SNRs are quite similar in the two models, implying that detail di\ufb00erence between the two injection recipes is not crucial. 
This is because the subshock Mach number reduces to Ms \u22484 at the early stage and remains the same until the end of simulations in the both models, resulting in similar evolutionary behavior of \u03be (see Fig. 2). In these e\ufb03cient acceleration models, by the early ST stage, t/to \u223c1, the forward shock has already become dominated by the CR pressure and the total density compression ratio becomes \u03c12/\u03c10 \u22486 in both models. Spatial distri\fCosmic Ray Spectrum in SNRs 7 Fig. 3.\u2014 The same as Fig. 2 except that \u01ebB = 0.2 (solid lines) or 0.3 (dashed lines) is adopted bution of the CR pressure widens and becomes broader than that of the gas pressure at the later stage. The precursor length scale is given by the di\ufb00usion length of the highest momentum particles, ld,max \u2248 0.1 R us(t)dt, independent of the di\ufb00usion coe\ufb03cient \u03ban (Kang et al. 2009). In the test-particle limit, the shock would follow the ST similarity solution given by UST/uo = 0.4\u03bes(t/to)\u22120.6, (9) where \u03bes = 1.15167 is the similarity parameter (Spritzer 1978). Then ld,max/ro \u22480.1\u03bes(t/to)0.4. Since the shock radius of the ST solution is rST/ro = \u03bes(t/to)0.4, the relative width of the precursor is estimated to be ld,max/rs \u22480.1, independent of \u03ban. In Fig. 1 one can see narrow precursors in the density and CR pressure distribution, consistent with this estimate. The shock Mach number is the primary parameter that determines the CR acceleration e\ufb03ciency, while the injection parameters \u01ebB and Rinj are the secondary parameters. So the temperature (i.e., sound speed) of the background ISM is important. The Mach numbers of the initial shock are Ms,i \u2248310, 180, and 60 in the warm, intermediate, hot ISM models, respectively. For the warm (WA/WB) and intermediate (MA/MB) models, the CR acceleration is e\ufb03cient with the postshock CR pressure, 0.2 \u223c < Pc,2/(\u03c10U 2 ST ) \u223c < 0.4, and the shock is signi\ufb01cantly modi\ufb01ed. We will refer these models as \u2018e\ufb03cient acceleration models\u2019. For HA/HB models, the CR acceleration is ine\ufb03cient with Pc,2/(\u03c10U 2 ST ) < 0.1 and the shock is almost test-particle like. So the hot ISM models are \u2018ine\ufb03cient acceleration models\u2019. Regarding the injected particle fractions, the models with \u01ebB > \u223c0.23 or Rinj = 3.6 \u22123.8 have the injection fraction, \u03be > 10\u22124 and represent \u2018e\ufb03cient injection models\u2019. The models with \u01ebB = 0.2 have \u03be \u224810\u22124.2 and are \u2018ine\ufb03cient injection models\u2019. Figs. 2-4 show the evolution of shock properties such as the compression factors, postshock pressures, the injection fraction, and subshock Mach number for various models. In Fig. 2, the models with \u01ebB = 0.25 are shown for \u03c9H = 0.5 (dashed lines) and \u03c9H = 1.0 (solid \f8 KANG Fig. 4.\u2014 The same as Fig. 2 except that the injection recipe B with Rinj = 3.6 or Rinj = 3.8 is adopted lines). We can see that the more e\ufb03cient wave dissipation (i.e., larger \u03c9H) reduces the CR acceleration and the \ufb02ow compression. Here WA and MA models are characterized by both e\ufb03cient injection and e\ufb03cient acceleration, while HA models show an e\ufb03cient injection but ine\ufb03cient acceleration. Most of shock properties seem to approach to more or less time-asymptotic values before t/to = 1. As the precursor grows, the subshock weakens to 3 \u2264Ms \u22645 in these models. 

The injected CR particle fraction is about ξ ≈ 10^{-3}. The postshock CR pressure is about P_{c,2}/(ρ_0 U_ST^2) ≈ 0.4, 0.25, and 0.1 for the WA, MA, and HA models, respectively. The compression factor in the precursor varies a little among different models, typically ρ_1/ρ_0 ≈ 2-3. The total density compression is ρ_2/ρ_0 ≈ 7-8 for WA models, 5.5 for the MB model, and 4.5 for HA models, indicating the CR modified shock structure. Comparison of Figs. 2 and 3 tells us how the CR acceleration depends on the injection parameter ε_B and consequently on ξ. The injection fraction is ξ ≈ 10^{-4.2} for ε_B = 0.2 (inefficient injection models), ξ ≈ 10^{-3} - 10^{-3.5} for 0.23 ≤ ε_B ≤ 0.25, and ξ ≈ 10^{-2.5} - 10^{-2} for ε_B = 0.3. In the inefficient injection models, the subshock Mach number and postshock properties change rather gradually throughout the Sedov stage and the shock is almost test-particle like with ρ_1/ρ_0 ≈ 1. On the other hand, for ε_B = 0.3 the injection fraction seems much higher than what is inferred from recent observations of nonthermal emission from young SNRs (e.g., Morlino et al. 2009). The postshock CR pressure, P_{c,2}/(ρ_0 U_ST^2), is roughly independent of the injection fraction as long as ξ ≳ 10^{-3.5} (ε_B ≳ 0.23). But for inefficient injection models with ε_B = 0.2, this ratio is reduced significantly. We can see that the models with ε_B = 0.25 (Fig. 2) and the models with R_inj = 3.6 (Fig. 4) have similar results. This confirms that the shock Mach number is the primary factor that controls the CR acceleration. For the injection recipe B, the injection rate does not depend on the subshock Mach number, so the evolution of ξ is similar among the WB, MB, and HB models. The models with R_inj = 3.8 have an about 3 times smaller injection fraction and evolve more slowly, compared to those with R_inj = 3.6. But both models seem to approach similar states at t/t_o > 10. [Fig. 5 caption: The CR distribution at the shock, g(r_s, p), and its slope, q(p) = -d(ln g(r_s, p))/d ln p + 4, the volume integrated CR number, G(p) = ∫ g(r, p) 4π r^2 dr, and its slope, Q(p) = -d(ln G(p))/d ln p + 4, are shown at t/t_o = 1, 3, 6, 10, and 15 for models WA (left panels), MA (middle panels), and HA (right panels). The injection recipe A with ε_B = 0.25 is adopted.] (b) Cosmic Ray Spectrum The rate of momentum gain for a particle undergoing DSA is given by
\frac{dp}{dt} = \frac{u_0 - u_2}{3}\left(\frac{u_0}{\kappa_0} + \frac{u_2}{\kappa_2}\right) p.  (10)
Assuming that the shock approximately follows the ST solution and that the compression ratio is about four, the maximum momentum of protons accelerated from t_i to t can be calculated as
p_{max} \approx 0.53\, \frac{u_o^2 t_o}{\kappa_n} \left[\left(\frac{t_i}{t_o}\right)^{-0.2} - \left(\frac{t}{t_o}\right)^{-0.2}\right].  (11)
For our simulations started from t_i/t_o = 0.5, this asymptotes to p_max ≈ 0.61 (u_o^2 t_o/κ_n) ∼ 10^{6.5} at large t, which corresponds to E_max ≈ 10^{15.5} eV. Figs. 5-7 show the CR distribution function at the shock, g(r_s, p), and its slope q(p) = -d(ln g(r_s, p))/d ln p + 4, and the volume integrated CR spectrum, G(p) = ∫ 4π g(r, p) r^2 dr, and its slope Q(p) = -d(ln G(p))/d ln p + 4. The thermal population is included in the plot of g(r_s, p) in order to demonstrate how it is smoothly connected with the CR component through thermal leakage.
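Equation (11) is straightforward to implement. The minimal sketch below codes it up and verifies two numbers quoted in the text: the asymptotic coefficient 0.53 × 0.5^{-0.2} ≈ 0.61, and the conversion of the quoted p_max ∼ 10^{6.5} (in units of m_p c) into E_max ≈ 10^{15.5} eV. The values of u_o, t_o, and κ_n themselves are model dependent and are left as inputs.

```python
import numpy as np

# Equation (11): p_max(t) in units of m_p c for a shock following the
# Sedov-Taylor solution. u_o, t_o (cgs) and kappa_n (cm^2/s) are inputs
# taken from the model tables; they are deliberately not hard-coded here.
def p_max(t_over_to, u_o, t_o, kappa_n, ti_over_to=0.5):
    return 0.53 * (u_o**2 * t_o / kappa_n) * (ti_over_to**-0.2 - t_over_to**-0.2)

# Asymptotic coefficient for t >> t_o, with t_i/t_o = 0.5:
print("0.53 * 0.5^-0.2 = %.2f   (text: ~0.61)" % (0.53 * 0.5**-0.2))

# Energy corresponding to the quoted p_max ~ 10^6.5 (protons, E ~ p c for p >> 1):
m_p_c2_eV = 938.272e6
print("E_max ~ 10^%.1f eV" % np.log10(10**6.5 * m_p_c2_eV))
```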
For the volume integrated spectrum, only the CR component is shown. We note that in our simulations the particles escape from the simulation box only by di\ufb00using upstream and no escape condition is enforced. Thus G(p) represents the spectrum of the particles con\ufb01ned by the shock, including the particles in the upstream region. From the spectra in Figs. 5-7, we can see that pmax approaches up to \u223c1015 \u22121015.5 eV/c at t/to = 15 for all models, which is consistent \f10 KANG Fig. 6.\u2014 The same as Fig. 5 except that \u01ebB = 0.2 is adopted. The injection and acceleration e\ufb03ciencies are low. with the estimate given in Eq. (11). In Fig. 5, the CR spectra at the shock in the models with \u01ebB = 0.25 (high injection rate with \u03be \u224810\u22123) exhibit the canonical nonlinear concave curvature. This is a consequence of the following two e\ufb00ects: the large compression factor across the shock structure and the decreasing injection rate due to the slowing of the shock speed. With the CR modi\ufb01ed \ufb02ow structure, the slope near pmax becomes harder with qt = 3s\u03c3t/(s\u03c3t \u22121), where s = 1 \u2212\u03c5A/us is the modi\ufb01cation factor due to the Alfv\u00b4 enic drift and \u03c3t = \u03c12/\u03c10 \u226b4 is the total shock compression ratio. If the subshock Mach number reduces to Ms \u22483 \u22125, the test-particle slope at low momenta becomes qs = 3s\u03c3s/(s\u03c3s \u22121) \u22484.2 \u22124.5, where \u03c3s = \u03c12/\u03c11 is the compression ratio across the subshock. The particle \ufb02ux through the shock, \u03c10us, decreases, because the SNR shock slows down in time. At the same time the injection rate decreases because the injection process is less e\ufb03cient at weaker shocks. The combined e\ufb00ects result in the reduction of the amplitude of f(rs, p) near pinj. Consequently, the CR spectrum at lower momentum steepens and decreases in time. Fig. 5 demonstrates that the modi\ufb01ed \ufb02ow structure along with slowing down of the shock speed accentuates the concavity of the CR spectrum in much higher degrees than what is normally observed in planeparallel shocks. However, the volume integrated spectrum G(p) is more relevant for unresolved observations of SNRs or for the total CR spectrum produced by SNRs. The concavity of G(p) is much less pronounced than that of g(rs, p), and its slope Q(p) varies 4.2-4.4 at low momentum and 3.5-4.0 at high momentum among di\ufb00erent models. We see that G(p) stays almost constant for t/to > \u223c6, especially for 1011 < E < 1015eV, while extending in the momentum space with decreasing pmin and increasing pmax. This can be understood as follows. From Figs. 2-4, Pc,2/(\u03c10U 2 ST) \u223cconstant for t/to > 1 (except in the ine\ufb03cient injection model with \u01ebB = 0.2), so the CR pressure evolves like Pc,2 \u221dt\u22126/5 (see also Fig. 1). The total CR energy associated with the remnant is roughly ECR \u221dPcR3 ST \u223cconstant. \fCosmic Ray Spectrum in SNRs 11 Fig. 7.\u2014 The same as Fig. 5 except that the injection recipe B with Rinj = 3.8 is adopted. Since ECR \u221dR G(p)dp approximately, so G(p) should approach to an asymptotic form for t/to \u226b1. In other words, the distribution function g(r/rs, p) decreases as t\u22126/5 in terms of the normalized coordinate, r/rs, but the volume occupied by the remnant increases as t6/5, resulting in more or less constant G(p). 
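The limiting slopes quoted here follow from q = 3sσ/(sσ - 1) once σ_s or σ_t is specified. A minimal sketch, with the subshock compression taken from the Rankine-Hugoniot relation and the drift factor s set to unity for the printed range (the final line with a finite v_A/u_s is an illustrative assumption, not a tabulated model value):

```python
# Limiting slopes of f(p): q = 3 s sigma / (s sigma - 1), with s = 1 - v_A/u_s.
# sigma_s applies near p_inj (subshock only), sigma_t near p_max (whole structure).
def slope(sigma, vA_over_us=0.0):
    s = 1.0 - vA_over_us
    return 3.0 * s * sigma / (s * sigma - 1.0)

def sigma_sub(Ms, gamma=5.0/3.0):
    """Rankine-Hugoniot compression of the gas subshock for Mach number Ms."""
    return (gamma + 1.0) * Ms**2 / ((gamma - 1.0) * Ms**2 + 2.0)

for Ms in (3.0, 4.0, 5.0):
    print("Ms=%.0f: sigma_s=%.2f  q_s=%.2f" % (Ms, sigma_sub(Ms), slope(sigma_sub(Ms))))
print("sigma_t=6: q_t=%.2f" % slope(6.0))

# Alfvenic drift (s < 1) steepens both slopes somewhat; e.g. v_A/u_s = 0.1 (assumed):
print("Ms=4 with v_A/u_s=0.1: q_s=%.2f" % slope(sigma_sub(4.0), 0.1))
```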
Using this and the fact that pmax asymptotes to 0.6(u2 oto/\u03ban) at large t, we can predict that the form of G(p) would remain about the same at much later time. As discussed in the Introduction, the CR spectrum observed at Earth is J(E) \u221dE\u22122.67 for 109 < E < 1014 eV. This implies that the source spectrum should be roughly N(E) \u221dE\u2212\u03b1 with \u03b1 = 2.3 \u22122.4, if we assume an energy-dependent path length, \u039b(R) \u221dR\u22120.6 (where R = pc/Ze is the rigidity) (Ave et al. 2009). If in fact the CR source spectrum at SNRs, G(p), is assumed to be released into the ISM at the end of ST stage, N(E)dE \u221dG(p)p2dp is too \ufb02at to be consistent with the observation. Thus from the spectra G(p) in Fig. 5 we can infer that SNRs expanding into warm or intermediate phases of the ISM cannot be the dominant sources of Galactic CRs. Even with the hot ISM models, the canonical testparticle spectrum, N(E) \u221dE\u22122 might be still too \ufb02at. If we consider the e\ufb00ects of Alfv\u00b4 en wave drift, however, the modi\ufb01ed test-particle slope will be given by Eq. (2) for strong plane-parallel shocks. One can estimate that vA \u22481000kms\u22121 for nH = 0.003cm\u22123 and B0 = 30\u00b5G, which leads to \u03b1 \u22482.3. We show in Fig. 6 the CR spectra for ine\ufb03cient injection models with \u01ebB = 0.2. The spectra are less \ufb02at, compared with those of e\ufb03cient injection models shown in Fig. 5. Especially, the HA model with \u01ebB = 0.2 has the slope, \u03b1 = 2.1\u22122.3 for 1011 < E < 1015 eV. This could be more compatible with observed J(E) at Earth. Fig. 7 shows the spectra for the models with injection recipe B (Rinj = 3.8). Again G(p) of HB model shows the slope, \u03b1 = 2.1\u22122.3, for 1011 < E < 1015 eV. In fact these spectra are quite similar to those for HA model shown in Fig. 6. \f12 KANG Fig. 8.\u2014 Integrated thermal, kinetic and CR energies inside the simulation volume as a function of time for di\ufb00erent models. The injection parameter is from top to bottom, \u01ebB = 0.2, \u01ebB = 0.25, Rinj = 3.8, and Rinj = 3.6. See Table 1 for model parameters. (c) Energy Conversion Factor Finally, Fig. 8 shows the integrated energies, Ei/Eo = 4\u03c0 R eir2dr, where eth, ekin, and eCR are the densities of thermal, kinetic and cosmic ray energy, respectively. The kinetic energy reduces only slightly and is similar for all models. The total CR energy accelerated by t/to = 15 is ECR/Eo = 0.35, 0.20, and 0.05 for WA, MA, and HA models, respectively, for \u01ebB = 0.2. In the e\ufb03cient injection models with \u01ebB = 0.25 or Rinj = 3.6, the evolution of SNRs is quite similar, and the CR energy fraction approaches to ECR/Eo = 0.56, 0.43, and 0.25 for WA/WB, MA/MB, and HA/HB models, respectively. So in terms of the energy transfer fraction, the CR acceleration in the warm ISM models seems to be too e\ufb03cient. But one has to recall that the CR injection rate may depend on the mean magnetic \ufb01eld direction relative to the shock surface. In a more realistic magnetic \ufb01eld geometry, where a uniform ISM \ufb01eld is swept by the spherical shock, only 10-20 % of the shock surface has a quasi-parallel \ufb01eld geometry (V\u00a8 olk et al. 2003). If the injection rate were to be reduced signi\ufb01cantly at perpendicular shocks, one may argue that the CR energy conversion factor averaged over the entire shock surface could be several times smaller than the factors shown in Fig. 8. 
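As a rough consistency check of the α ≈ 2.3 estimate above, the sketch below recomputes the hot-phase Alfvén speed from the scaling given in §III(a) and evaluates the drift-modified test-particle slope written out in the Summary, assuming a strong-shock velocity jump u_2 = u_0/4 and the quoted ratio v_A/u_s ≈ 0.175; both of those inputs are my choices for illustration.

```python
import numpy as np

# Hot-phase Alfven speed from v_A0 = (1.8 km/s) * B_mu / sqrt(n_H):
B_mu, n_H = 30.0, 0.003
v_A = 1.8 * B_mu / np.sqrt(n_H)             # km/s
print("v_A = %.0f km/s (text: ~1000 km/s)" % v_A)

# Drift-modified test-particle slope q_tp = 3(u0 - vA)/(u0 - vA - u2),
# evaluated with u2 = u0/4 (strong-shock, test-particle assumption) and
# the quoted ratio v_A/u_s ~ 0.175 for the hot phase at t = t_o.
x = 0.175                                    # v_A / u_0
q_tp = 3.0 * (1.0 - x) / (1.0 - x - 0.25)
alpha = q_tp - 2.0                           # N(E) ~ E^-alpha for relativistic CRs
print("q_tp = %.2f  ->  alpha = %.2f (text: ~2.3)" % (q_tp, alpha))
```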
On the other hand, Giacalone (2005) showed that the protons can be injected e\ufb03ciently even at perpendicular shocks in fully turbulent \ufb01elds due to \ufb01eld line meandering. In such case the injection rate at perpendicular shocks may not be much smaller than that at parallel shocks and the CR energy conversion may be similar. Then SNRs in the warm phase of the ISM seem to generate too much CR energy. In order to meet the requirement of 10 % energy conversion and at the same time to reconcile with the CR spectrum observed at Earth, SNRs expanding into the hot phase of the ISM should be the dominant accelerators of Galactic CRs below 1015eV. \fCosmic Ray Spectrum in SNRs 13 V. SUMMARY The evolution of cosmic ray modi\ufb01ed shocks depends on complex interactions between the particles, waves in the magnetic \ufb01eld, and underlying plasma \ufb02ow. We have developed numerical tools that can emulate some of those interactions and incorporated them into a kinetic numerical scheme for DSA, CRASH code (Kang et al. 2002, Kang & Jones 2006). Speci\ufb01cally, we assume that a Bohm-like di\ufb00usion arises due to resonant scattering by Alfv\u00b4 en waves self-excited by the CR streaming instability, and adopt simple models for the drift and dissipation of Alfv\u00b4 en waves in the precursor (Jones 1993; Kang & Jones 2006). In the present paper, using the spherical CRASH code, we have calculated the CR spectrum accelerated at SNRs from Type Ia supernova expanding into a uniform interstellar medium. We considered di\ufb00erent temperature phases of the ISM, since the shock Mach number is the primary parameter that determines the acceleration e\ufb03ciency of DSA. One of the secondary parameters is the fraction of particles injected into the CR population, \u03be, at the gas subshock. Since detailed physical processes that governs the injection are not known well, we considered two injection recipes that are often adopted by previous authors. The main di\ufb00erence between the two recipes is whether the ratio of injection momentum to thermal peak momentum, i.e., pinj/pth, is constant or depends on the subshock Mach number. It turns out the CR acceleration and the evolution of SNRs are insensitive to such di\ufb00erence as long as the injection fraction is similar. For example, the models with injection recipe A with \u01ebB = 0.23 and the models with injection recipe B with Rinj = 3.6 show almost the same results with similar injection fractions, \u03be \u224810\u22123.5 \u221210\u22123. In general the DSA is very e\ufb03cient for strong SNR shocks, if the injection fraction, \u03be > \u223c10\u22123.5. The CR spectrum at the subshock shows a strong concavity, not only because the shock structure is modi\ufb01ed nonlinearly by the dominant CR pressure, but also because the SNR shock slows down in time during the ST stage. Thus the concavity of the CR spectrum in SNRs is more pronounced than that in plane-parallel shocks. However, the volume integrated spectrum, G(p), (i.e., the spectrum of CRs con\ufb01ned by the shock including the particles in the upstream region) is much less concave, which is consistent with previous studies (e.g., Berezhko & V\u00a8 olk 1997). We have shown also that G(p) approaches roughly to time-asymptotic states, since the CR pressure decreases as t\u22126/5 while the volume increases as R3 ST \u221dt6/5. This in turn makes the total CR energy converted (ECR) asymptotes to a constant value. 
If we assume that CRs are released at the break-up of SNRs, then the source spectrum can be modeled as N(E)dE = G(p)p2dp. However, it is a complex unknown problem how to relate G(p) to the source spectrum N(E) and further to the observed spectrum J(E). In the warm ISM models (T0 = 3 \u00d7 104K, nH = 0.3cm\u22123), the CR acceleration at SNRs may be too ef\ufb01cient. More than 40% of the explosion energy (Eo) is tranferred to CRs and the source CR spectrum, N(E) \u221dE\u2212\u03b1 with \u03b1 \u22481.5, is too \ufb02at to be consistent with the observed CR spectrum at Earth (Ave et al. 2009). In these models with e\ufb03cient injection and acceleration, the \ufb02ow structure is signi\ufb01cantly modi\ufb01ed with \u03c12/\u03c10 \u22487.2-7.5 for WA/WB models. In the intermediate temperature ISM models (T0 = 105K, nH = 0.03cm\u22123), the \ufb02ow structure is still significantly modi\ufb01ed with \u03c12/\u03c10 \u22485.7-6.0 and the fraction of energy conversion, ECR/E0 \u22480.2 \u22120.4 for MA/MB models. Only in the hot ISM model (T0 = 106K, nH = 0.003cm\u22123) with ine\ufb03cient injection (\u01ebB = 0.2 or Rinj > 3.8), the shock structure is almost test-particle like with \u03c12/\u03c10 \u22484.2-4.4 and the fraction of energy conversion, ECR/E0 \u22480.1 \u22120.2 for HA/HB models. The predicted source spectrum G(p) has a slope q = 4.1\u22124.3 for 1011 < E < 1015 eV. Here drift of Alfv\u00b4 en waves relative to the bulk \ufb02ow upstream of the subshock plays an important role, since the modi\ufb01ed test-particle slope, qtp = 3(u0\u2212vA)/(u0\u2212vA\u2212u2), can be steeper than the canonical value of q = 4 for strong unmodi\ufb01ed shocks. With magnetic \ufb01elds of B0 = 30\u00b5G, the Alfv\u00b4 en speed is vA \u22481000kms\u22121, and so the modi\ufb01ed test-particle slope is \u03b1 \u22482.3. This may imply that SN exploding into the hot ISM are the dominant sources of Galactic CRs below 1015eV. One might ask if the magnetic \ufb01eld ampli\ufb01cation would take place in the case of such ine\ufb03cient acceleration, since the magnetic \ufb01eld energy density is expected to be proportional to the CR pressure. An alternative way to enhance the downstream magnetic \ufb01eld was suggested by Giacalone & Jokipii (2007). They showed that the density \ufb02uctuations preexisting upstream can warp the shock front and vortices are generated behind the curved shock surface. Then vortices are cascade into turbulence which ampli\ufb01es magnetic \ufb01elds via turbulence dynamo. Finally, in all models considered in this study, for Bohm-like di\ufb00usion with the ampli\ufb01ed magnetic \ufb01eld in the precursor, indicated by X-ray observations of young SNRs, the particles could be accelerated to Emax \u2248 1015.5ZeV. The drift and dissipation of faster Alfv\u00b4 en waves in the precursor, on the other hand, soften the CR spectrum and reduce the CR acceleration e\ufb03ciency. ACKNOWLEDGEMENTS The author would like to thank T. W. Jones and P. Edmon for helpful comments on the paper and Kavli Institute for Theoretical Physics (KITP) for their hospitality and support, where some parts of this work were carried out during Particle Acceleration in Astrophysical Plasmas 2009 program. This work was supported by National Research Foundation of Korea Grant funded by the Korean Government (2009\f14 KANG 0075060)." 
+ }, + { + "url": "http://arxiv.org/abs/0901.1702v1", + "title": "Self-Similar Evolution of Cosmic-Ray Modified Shocks: The Cosmic-Ray Spectrum", + "abstract": "We use kinetic simulations of diffusive shock acceleration (DSA) to study the\ntime-dependent evolution of plane, quasi-parallel, cosmic-ray (CR) modified\nshocks. Thermal leakage injection of low energy CRs and finite Alfv\\'en wave\npropagation and dissipation are included. Bohm diffusion as well as the\ndiffusion with the power-law momentum dependence are modeled. As long as the\nacceleration time scale to relativistic energies is much shorter than the\ndynamical evolution time scale of the shocks, the precursor and subshock\ntransition approach the time-asymptotic state, which depends on the shock sonic\nand Alfv\\'enic Mach numbers and the CR injection efficiency. For the diffusion\nmodels we employ, the shock precursor structure evolves in an approximately\nself-similar fashion, depending only on the similarity variable, x/(u_s t).\nDuring this self-similar stage, the CR distribution at the subshock maintains a\ncharacteristic form as it evolves: the sum of two power-laws with the slopes\ndetermined by the subshock and total compression ratios with an exponential\ncutoff at the highest accelerated momentum, p_{max}(t). Based on the results of\nthe DSA simulations spanning a range of Mach numbers, we suggest functional\nforms for the shock structure parameters, from which the aforementioned form of\nCR spectrum can be constructed. These analytic forms may represent approximate\nsolutions to the DSA problem for astrophysical shocks during the self-similar\nevolutionary stage as well as during the steady-state stage if p_{max} is\nfixed.", + "authors": "Hyesung Kang, Dongsu Ryu, T. W. Jones", + "published": "2009-01-13", + "updated": "2009-01-13", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "Introduction Di\ufb00usive shock acceleration (DSA) is widely accepted as the primary mechanism through which cosmic rays (CRs) are produced in a variety of astrophysical environments (Bell 1978; Drury 1983; Blandford & Eichler 1987). The most attractive feature of the DSA theory is the simple prediction of the power-law momentum distribution of CRs, f(p) \u221dp\u22123\u03c3/(\u03c3\u22121) (where \u03c3 is the shock compression ratio) in the test particle limit. For strong, adiabatic gas shocks, this gives a power-law index of 4, which is reasonably close to the observed, \u2018universal\u2019 index of the CR spectra in many environments. However, it was recognized early on, through both analytical and numerical calculations, that the DSA can be very e\ufb03cient and that there are highly nonlinear back-reactions from CRs to the underlying \ufb02ows that modify the spectral form, as well (e.g., Malkov & Drury 2001, for a review). In such CR modi\ufb01ed shocks, the pressure from CRs di\ufb00using upstream compresses and decelerates the gas smoothly before it enters the dissipative subshock, creating a shock precursor and governing the evolution of the \ufb02ow velocity in the precursor. On the other hand, it is primarily the \ufb02ow velocity through the precursor and the subshock that controls the thermal leakage injection and the DSA of CRs. Hence the dynamical structure of the \ufb02ow and the energy spectrum of CRs must evolve together, in\ufb02uencing each other in a self-consistent way. 
It is formation of the precursor that causes the momentum distribution of CRs to deviate from the simple test-particle power-law distribution. With a realistic momentum-dependent di\ufb00usion, \u03ba(p), the particles of di\ufb00erent momenta, p, experience di\ufb00erent compressions, depending on their di\ufb00usion length, ld(p) = \u03ba(p)/us (where us is the shock speed). The particles just above the injection momentum (pinj) sample mostly the compression across the subshock (\u03c3s), while those near the highest momentum (pmax) experience the greater, total compression across the entire shock structure (\u03c3t). This leads to the particle distribution function that behaves as f(p) \u221dp\u22123\u03c3s/(\u03c3s\u22121) for p \u223cpinj, but \ufb02attens gradually to f(p) \u221dp\u22123\u03c3t/(\u03c3t\u22121) toward p \u223cpmax (Du\ufb00y et al. 1994). Analytic solutions for f(p) at the shock have been found in steady-state limits under special conditions; for example, the case of a constant di\ufb00usion coe\ufb03cient (Drury et al. 1982) and the case of steady-state shocks with a \ufb01xed pmax above which particles escape from the system (Malkov 1997, 1999; Amato & Blasi 2005, 2006). In these treatments, the self-consistent solutions involve rather complicated transformations and integral equations, so are di\ufb03cult to use in general, although they do provide important insights. In particular, Malkov (1999) showed that in highly modi\ufb01ed, strong, steady shocks (\u03c3t \u226b1) with a \ufb01xed pmax, the spectrum of CRs \ufb02attens to f(p) \u221dp\u22123.5 for \u03ba(p) \u221dp\u03b1 with \u03b1 > 1/2. He also argued that the form of the CR spectrum is universal under these conditions, independent \f\u2013 3 \u2013 of \u03ba(p) and \u03c3t. In an e\ufb00ort to provide more practical description Berezhko & Ellison (1999) presented a simple approximate model of the CR spectrum at strong, steady shocks in planeparallel geometry. They adopted a three-element, piece-wise power-law form to represent the spectrum at non-relativistic, intermediate, and highly relativistic energies. And they demonstrated that this model approximately represents the results of their Monte Carlo simulations. In Kang & Jones (2007) (Paper I), from kinetic equation simulations of DSA in planeparallel shocks with the Bohm-like di\ufb00usion (\u03ba \u221dp), we showed that the CR injection rate and the postshock states approach time-asymptotic values, even as the highest momentum pmax(t) continues to increase with time, and that such shocks then evolve in a \u201cself-similar\u201d fashion. We then argued that the nonlinear evolution of the shock structure and the CR distribution function in this stage may be described approximately in terms of the similarity variables, \u03be = x/(ust) and Z \u2261ln(p/pinj)/ ln[pmax(t)/pinj]. Based on the self-similar evolution, we were able to predict the time-asymptotic value of the CR acceleration e\ufb03ciency as a function of shock Mach number for the assumed models of the thermal leakage injection and the wave transportation. In those simulations we assumed that the self-generated waves provide scatterings su\ufb03cient enough to guarantee the Bohm-like di\ufb00usion, and that the particles do not escape through either an upper momentum boundary or a free-escape spatial boundary. 
So the CR spectrum extended to ever higher momenta, but at the same time the particles with the highest momentum spread over the increasing di\ufb00usion length scale as lmax \u221d\u03ba(pmax)/us \u221dpmax \u221dt. We note that in Paper I we considered plane-parallel shocks with shock Mach number, 2 \u2264M0 \u226480, propagating into the upstream gas with either T0 = 104K or 106 K, since we were interested mainly in cosmic structure formation shocks. The simplicity of the results in Paper I suggested that it might be possible to obtain an approximate analytic expression for the CR spectrum in such shocks, but the simulations presented in that paper were not su\ufb03cient to address that question. Thus we further carried out an extensive set of simulations to explore fully the time-dependent behavior of the CR distribution in CR modi\ufb01ed shocks with shock Mach numbers M0 \u226510. In this paper, from the results of these simulations, we suggest practical analytic expressions that can describe the shock structure and the energy spectrum of accelerated particles at evolving CR modi\ufb01ed shocks in plane-parallel geometry, in which the Bohm-like di\ufb00usion is valid. In realistic shocks, however, once the di\ufb00usion length lmax becomes comparable to the curvature of shocks, or when the growth of waves generated by the CR streaming instability is ine\ufb03cient, the highest energy particles start to escape from the system before they are scattered and advected back through the subshock. In such cases, pmax is \ufb01xed, and the \f\u2013 4 \u2013 CR spectrum and the shock structure evolve into steady states. So, for comparison, we carried out additional simulations for analogous shocks in which the particles are allowed to escape from the system once they are accelerated above an upper momentum boundary, pub. Those shocks achieve true steady states and the shock structure and the CR distribution become stationary with forms similar to those maintained during the self-similar stage of shock evolution. In this sense, our solution is consistent with the analytic solutions for steady state shocks obtained in the previous papers mentioned above. In the next section we describe the numerical simulations and results. The approximate formula for the CR spectrum will be presented and discussed in \u00a73, followed by a summary in \u00a74. We also include an appendix that presents simple analytic and empirical expressions that can be used to characterize the dynamical properties of CR modi\ufb01ed shocks. 2. Numerical Calculations 2.1. Basic equations In our kinetic simulations of DSA, we solve the standard gasdynamic equations with the CR pressure terms in the conservative, Eulerian form for one-dimensional plane-parallel geometry (Kang et al. 2002; Kang & Jones 2005, 2007), \u2202\u03c1 \u2202t + \u2202(u\u03c1) \u2202x = 0, (1) \u2202(\u03c1u) \u2202t + \u2202(\u03c1u2 + Pg + Pc) \u2202x = 0, (2) \u2202(\u03c1eg) \u2202t + \u2202 \u2202x(\u03c1egu + Pgu) = \u2212u\u2202Pc \u2202x + W(x, t) \u2212L(x, t), (3) where Pg and Pc are the gas and CR pressures, respectively, eg = Pg/[\u03c1(\u03b3g \u22121)] + u2/2 is the total gas energy per unit mass. The remaining variables, except for L and W, have the usual meanings. The injection energy loss term, L(x, t), accounts for the energy carried away by the suprathermal particles injected into the CR component at the subshock and is subtracted from the postshock gas immediately behind the subshock. 
The gas heating due to the Alfv\u00b4 en wave dissipation in the upstream region is represented by the term W(x, t) = \u2212vA \u2202Pc \u2202x , (4) where vA = B/\u221a4\u03c0\u03c1 is the local Alfv\u00b4 en speed (Paper I). These equations can be used to describe parallel shocks, where the large-scale magnetic \ufb01eld is aligned with the shock normal and the pressure contribution from the turbulent magnetic \ufb01elds can be neglected. \f\u2013 5 \u2013 The CR population is evolved by solving the di\ufb00usion-convection equation for the pitchangle-averaged distribution function, f(x, p, t), in the form, \u2202g \u2202t + (u + uw)\u2202g \u2202x = 1 3 \u2202 \u2202x(u + uw) \u0012\u2202g \u2202y \u22124g \u0013 + \u2202 \u2202x \u0014 \u03ba(x, y)\u2202g \u2202x \u0015 , (5) where g = p4f and y = ln(p) (Skilling 1975a). Here, \u03ba(x, p) is the spatial di\ufb00usion coe\ufb03cient. The CR population is isotropized with respect to the local Alfv\u00b4 enic wave turbulence, which would in general move at a speed uw with respect to the plasma. Since the Alfv\u00b4 en waves upstream of the subshock are expected to be established by the streaming instability, the wave speed is set there to be uw = vA. Downstream, it is likely that the Alfv\u00b4 enic turbulence is nearly isotropic, so we use uw = 0 there. We consider two models for CR di\ufb00usion: Bohm di\ufb00usion and power-law di\ufb00usion, \u03baB = \u03ba\u2217 \u0012\u03c10 \u03c1 \u0013\u03bd p2 p p2 + 1 , \u03bapl = \u03ba\u2217 \u0012\u03c10 \u03c1 \u0013\u03bd p\u03b1, (6) with \u03b1 = 0.5 \u22121. Hereafter, the momentum is expressed in units of mpc, where mp is the proton mass and c is the speed of light. So, \u03ba\u2217is a constant of dimensions of length squared over time. As in our previous studies, we consider di\ufb00usion both without and with a density dependence, \u03c10/\u03c1; that is, either \u03bd = 0 or \u03bd = 1. The latter case quenches the CR acoustic instability (Drury 1984) and approximately accounts for the compressive ampli\ufb01cation of Alfv\u00b4 en waves. Since we do not follow explicitly the ampli\ufb01cation of magnetic \ufb01elds due to streaming CRs, we simply assume that the \ufb01eld strength scales with compression and so the di\ufb00usion coe\ufb03cient scales inversely with density. Bohm-like di\ufb00usion is an idealization of what is expected in a dynamically evolving CR modi\ufb01ed shock. As discussed in \u00a72.3 the diffusion coe\ufb03cient, which results from resonant scattering with Alfv\u00b4 en waves, varies inversely with the intensity of the resonant waves. The wave intensity is expected to be ampli\ufb01ed as the shock evolves from upstream, ambient values via the streaming instability. Bohm di\ufb00usion represents the simplest nonlinear limited model for that process. The very highest momentum CRs will encounter ambient wave intensities, so perhaps below levels implied by Bohm di\ufb00usion. The model assumes that the streaming instability quickly ampli\ufb01es those waves to nonlinear levels (e.g., Skilling 1975b; Lucek & Bell 2000). We label the quantities upstream of the shock precursor by the subscript \u20180\u2019, those immediately upstream of the gas subshock by \u20181\u2019, and those downstream by \u20182\u2019. So, \u03c10, for example, stands for the density of the upstream gas. Equations (1), (2), (3), and (5) are simultaneously integrated by the CRASH (CosmicRay Acceleration SHock) code. 
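To make concrete what integrating equation (5) involves, the following toy sketch advances g(x, y) with a first-order explicit scheme (upwind advection, centered diffusion) on a frozen, smoothed velocity profile. Everything in it, the profile shape, the grid sizes, κ_*, and the crude injection seed, is an assumption chosen only for illustration; it is not the CRASH algorithm, which uses adaptive mesh refinement and subgrid shock tracking.

```python
import numpy as np

# Toy explicit update of eq. (5): dg/dt + u dg/dx = (1/3)(du/dx)(dg/dy - 4g)
#                                               + d/dx( kappa dg/dx ),
# with g = p^4 f, y = ln p, wave speed u_w = 0, and a frozen flow profile u(x).
nx, ny = 400, 48
x = np.linspace(-0.5, 0.5, nx)
y = np.linspace(np.log(1e-2), np.log(1e2), ny)        # y = ln p
p = np.exp(y)
dx, dy = x[1] - x[0], y[1] - y[0]

kappa_star = 1e-4                                     # assumed normalization
kappa = kappa_star * (p**2 / np.sqrt(p**2 + 1.0))[None, :] * np.ones((nx, 1))

# Smoothed shock: u = -1 far upstream (x > 0), -0.25 downstream (assumed jump of 4)
u = -0.625 + 0.375 * np.tanh(x / 0.01)
dudx = np.gradient(u, dx)

g = np.zeros((nx, ny))
g[nx // 2, 0] = 1.0                                   # crude injection seed

dt = 0.25 * min(dx / np.abs(u).max(), dx**2 / (2.0 * kappa.max()))
for _ in range(5000):
    dgdx_up = np.diff(g, axis=0, append=g[-1:, :]) / dx   # upwind difference (u < 0)
    dgdy    = np.gradient(g, dy, axis=1)
    adv   = -u[:, None] * dgdx_up
    accel = (dudx[:, None] / 3.0) * (dgdy - 4.0 * g)
    diff  = np.gradient(kappa * np.gradient(g, dx, axis=0), dx, axis=0)
    g += dt * (adv + accel + diff)

print("sum of g after the toy run:", g.sum())
```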
The detailed description of the CRASH code can be found \f\u2013 6 \u2013 in Kang et al. (2002) and Paper I. Three features of CRASH are important to our discussion below. First, CRASH applies an adaptive mesh re\ufb01nement technique around the subshock. So the precursor structure is adequately resolved to couple the gas to the CRs of low momenta, whose di\ufb00usion lengths can be at least several orders of magnitude smaller that the precursor width. Second, CRASH uses a subgrid shock tracking; that is, the subshock position is followed accurately within a single cell on the \ufb01nest mesh re\ufb01nement level. Consequently, the e\ufb00ective numerical subshock thickness needed to compute the spatial derivatives in equation (5) is always less than the single cell size of the \ufb01nest grid. Third, we calculate the exact subshock speed at each time step to adjust the rest frame of the simulation, so that the subshock is kept inside the same grid cell throughout. These three features enable us to obtain good numerical convergence in our solutions with a minimum of computational e\ufb00orts. As shown in Paper I, the CRASH code can obtain reasonably converged dynamical solutions even when the grid spacing in the \ufb01nest re\ufb01ned level is greater than the di\ufb00usion length of the lowest energy particles (i.e., \u2206x8 > ld(pinj)). This feature allows us to follow the particle acceleration for a large dynamic rage of pmax/pinj, typically, \u223c109, although the evolution of the energy spectrum at low energies and the early dynamical evolution of the shock structure may not be calculated accurately. 2.2. Simulation Set-up The injection and acceleration of CRs at shocks depend in general upon various shock parameters such as the Mach number, the magnetic \ufb01eld strength and obliquity angle, and the strength of the Alfv\u00b4 en turbulence responsible for scattering. In this study we focus on the relatively simple case of CR proton acceleration at quasi-parallel shocks, which is appropriately described by equations (1) (3). The details of simulation set-up can be found in Paper I, and only a few essential features are brie\ufb02y summarized here. Except for di\ufb00usion details, the set-up described here is identical to those reported in Paper I. As in Paper I, a shock is speci\ufb01ed by the upstream gas temperature T0 and the initial Mach number M0. Two values of T0, 104 K and 106 K, are considered, representing the warm photoionized gas and the hot shock-heated gas often found in astrophysical environments, respectively. Then the initial shock speed is given as us,i = cs,0M0 = 15 km s\u22121 \u0012 T0 104 \u00131/2 M0, (7) where cs,0 is the sound speed of the upstream gas. All the simulations reported in this paper have M0 = 10, which is large enough to produce signi\ufb01cant CR modi\ufb01cation. In Paper I we considered a wide range of shock Mach numbers and examined the Mach-number \f\u2013 7 \u2013 dependence of the evolution of CR modi\ufb01ed shocks. The CR injection and acceleration e\ufb03ciencies are determined mainly by the sonic Mach number and the relative Alfv\u00b4 en Mach number for shocks with M0 \u227310 (Kang et al. 2002; Kang 2003). On the other hand, they depend sensitively on other model parameters for shocks with lower Mach numbers. In this paper we thus focus on the evolution of the CR spectrum at moderately strong shocks with M0 \u227310. We will consider the more complicated problem of weaker shocks in a separate paper. 
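Equation (7) is trivial to evaluate; the check below reproduces the initial shock speeds used later (150 and 1500 km s^{-1} for M_0 = 10), and also recovers the 15 km s^{-1} prefactor from the upstream sound speed, which requires a mean molecular weight near 0.61, an inference on my part rather than a value stated in this section.

```python
import numpy as np

# Initial shock speed, eq. (7): u_s,i = 15 km/s * (T_0 / 1e4 K)^(1/2) * M_0
def u_shock(T0, M0):
    return 15.0 * np.sqrt(T0 / 1.0e4) * M0      # km/s

for T0 in (1.0e4, 1.0e6):
    print("T0=%.0e K, M0=10 -> u_s,i = %.0f km/s" % (T0, u_shock(T0, 10.0)))

# Cross-check of the 15 km/s prefactor: c_s = sqrt(gamma k_B T / (mu m_H)),
# assuming mu ~ 0.61 (my assumption; not stated in this section).
kB, mH, gamma, mu = 1.3807e-16, 1.6726e-24, 5.0 / 3.0, 0.61
print("c_s(1e4 K) = %.1f km/s" % (np.sqrt(gamma * kB * 1.0e4 / (mu * mH)) / 1.0e5))
```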
In our problem, three normalization units are required for length, time, and mass. While ordinary, one-dimensional, ideal gasdynamic problems do not contain any intrinsic scales, the di\ufb00usion in the DSA problem introduces one; that is, either a di\ufb00usion length or a di\ufb00usion time, which of course depend on the particle momentum. So let p\u2020 be a speci\ufb01c value of the highest momentum that we aim to achieve by the termination time of our simulations. Then the greatest width of the precursor is set by the di\ufb00usion length of the particles with p\u2020, ld(p\u2020) = \u03ba(\u03c10, p\u2020)/us, while the time required for the precursor to reach that width is given by tacc(p\u2020) \u221dld(p\u2020)/us (see eq. [9]). Hence we choose di\ufb00usion length and time for p\u2020, \u02c6 x = \u02c6 \u03ba/\u02c6 u and \u02c6 t = \u02c6 \u03ba/\u02c6 u2, with \u02c6 u = us,i and \u02c6 \u03ba = \u03ba(\u03c10, p\u2020), as the normalization units for length and time. For the normalization units for mass, we choose \u02c6 \u03c1 = \u03c10. Then the normalized quantities become \u02dc x = x/\u02c6 x, \u02dc t = t/\u02c6 t, \u02dc u = u/\u02c6 u, \u02dc \u03ba = \u03ba/\u02c6 \u03ba, and \u02dc \u03c1 = \u03c1/\u02c6 \u03c1. In addition, the normalized pressure is expressed as \u02dc P = P/(\u02c6 \u03c1\u02c6 u2). With these choices, we expect that at time \u02dc t \u223c1, the precursor width would be \u02dc x \u223c\u02dc ld(p\u2020) \u223c1, for example. It should be clear that the physical contents of our normalization are ultimately determined by the value of p\u2020 anticipated to correspond to \u02dc t \u223c1 as well as by the form of \u03ba(\u03c1, p). In the simulations reported here, p\u2020 was selected to give us the maximum span of p that is consistent with our ability to obtain converged results with available computational resources. Our choice of p\u2020 is especially dependent on the nonrelativistic momentum dependence of \u03ba(p). In particular, when the dependence is steep, \u03ba(pinj) and ld(pinj) can become extremely small compared to their relativistic values, necessitating very \ufb01ne spatial resolution around the subshock. In Table 1, we list our numerical models classi\ufb01ed by T0 and \u03ba. For example, T6P1 model adopts T0 = 106 K and \u03bapl with \u03b1 = 1 and \u03bd = 0, while T4Bd model adopts T = 104 K and Bohm di\ufb00usion, \u03baB, with \u03bd = 1. In the power law di\ufb00usion models of T6P1 and T6P1d, p\u2020 \u223c106 is chosen for the normalization, so that \u02dc \u03ba(\u02dc \u03c1 = 1) = \u02dc \u03ba\u2217p = 10\u22126p. For the Bohm di\ufb00usion models, T6Bd and T4Bd, on the other hand, p\u2020 \u223c102 is chosen, because the steep nonrelativistic form of the di\ufb00usion makes those models too costly for us to follow evolution to much higher CR momenta. A speci\ufb01c example can clarify the application of these simulations to real situations. Let us consider a shock with us,i = 1.5 \u00d7 103 km s\u22121 propagating into the interstellar medium \f\u2013 8 \u2013 with B = 5 \u00b5G. Then in the Bohm limit that the relativistic CR scattering length equals the gyroradius, \u03ba\u2217= mpc2/(3eB) = 6.3 \u00d7 1021 cm2 s\u22121. For the T6P1 model, for instance, the normalization constants are \u02c6 u = 1.5\u00d7103 km s\u22121 and \u02c6 \u03ba = 6.3\u00d71027 cm2 s\u22121, so \u02c6 x = 4.2\u00d71019 cm and \u02c6 t = 2.8 \u00d7 1011 s. 
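The worked example above can be reproduced directly; the sketch below evaluates κ_* = m_p c³/(3eB) for B = 5 µG and the resulting code units x̂ = κ̂/û and t̂ = κ̂/û² for the T6P1 choice p† ∼ 10^6. Nothing here goes beyond the quantities already quoted.

```python
# Bohm-limit diffusion normalization and the code units of the worked example.
m_p, c, e = 1.6726e-24, 2.9979e10, 4.8032e-10     # cgs constants

B = 5.0e-6                                        # 5 microgauss
kappa_star = m_p * c**3 / (3.0 * e * B)           # cm^2/s
print("kappa_* = %.2e cm^2/s (text: 6.3e21)" % kappa_star)

u_hat = 1.5e8                                     # u_s,i = 1500 km/s in cm/s
p_dagger = 1.0e6                                  # normalization momentum, T6P1
kappa_hat = kappa_star * p_dagger                 # kappa at p_dagger (p >> 1)
x_hat = kappa_hat / u_hat
t_hat = kappa_hat / u_hat**2
print("x_hat = %.1e cm, t_hat = %.1e s (text: 4.2e19 cm, 2.8e11 s)" % (x_hat, t_hat))
```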
On the other hand, the time evolution of these shocks becomes approximately selfsimilar, as we will demonstrate. In that case the normalization choices above are entirely for the convenience of computation. We will eventually replace even these normalized physical variables with dimensionless similarity variables. To simplify the notation in the meantime, we hereafter drop the tilde from the normalized quantities as de\ufb01ned above. Our simulations start with a purely gasdynamic shock of M0 = 10 at rest at x = 0, initialized according to Rankine-Hugoniot relations with u0 = \u22121, \u03c10 = 1 and a gas adiabatic index, \u03b3g = 5/3. So the initial shock speed is us,i = 1 in code units. There are no pre-existing CRs, i.e., Pc(x) = 0 at t = 0. 2.3. Thermal leakage and Alfv\u00b4 en wave transport Although the shock Mach number is the key parameter that determines the evolution of CR modi\ufb01ed shocks, the thermal leakage injection and the Alfv\u00b4 en wave transport are important elements of DSA. They were discussed in detail in previous papers including Paper I. So here we brie\ufb02y describe only the central concepts to make this paper self-contained. In the CRASH code, the injection of suprathermal particles via thermal leakage is emulated numerically by adopting a \u201ctransparency function\u201d, \u03c4esc(\u01ebB, \u03c5), which expresses the probability of downstream particles at given random velocity, \u03c5, successfully swimming upstream across the subshock through the postshock MHD waves (Kang et al. 2002), whose amplitude is parameterized by \u01ebB. Once such particles cross into the upstream \ufb02ow, they are subject to scattering by the upstream Alfv\u00b4 en wave \ufb01eld, so participate in DSA. The condition that non-zero probability for suprathermal downstream particles to cross the subshock (i.e., \u03c4esc > 0 for p > pinj) e\ufb00ectively selects the lowest momentum of the particles entering the CR population. The velocity \u03c5 obviously must exceed the \ufb02ow speed of the downstream plasma, u2. In addition, leaking particles must swim against the e\ufb00ective pondermotive force of MHD turbulence in the downstream plasma. The parameter, \u01ebB = B0/B\u22a5used to represent this, is the ratio of the magnitude of the large-scale magnetic \ufb01eld aligned with the shock normal, B0, to the amplitude of the postshock wave \ufb01eld that interacts with low energy particles, B\u22a5. It is more di\ufb03cult for particles to swim upstream when the wave turbulence is strong (\u01ebB is small), leading to smaller injection rates. Malkov & V\u00a8 olk (1998) \f\u2013 9 \u2013 argued on plasma physics grounds that it should be 0.25 \u2272\u01ebB \u22720.35. Our own CR shock simulations established that \u01ebB \u223c0.2 \u22120.25 leads to injection fractions in the range of \u223c10\u22124 \u221210\u22123, which are similar to the commonly adopted values in other models (e.g., Malkov 1997; Amato & Blasi 2005). In this study, we use \u01ebB = 0.2 for numerical models, although the choice is not critical to our conclusions. The CR transport in DSA is controlled by the intensity, spectrum and isotropy of the Alfv\u00b4 enic turbulence resonant with CRs. Upstream of the subshock, the Alfv\u00b4 enic turbulence is thought to be excited by the streaming CRs (e.g., Bell 1978; Lucek & Bell 2000). 
Recently there has been much emphasis on the possible ampli\ufb01cation of the large-scale magnetic \ufb01eld via non-resonant wave-particle interactions within the shock precursor (e.g., Bell 2004; Amato & Blasi 2006; Vladmiriov et al. 2006). Those details will not concern us here; we make the simplifying assumption that the Alfv\u00b4 enic turbulence saturates and that scattering isotropizes the CR distribution in the frame moving with the mean Alfv\u00b4 en wave motion (see eq. [5]). Since the upstream waves are ampli\ufb01ed by the CRs escaping upstream, the wave frame propagates in the upstream direction; i.e., uw > 0. Downstream, various processes should isotropize the Alfv\u00b4 en waves (e.g., Achterberg & Blandford 1986), so the wave frame and the bulk \ufb02ow frame coincide; i.e., uw = 0. This transition in uw across the subshock reduces the velocity jump experienced by CRs during DSA. Since it is really the velocity jump rather than the density jump that sets the momentum boost, this reduces the acceleration rate somewhat when the ratio of the upstream sound speed to the Alfv\u00b4 en speed is \ufb01nite. An additional e\ufb00ect that has important impact is dissipation of Alfv\u00b4 en turbulence stimulated by the streaming CRs. That energy heats the in\ufb02owing plasma beyond adiabatic compression. The detailed physics is complicated and nonlinear, but we adopt the common, simple assumption that the dissipation is local and that the wave growth saturates, so that the dissipation rate matches the rate of wave stimulation (see eq. [4]) (Jones 1993; Berezhko & V\u00a8 olk 1997). This energy deposition increases the sound speed of the precursor gas, thus reducing the Mach number of the \ufb02ow into the subshock, again weakening DSA to some degree (e.g., Achterberg 1982). Thus, the CR acceleration becomes less e\ufb03cient, when the Alfv\u00b4 en wave drift and heating terms are included (Berezhko & V\u00a8 olk 1997; Kang & Jones 2006). The signi\ufb01cance of these e\ufb00ects can be parameterized by the ratio of the magnetic \ufb01eld to thermal energy densities, \u03b8 = EB,0/Eth,0, in the upstream region, which scales as the square of the ratio of the upstream Alfv\u00b4 en (\u03c5A) and sound speeds. In Paper I, we considered 0.1 \u2264\u03b8 \u22641; here we set \u03b8 = 0.1. The dependence of shock behaviors on that parameter are outlined in Paper I. The \u03b8 parameter can be related to the more commonly used shock Alfv\u00b4 enic Mach number, MA,0 = us,i/vA,0, and the initial sonic Mach number, M0, as MA,0 = M0 p \u03b3g(\u03b3g \u22121)/(2\u03b8), where vA,0 = B0/\u221a4\u03c0\u03c10. With \u03b3g = 5/3 and \u03b8 = 0.1, this translates \f\u2013 10 \u2013 into MA,0 = 2.36M0. So, for our M0 = 10 shocks, MA,0 \u224824. Our initial shock speeds are us,i = 150 km s\u22121 for T0 = 104 K and us,i = 1500 km s\u22121 for T0 = 106 K, corresponding, then, to vA = 6.4 km s\u22121 and vA = 64 km s\u22121, respectively. For our example magnetic \ufb01eld, B0 = 5 \u00b5G, the associated upstream gas density would be \u03c10 \u22485 \u00d7 10\u221224 g cm\u22123 and \u03c10 \u22485 \u00d7 10\u221226 g cm\u22123, respectively. 3. Results 3.1. Evolution toward an asymptotic state In the early evolutionary stage, as CRs are \ufb01rst injected and accelerated at the subshock, upstream di\ufb00usion creates a CR pressure gradient that decelerates and compresses the in\ufb02owing gas within a shock precursor. 
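Before the shock evolution is followed further, the parameter relations just quoted are easy to verify; the sketch below recovers M_A,0 ≈ 2.36 M_0 for θ = 0.1 and the implied upstream Alfvén speeds and densities for B_0 = 5 µG (the density is recovered from v_A,0 = B_0/√(4πρ_0)).

```python
import numpy as np

# M_A0 = M_0 * sqrt(gamma (gamma - 1) / (2 theta)), with theta = E_B0 / E_th0.
gamma, theta, M0 = 5.0 / 3.0, 0.1, 10.0
MA0 = M0 * np.sqrt(gamma * (gamma - 1.0) / (2.0 * theta))
print("M_A,0 = %.1f (text: ~24, i.e. 2.36 M_0)" % MA0)

# Implied Alfven speed and upstream density for B_0 = 5 muG:
B0 = 5.0e-6                                   # gauss
for us_i in (150.0e5, 1500.0e5):              # cm/s, the two T_0 cases
    vA = us_i / MA0
    rho0 = (B0 / vA)**2 / (4.0 * np.pi)       # from v_A = B_0 / sqrt(4 pi rho_0)
    print("u_s,i=%4.0f km/s -> v_A=%.1f km/s, rho_0=%.1e g/cm^3" %
          (us_i / 1e5, vA / 1e5, rho0))
```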
This leads to a gradual decrease of shock speed with respect to the upstream gas (Fig. 1 [a]-[b]). As the subshock consequently weakens, the CR injection rate decreases due to a reduced velocity jump across the subshock. The CR spectrum near pinj also steepens (Fig. 1 [c]-[d]). The total compression across the entire shock structure actually increases to about 5 in the Mach 10 shocks reported here. The highest momentum CRs respond to the total shock transition, which \ufb02attens the spectrum at higher momenta; i.e., the CR spectrum evolves the well-known concave curvature between the lowest and the highest momenta. Each of these evolutionary features continue to be enhanced until preshock compression, CR injection at the subshock, and CR acceleration through the entire shock structure all reach self-consistent dynamical equilibrium states (Fig. 1 [e]-[f]). Once compression in the precursor reaches the level at which DSA begins to saturate, meaning the reduced subshock strength reduces CR injection to maintain an equilibrium, the shock compression (\u03c3s = \u03c12/\u03c11 and \u03c3t = \u03c12/\u03c10) as well as the gas and CR pressures should remain approximately constant during subsequent shock evolution. From that time on the structure of the precursor and the CR spectrum must evolve in tandem to maintain these dynamical features. The CR pressure is calculated from the particle distribution function by Pc = 4\u03c0 3 mpc2 Z \u221e pinj g(p) p p p2 + 1 dp p . (8) To see how Pc evolves during the early, nonrelativistic stage, consider the idealized the test-particle case where the CR distribution has a power-law form, g(p) = g0(p/pinj)\u2212\u03b4 up to p = pmax, where 0 < \u03b4 \u2261(4 \u2212\u03c3s)/(\u03c3s \u22121) < 0.5 for the shock compression ratio of 4 > \u03c3s > 3. Then one can roughly express Pc \u221d[(pmax/pinj)1\u2212\u03b4 \u22121] \u221d(pmax/pinj)1\u2212\u03b4 for pinj \u226apmax < 1. In a strong, unmodi\ufb01ed shock, 1 \u2212\u03b4 \u22481, and Pc initially increases quickly \f\u2013 11 \u2013 as Pc \u221dpmax/pinj. We will show in \u00a73.3, as the shock becomes modi\ufb01ed toward the dynamical equilibrium state, that the CR pressure is dominated by relativistic particles and the CR spectrum evolves in a manner that leads to nearly constant postshock Pc,2. These features in the evolution of Pc,2 are illustrated in Figure 1 (e) (f). The time-asymptotic states are slightly di\ufb00erent among di\ufb00erent models, because the numerically realized CR injection rate depends weakly on \u03ba(p). The mean acceleration time for a particle to reach pmax from pinj in the test-particle limit of DSA theory is given by (e.g., Drury 1983) tacc = 3 u0 \u2212u2 Z pmax pinj \u0012\u03ba0 u0 + \u03ba2 u2 \u0013 dp p . (9) For power-law di\ufb00usion with density dependence, \u03bapl = \u03ba\u2217p\u03b1(\u03c10/\u03c1)\u03bd, the maximum momentum can be estimated by setting t = tacc as pmax(t) \u2248 \u0014 \u03b1(\u03c3t \u22121) 3\u03c3t(1 + \u03c31\u2212\u03bd t ) u2 s \u03ba\u2217t \u00151/\u03b1 = \u0014 fc u2 s \u03ba\u2217t \u00151/\u03b1 , (10) where fc \u2261\u03b1(\u03c3t \u22121)/ \u0002 3\u03c3t(1 + \u03c31\u2212\u03bd t ) \u0003 is a constant factor during the self-similar stage and us is the shock speed in the time-asymptotic limit. As the feedback from CRs becomes important, the shock speed relative to far upstream \ufb02ow is reduced, typically about 10-20 % for the shock parameters considered here (i.e., us \u2248[0.8 \u22120.9]us,i). 
With \u03b1 = 1 and \u03bd = 1, for a typical value of \u03c3t \u22485.3 for a M0 = 10 shock, fc \u22480.13. In an evolving CR shock, at a given shock age of t, the power-law spectrum should extend roughly to pmax(t) above which it should decrease exponentially. Then the di\ufb00usion length of the most energetic particles increases linearly with time as lmax(t) \u2261\u03ba\u2217p\u03b1 max(t) us = fcust. (11) So lmax(t) depends only on the characteristic length ust, independent of the size of the di\ufb00usion coe\ufb03cient, although at a given time the particles are accelerated to higher energies with smaller values of \u03ba\u2217. Since the precursor scale height is proportional to lmax, the precursor broadens linearly with time, again independent of the size of \u03ba\u2217. This is valid even for the Bohm di\ufb00usion if pmax \u226b1, since \u03baB \u2248\u03ba\u2217p for p \u226b1. Thus, the hydrodynamic structure of evolving CR shocks does not depend on the di\ufb00usion coe\ufb03cient, even though the CR di\ufb00usion introduces the di\ufb00usion length and time scales in the problem. \f\u2013 12 \u2013 3.2. Shock structure and CR spectrum in self-similar stage After the precursor growth reaches a time-asymptotic form, the shock structure follows roughly the self-similar evolution and stretches linearly with time, as noted above. Thus, we show in Figure 2 the evolution of a M0 = 10 shock with T6P1d model in terms of the similarity variable, \u03be = x/(us,it), for t > 1 (i.e., later stage of the shock shown in Fig. 1). The time-asymptotic shock speed approaches us = u0 \u22480.9us,i for these shock parameters. The reduction in shock speed results from the increase in \u03c3t, so depends upon the degree of shock modi\ufb01cation. Here \u03c3t \u22485.3, \u03b1 = 1, \u03bd = 1, so equation (11) give lmax \u22480.13ust, which corresponds to the precursor scale height in terms of \u03be, H\u03be \u2261lmax/(us,i t) \u22480.12. We also show the approximate self-similar evolution of the shock structure for four additional models with \u03ba(\u03c1, p) listed in Table 1 (Fig. 3). As discussed in \u00a73.1, the overall shock structure at a given time t is roughly independent of the di\ufb00usion coe\ufb03cient, except for some minor details in the shock pro\ufb01le that have developed in the early stage. Also the shock evolution seems to be approximately self-similar in all the models, as shown in the middle and right panels of Figure 3. Of course, with di\ufb00erent values of \u03ba\u2217and \u03b1, on the other hand, the highest momentum of the CR spectrum at a given time depends on \u03ba (see Fig. 5). Figure 4 (a)-(b) shows how the particle distribution at the subshock, gs(p) = f(xs, p)p4, evolves during the self-similar stage, extending to higher pmax. For this model equation (10) gives pmax \u2248(0.1/\u03ba\u2217) t = 105 t. This estimate is quite consistent with the evolution of gs(p) shown in this \ufb01gure. The peak value of gs(p) near pmax seems to remain constant during the self-similar stage. This re\ufb02ects the fact that Pc,2 remains constant, as it must once DSA is saturated, and the fact that Pc is dominated by relativistic CRs near pmax for strong shocks. The injection momentum, pinj \u221d p Pg,2/\u03c12, becomes constant in time after the initial adjustment, because the postshock state is \ufb01xed in the self-similar evolution stage. 
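The numbers quoted in this subsection follow directly from equations (10) and (11). A minimal check, using only the parameters quoted in the text for the T6P1d-like case (σ_t ≈ 5.3, α = ν = 1, κ_* = 10^{-6}, u_s ≈ 0.9 u_s,i in code units):

```python
# f_c = alpha (sigma_t - 1) / [3 sigma_t (1 + sigma_t^(1 - nu))], from eq. (10)-(11).
def f_c(sigma_t, alpha=1.0, nu=1.0):
    return alpha * (sigma_t - 1.0) / (3.0 * sigma_t * (1.0 + sigma_t**(1.0 - nu)))

sigma_t = 5.3
fc = f_c(sigma_t)
print("f_c = %.3f (text: ~0.13)" % fc)

# Code units: u_s,i = 1, u_s ~ 0.9, kappa_* = 1e-6 (T6P1d-like normalization).
us_i, us, kappa_star, alpha = 1.0, 0.9, 1.0e-6, 1.0
pmax_coeff = (fc * us**2 / kappa_star)**(1.0 / alpha)
H_xi = fc * us / us_i                   # l_max / (u_s,i t) = f_c u_s / u_s,i
print("p_max(t) ~ %.1e * t (text: ~1e5 t)" % pmax_coeff)
print("H_xi = %.2f (text: ~0.12)" % H_xi)
```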
Then the value of gs(pinj) is \ufb01xed by gs,th(pinj), the thermal distribution of the postshock gas at pinj, and stays constant, too. Let us suppose particles with a given momentum p1 experience on average the velocity jump over the di\ufb00usion length \u03be1 = ld(p1)/(us t1), \u2206u(\u03be1), at time t1. At a later time t they will be accelerated to p = p1 \u00b7 (t/t1)1/\u03b1 and di\ufb00use over the scale, \u03be = ld(p)/(us t) = \u03be1. So they experience the same velocity jump \u2206u(\u03be1), as long as the velocity pro\ufb01le, u(\u03be), remains constant during the self-similar stage. Then the spectral slopes plotted in terms of p/pmax should retain a similar shape over time. The slope of the distribution function at the subshock, q = \u2212d ln gs/d ln p+4, and the slope of the volume integrated distribution function, Q = \u2212d ln G/d ln p + 4 (where G = R gdx), as a function of p/pmax(t) are shown in Figure 4 \f\u2013 13 \u2013 (d). Low energy particles near pinj experience the subshock compression only, while highest momentum particles near pmax feel the total shock compression. So q(p) \u2248qs = 3\u03c3s/(\u03c3s \u22121) for p \u223cpinj, while q(p) \u2248qt = 3\u03c3t/(\u03c3t \u22121) for p \u223cpmax. The numerical results are roughly consistent with such expectations. Consequently, to a good approximation, gs(p) evolves with \ufb01xed amplitudes, gs(pinj) and gs(pmax), and with \ufb01xed spectral slopes, qs and qt at pinj and pmax, respectively, while stretching to higher pmax(t). The volume integrated distribution function, G(p), also displays a similar behavior as gs(p). In the bottom panels of Figure 4, G(p)/t and G(Z)/t are shown, noting that the kinetic energy passed through the shock front increases linearly with time. In Paper I, based on the DSA simulation results for t \u226410, we suggested that the distribution function may become self-similar in terms of the momentum similarity variable, Z, de\ufb01ned in \u00a71. If we de\ufb01ne the \u201cpartial pressure function\u201d as F(Z) \u2261g(Z) p p p2 + 1 ln \u0012pmax pinj \u0013 , (12) then the CR pressure is given by Pc \u221d R \u221e 0 F(Z)dZ. We suggested there that the postshock CR pressure stays constant because the evolution of F(Z) becomes self-similar. As can be seen in Figure 4 (b)-(c), the functions gs(Z) and Fs(Z) at the subshock seem to change very slowly, giving the false impression that Fs(Z) might be self-similar in terms of the variable Z. However, the constant shape of F(Z) cannot be compatible with the self-similar evolution of the precursor and shock pro\ufb01le. Since fs(p) \u221d(p/pinj)\u2212qs at Z \u223c0 and fs(p) \u221d(p/pmax)\u2212qt at Z \u223c1 with constant values of pinj, qs, and qt, the shape of F(Z) should evolve accordingly in the self-similar stage (see Fig. 9 below). Figure 5 shows how the evolution of gs(p) depends on the di\ufb00usion coe\ufb03cient and preshock temperature, while other parameters, M0 = 10, \u01ebB = 0.2, and \u03b8 = 0.1, are \ufb01xed. The same set of models is shown as in Figure 3. The shape of gs(p) is somewhat di\ufb00erent among di\ufb00erent models, although it seems to remain similar in time for a given model. The causes of such di\ufb00erences can be understood as follows. First of all, the value of gs(pinj) \u2248gs,th(pinj) depends on the value of pinj \u221d(us/c) \u221dM0 \u221aT0. 
Secondly, the numerically realized \u201ce\ufb00ective\u201d value of the injection momentum depends on the di\ufb00usion coe\ufb03cient and grid spacing, leading to slightly di\ufb00erent injection rates and shock structures. Thus the postshock Pc,2 and the compression ratios (i.e., the shock structure) depend weakly on di\ufb00usion coe\ufb03cient, as shown in Figure 1 (e)-(f). The ensuing CR spectra have slightly di\ufb00erent values of qs and qt as shown in Figure 5. The spectral slope of the CR spectrum is determined by the mean velocity jump that the particles experience across the shock structure. Here we examine how the precursor \f\u2013 14 \u2013 velocity pro\ufb01le depends on the di\ufb00usion model. Figure 6 (a) shows the velocity structure U(\u03be) = \u2212u(\u03be) in the precursor (\u03be > 0) for \ufb01ve di\ufb00erent di\ufb00usion models, where u(\u03be) is de\ufb01ned as shown in Figure 2. We use the velocity data in the \ufb01nest-level grid as well as in the base grid. The velocity pro\ufb01les are quite similar in all the models except that the model with \u03ba \u221dp1/2 shows a slightly di\ufb00erent pattern at small scales (log \u03be < \u22125). Since the particles with momentum p feel on average the velocity jump over the corresponding di\ufb00usion length, we can \ufb01nd the velocity U(\u03bep) at the distance from the shock that satis\ufb01es x = ld(p) = \u03bep \u00b7 (us,it). Using equation (10), we \ufb01nd then \u03bep = fc(us/us,i)(p/pmax)\u03b1. Then the particles with the same ratio of p/pmax di\ufb00use over the same similarity scale, \u03bep, and feel the same velocity jump, U(\u03bep) + uw(\u03bep) \u2212U2 across the shock. Thus the spectral slope can be estimated from the velocity pro\ufb01le as (e.g., Berezhko & Ellison 1999) qu(p) = qu(\u03bep) = 3(U + uw) U + uw \u2212U2 + d ln(U + uw \u2212U2) d ln p . (13) Figure 6 (b) shows the spectral slope, qu, which is calculated from numerical results of U +uw for di\ufb00erent models. These curves compare to the q(p) curves in Figure 5. The numerical convergence issue should be discussed here. The base grid had a spatial resolution \u2206x0 = 2 \u00d7 10\u22123 in the code units. The small region around the subshock was re\ufb01ned with a number of levels increasing to eight, giving there a spatial resolution \u2206x8 = 7.8 \u00d7 10\u22126. This structure was su\ufb03cient to produce dynamically converged solutions as discussed in Paper I. The di\ufb00usion length near pinj \u224810\u22122 is, for instance, ld(pinj) \u2248\u03ba(pinj)/us,i \u224810\u22128 in T6P1d model and ld(pinj) \u22482 \u00d7 10\u22125 in T6P1/2 model, where all quantities are given in the code units. So the solution for equation (5) is not resolved for the lowest energy particles in T6P1d model, while it should be well resolved in T6P1/2 model. Since low energy particles cannot see the \ufb02ow structure shorter than the minimum numerical thickness of the subshock, i.e., \u2206x8, corresponding to the e\ufb00ective di\ufb00usion length of p \u223c10 for T6P1d model, all particles below p < 10 feel the same subshock compression, independent of their di\ufb00usion lengths. This leads to a more or less constant q(p) \u2248qs for p < 10. The models shown in Figures 4 and 5 exhibit this trend except T6P1/2 model in which the di\ufb00usion of the injected particles are well resolved with \u2206x8/ld(pinj) = 0.4. The momentum integration of g(x, p), i.e., the CR pressure, is self-similar in the spatial similarity variable \u03be. 
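As an aside, the slope estimate of equation (13) is easy to prototype. The sketch below is illustrative only: it uses a made-up exponential precursor profile U(\u03be) and a constant wave drift speed in place of the simulated profiles, and the parameter values (fc, U2, vA, H\u03be) are assumptions rather than values taken from the runs.

```python
import numpy as np

f_c, alpha = 0.13, 1.0
u_si, u_s = 1.0, 0.9            # initial and time-asymptotic shock speeds (code units)
U2 = u_s / 5.0                  # downstream flow speed for a total compression of ~5
v_A = 0.05                      # assumed constant Alfven drift speed u_w in the precursor

p_over_pmax = np.logspace(-4, 0, 200)
xi_p = f_c * (u_s / u_si) * p_over_pmax**alpha       # xi_p = f_c (u_s/u_s,i)(p/p_max)^alpha

# toy precursor profile: U rises from the subshock value U1 to u_s over the scale H_xi
U1, H_xi = 0.55 * u_s, 0.12
U = u_s - (u_s - U1) * np.exp(-xi_p / H_xi)
W = U + v_A                                           # U + u_w felt at the scale xi_p

lnp = np.log(p_over_pmax)
q_u = 3.0 * W / (W - U2) + np.gradient(np.log(W - U2), lnp)   # eq. (13)

print("q_u at low p   :", round(q_u[0], 2))
print("q_u near p_max :", round(q_u[-1], 2))
```

With any monotonic precursor profile the slope decreases from the subshock-dominated value at small p toward a flatter value near pmax, which is the qualitative behavior compared against Figure 5.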
Moreover, the CR distribution at the subshock, gs(Z), and the volume integrated distribution, G(Z), both change very slowly in time, when they are expressed in terms of Z. So we expect that the distribution function g in the plane of (\u03be, Z) should change only secularly during the self-similar stage, although, as mentioned before, g(Z) does not evolve self-similarly in the Z space (Fig. 7). The phase space distribution of g(\u03be, Z) shows \f\u2013 15 \u2013 that most of low energy particles (Z < 0.5) are con\ufb01ned within \u22120.2 \u2272\u03be \u22720.1, while the highest energy particles (Z \u223c1) di\ufb00use over \u22121 \u2264\u03be \u22641. Thus far away from the subshock, both downstream and upstream, relativistic particles dominate the CR energy spectrum. 3.3. Analytic approximation for CR spectrum Based on the results of DSA simulations described in the previous subsections, we suggest that the CR spectrum at CR shocks with M0 \u227310 in the self-similar stage can be approximated by the sum of two power-law functions with an exponential cuto\ufb00as follows: for pmax \u226b1 \u226bpinj, gs(p) = \" g0 \u00b7 \u0012 p pinj \u0013\u2212qs+4 + g1 \u00b7 \u0012 p pmax \u0013\u2212qt+4# exp \" \u2212 \u0012 p 1.5pmax \u00132\u03b1# , (14) where qs > 4 and qt < 4. The speci\ufb01c functional form of the exponential cuto\ufb00was found by \ufb01tting the numerical simulation results (see. Figs. 4-5). We have shown that, after the precursor has developed fully, the CR pressure at the subshock approaches a time-asymptotic value, which leads to the self-similar evolution of the entire shock structure. Then the parameters, pinj, qs and qt as well as g0 \u2248gs,th(pinj), become constant in time. Also, the value of g1 seems to stay roughly constant, according the simulation results. We will show below g1 has to be approximately constant, if Pc,2 remains constant during the self-similar stage. Then the only time-dependent parameter in equation (14) is pmax(t), which can be estimated from equation (10). Now let us examine how Pc,2 evolves in time with the proposed form of gs(p) as pmax increases to large values. Adopting \u03b1 = 1, the contributions due to the low and high energy components can be calculated as PL \u2261 Z pmax pinj g0 \u0012 p pinj \u0013\u2212qs+4 exp \" \u2212 \u0012 p 1.5pmax \u00132# p p p2 + 1 dp p , PH \u2261 Z pmax pinj g1 \u0012 p pmax \u0013\u2212qt+4 exp \" \u2212 \u0012 p 1.5pmax \u00132# p p p2 + 1 dp p . (15) In Figure 8, we show the values of PL/g0 and PH/g1 as a function of pmax for several values of qs and qt and pinj = 10\u22122. In M0 = 10 shocks the typical values of the compression ratios are \u03c3s \u22483.1 and \u03c3t \u22485.0, so qs \u22484.4 and qt \u22483.75. The plot shows that both PL/g0 and PH/g1 become constant as pmax becomes ultra-relativistic, if the shock \ufb02ow is modi\ufb01ed so that \u03c3s \u21923 and \u03c3t \u226b4. This explains why Pc,2 approaches an asymptotic value as pmax \f\u2013 16 \u2013 becomes large, leading to the self-similar evolution stage, after the subshock weakens to the subshock Mach number, M1 \u223c3 \u22124 and the total compression becomes greater than 4. Therefore g1 should stay constant, if Pc,2 becomes constant in the self-similar stage. The amplitude g1 can be estimated, if, for example, Pc,2 is known from the DSA simulations; i.e., the CR pressure obtained with the proposed analytic form of gs should be equal to the value of Pc,2 from the DSA simulations. 
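To make the behavior shown in Figure 8 concrete, the short sketch below implements the two-power-law form (14) and integrates the two partial pressures of equation (15) on a logarithmic momentum grid. The slope values are the M0 = 10 estimates quoted in the text, and the normalizations g0 = g1 = 1 are placeholders; the sketch simply confirms numerically that PL/g0 and PH/g1 saturate once pmax \u226b 1 when qs > 4 and qt < 4.

```python
import numpy as np

q_s, q_t, alpha = 4.4, 3.75, 1.0
p_inj = 1.0e-2

def partial_pressures(p_max, n=4000):
    """P_L/g0 and P_H/g1 from eq. (15), integrated over d ln p on a log-uniform grid."""
    p = np.logspace(np.log10(p_inj), np.log10(p_max), n)
    dlnp = np.log(p[1] / p[0])
    w = p / np.sqrt(p**2 + 1.0)                       # p / sqrt(p^2 + 1)
    cut = np.exp(-(p / (1.5 * p_max))**(2 * alpha))   # exponential cutoff of eq. (14)
    PL = np.sum((p / p_inj)**(-(q_s - 4.0)) * cut * w) * dlnp
    PH = np.sum((p / p_max)**(-(q_t - 4.0)) * cut * w) * dlnp
    return PL, PH

for p_max in (1e2, 1e4, 1e6, 1e8):
    PL, PH = partial_pressures(p_max)
    print(f"p_max = {p_max:.0e}:  P_L/g0 = {PL:.3f}   P_H/g1 = {PH:.3f}")
```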
Alternatively, as outlined in the appendix, empirical scaling relations established from simulations can connect Pc,2 through simple physics to basic shock parameters. Then all the parameters necessary to construct approximations to the CR distribution function as given in equation (14) at arbitrary time t are known for the self-similar evolution stage. Since the time-asymptotic, self-similar solution of evolving CR shocks cannot be found (semi-)analytically either from the conservation equations or from the boundary conditions, we have to rely at least in part on numerical simulations to estimate the parameters pinj, g0, \u03c3s, \u03c3t, and Pc,2 for given shock parameters. The analytic \ufb01tting forms that can approximate the DSA simulation results are described in the appendix. In Figures 4 and 5, we compare the analytic \ufb01tting formula in equation (14) with the results of our DSA simulations. They show good agreement. These plots also demonstrate that gs(pmax), and therefore, g1, remains constant in the self-similar evolution stage. The compression ratios shown in Figure 1 are \u03c3s \u22483.2 and \u03c3t \u22485.0, so the power-law indices calculated with these ratios are qs = 4.36 and qt = 3.75. But the numerical value of q = \u2212d ln fs/d ln p near pinj is 4.2, because the di\ufb00usion of low energy particles is not resolved fully. The minimum value of q = \u2212d ln fs/d ln p near pmax is 3.79, slightly larger than qt, because of the exponential cuto\ufb00. Just to demonstrate how the proposed form of gs(p) \ufb01ts the simulation results, we use qs = 4.2 and qt = 3.76 instead for the curve shown in Figure 4. We note that Berezhko & Ellison (1999) suggested the minimum value of q is qmin = 3.5+(3.5\u22120.5\u03c3s)/(2\u03c3t \u2212\u03c3s \u22121). With our compression ratios, \u03c3s = 3.2 and \u03c3t = 5.0, this gives qmin = 3.83, which is slightly larger than our estimate of 3.79. Using equations (10) and (14), we can estimate the CR spectrum gs at arbitrary time in the self-similar stage, as demonstrated in Figure 9 . Here the value of g1 is \ufb01xed by setting Pc,2 = 0.30 at t = 1 and then the same value of g1 is used for the time t > 10. From the curves of cumulative Fs(< Z), we can see that Pc,2 stays almost constant with the constant value of g1, even though pmax increases \ufb01ve orders of magnitude. In fact, Pc,2/(\u03c10u2 s,i) increases from 0.30 to 0.32 as pmax increases from 105 to 1010. For such a long span of time, however, gs(Z) or Fs(Z) does not keep the same shape. At t = 105, the maximum momentum corresponds to pmax \u22481019(eV/c) for protons. One might ask how we can justify the validity of the proposed form of gs at t \u226b1, while our DSA simulations have been carried up to t \u223c10 \u221220. In the T6P1d model, \f\u2013 17 \u2013 pmax \u223c106 at t = 10. So, most CRs are already ultra-relativistic, and the CR spectrum evolves as expected (i.e., according to eq. [14]). As long as Pc,2 stays constant, the selfsimilarity of the precursor/subshock structure would be preserved even for t \u226b1. The stretching of the u(x) pro\ufb01le in the precursor should in\ufb02uence the slope of the CR spectrum in a self-consistent way as shown in Figure 6. There is no physical reason why such feedback between the precursor structure and the CR spectrum cannot be extended to t \u226b1, as long as the assumed CR di\ufb00usion model remains valid and the most energetic particles remain contained within the system. 
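A quick arithmetic check of the slope values quoted above (a sketch, using the \u03c3s = 3.2 and \u03c3t = 5.0 values from Figure 1):

```python
sigma_s, sigma_t = 3.2, 5.0
q_s = 3.0 * sigma_s / (sigma_s - 1.0)    # subshock slope, ~4.36
q_t = 3.0 * sigma_t / (sigma_t - 1.0)    # total-compression slope, ~3.75
# Berezhko & Ellison (1999) minimum slope quoted in the text, ~3.83:
q_min = 3.5 + (3.5 - 0.5 * sigma_s) / (2.0 * sigma_t - sigma_s - 1.0)
print(q_s, q_t, q_min)
```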
In realistic shocks, however, the assumption for Bohm di\ufb00usion could break down due to ine\ufb03cient generation of waves in the precursor. Moreover, highest energy particles escape from the system, when their di\ufb00usion length becomes larger that the physical extent of the shock. The e\ufb00ects of escaping particles will be explored further in the next section. We have focused here on moderately strong shock evolution with M0 \u227310, since it is much more complicated to study nonlinear DSA at weaker shocks with M0 < 10. Nonrelativistic CRs play a more signi\ufb01cant role within those shocks. For instance, since Pc is not dominated by relativistic CRs, we need to follow more accurately the di\ufb00usion of nonrelativistic particles on scales close to the physical subshock thickness. Consequently, the di\ufb00usion model and the numerical grid resolution become important. The solutions also depend sensitively on the injection momentum, especially for shocks with Mach numbers, M0 \u22722.5, where modi\ufb01cations are small, so the nearly test-particle CR spectrum is largely controlled by the injection momentum. Physics of thermal leakage injection, however, is not fully understood yet and we have only a working numerical model. Thus we defer discussion of semi-analytic discussion of evolving weak CR shocks to a separate paper. 3.4. Steady State Shocks with a \ufb01xed pub In realistic shocks, pmax(t) may reach an upper momentum boundary, pub, beyond which CRs escape upstream from the shock due to the di\ufb00usion length, lmax, approaching the physical size of the shocked system, or to lack of scattering waves at resonant scales of most energetic particles. From that time the precursor will cease to increase in scale and the selfsimilar evolution makes a transition into a stationary shock structure, or the one controlled by the overall dynamics of the situation. Because the shock energy is lost through particles escaping the system beyond pub, the self-similar broadening of the precursor is replaced by a constant precursor structure in steady state. We have calculated additional runs for the T6P1d model in which an upper momentum boundary condition, i.e., g(p) = 0.0 for p \u2265pub is enforced. In these simulations once \f\u2013 18 \u2013 pmax(t) has reached the given value of pub, the highest energy particles escape from the shock, the CR spectrum becomes steady and the precursor stops growing. Figure 10 shows the results of T61Pd model with pub = 105 and without the upper momentum boundary. The distribution function gs(p) at the shock as well as the precursor and subshock structures all become steady after t > 1 in the run with pub = 105. In the other run without particle escape, the precursor continues to broaden and pmax(t) increases with time. However, the postshock states (e.g., \u03c12 and Pc,2) in the two runs are quite similar and gs(p) in the steady state limit is almost the same as that of the run without particle escape at t \u22481, except the exponential tail above pmax. In Figure 8 we showed that Pc,2 stays constant as pmax(t) increases with time, if gs(p) follows the form given in equation (14). This explains why Pc,2 are very similar at di\ufb00erent times in the two runs. Minor di\ufb00erences are slightly lower Pc,2 and higher \u03c12 in the run with particle escape at pub. 
We note that the compression ratio greater than 4 results mainly from the combined e\ufb00ect of the precursor compression and the subshock jump, i.e., \u03c3t = \u03c3p \u00b7 \u03c3s, regardless of particle escape. Energy loss due to escaping particles enhances the compression behind the shock only slightly in this shock, since the loss rate is not signi\ufb01cant. In Figure 11 (a) and (b) snap shots are shown at t = 1 for the runs with pub = 104 and 105, and at t = 10 for the run with pub = 106. For comparison, we also show the time-dependent solutions at t = 1 and 10 for the run without particle escape, since in the evolving shock pmax \u2248105 and 106 at t = 1 and 10, respectively, for the T6P1d model. (At t = 0.1, pmax would reach roughly to 104, but by that time dynamical equilibrium has not been achieved and the self-similar evolution has not begun yet in the simulations.) The precursor structure shown in the pro\ufb01le of Pc re\ufb02ects the di\ufb00usion length of highest momenta, ld(pub) \u221dpub or ld(pmax) \u221dpmax(t). Here the CR pressure is plotted against \u03be = x/(us,it), since the results at two di\ufb00erent times are shown together. So for example, the precursor width in \u03be is the same for the run with pub = 105 at t = 1 (dashed line) and the run with pub = 106 at t = 10 (long dashed line). Compared to these two runs, the run without particle escape at t = 1 and 10 (solid lines) have a wider precursor due to the particles in the exponential tail above pmax(t). In Figure 11 (c) and (d) we demonstrate that the evolution of the shock structure is quite similar and the shock approaches similar asymptotic states for all the runs, almost independent of pub or pmax(t), which is consistent with Figure 8. The asymptotic value of Pc,2 is slightly lower and the precursor width is smaller in the runs with smaller pub, as expected. Otherwise, the steady solutions with di\ufb00erent pub are approximately the same as the time-dependent solutions at the time t when pmax(t) equals to pub. Thus the proposed form of gs(p) can be applied to steady state shocks with an upper momentum boundary pub = pmax as well, ignoring the exponential tail above pmax. Even in the case where the shock structure is signi\ufb01cantly a\ufb00ected by the energy loss due to escaping \f\u2013 19 \u2013 particles, equation (14) can provide the steady state solution for gs(p), if the shock structures (\u03c3s, \u03c3t and postshock states) are known. 4. Summary We have studied the time-dependent evolution of the CR spectrum at CR modi\ufb01ed shocks in plane-parallel geometry, in which particles are accelerated to ever higher energies; that is, the maximum momentum pmax is not pre\ufb01xed. We adopted Bohm di\ufb00usion as well as the di\ufb00usion with the power-law momentum dependence of \u03ba(p) \u221dp\u03b1 with 0.5 \u2264\u03b1 \u22641. Thermal leakage injection of suprathermal particles into the CR population at the subshock and \ufb01nite Alfv\u00b4 en wave transport are included. Simulation parameters target nonrelativistic shocks with M0 \u227310 in warm photoionized and hot shock-heated astrophysical environments with magnetic \ufb01eld strengths somewhat below equipartition with the thermal plasma. Unlike gasdynamic shocks, the time-asymptotic dynamical state of the evolving CR modi\ufb01ed shocks under consideration here cannot be found analytically either from the conservation equations or from the boundary conditions. 
So we rely on the kinetic simulations of di\ufb00usive shock acceleration to \ufb01nd the time-asymptotic state in the self-similar evolution stage. The general characteristics of the evolution of shock structure and particle spectrum can be summarized as follows: 1) The width of the precursor, H, scales with the di\ufb00usion length of the most energetic particles and for di\ufb00usion that scales as \u03ba = \u03ba\u2217(\u03c10/\u03c1)\u03bdp\u03b1, increases linearly with time, i.e., H \u2248lmax \u22480.1ust, independent of the magnitude (\u03ba\u2217)and the value of \u03b1. 2) If the acceleration time scale to reach relativistic energies from injection is much shorter than the dynamical time scale of the shock system (i.e., \u03ba\u2217\u226a0.1usR, where R is the characteristic size of the shock), the CR pressure at the subshock approaches a constant value as the Pc at the shock becomes a signi\ufb01cant fraction of the momentum \ufb02ux through the shock, \u223c\u03c10u2 0. For typical nonrelativistic shocks associated with cosmic structure this transition roughly corresponds to a time when pmax becomes ultra-relativistic. Once this dynamical equilibrium develops, the shock precursor compression and the subshock jump are steady, leading to a self-similar stretching of the precursor with time. Consequently, the subshock compression ratio, \u03c3s, the total compression ratio, \u03c3t, as well as the postshock gas and CR pressures, Pg,2 and Pc,2, remain constant during the self-similar stage of the shock. 3) The lowest energy particles di\ufb00use on a scale lmin = \u03ba(pinj)/us and, so, experience only the compression across the subshock. Thus, near the injection momentum, pinj, the CR distribution function is given by f(p) \u2248fs,th(pinj)(p/pinj)\u2212qs where fs,th is the thermal \f\u2013 20 \u2013 Maxwellian distribution of the postshock gas and qs = 3\u03c3s/(\u03c3s \u22121). The amplitude fth(pinj) is determined by the thermal leakage injection physics, since that establishes pinj. 4) The most energetic particles di\ufb00use on a scale lmax = \u03ba(pmax)/us and, so, experience the total compression across the entire shock structure. Consequently, near pmax, f(p) \ufb02attens to (p/pmax)\u2212qt, where qt = 3\u03c3t/(\u03c3t \u22121). For p > pmax, f(p) is suppressed by an exponential cuto\ufb00. Considering these facts, we proposed that the CR spectrum at the subshock for arbitrary time t after self-similar evolution begins can be described approximately by the following simple analytic formula: fs(p, t) = \" f0 \u00b7 \u0012 p pinj \u0013\u2212qs + f1 \u00b7 \u0012 p pmax(t) \u0013\u2212qt# exp \" \u2212 \u0012 p 1.5pmax(t) \u00132\u03b1# , (16) where f0 = fs,th(pinj) and pmax \u221d(u2 st/\u03ba\u2217)1/\u03b1 is given in equation (10). The parameters, pinj, qs and qt can be estimated from the shock structure in the self-similar stage using DSA simulations results as outlined in the appendix. The amplitude, f1, has to satisfy the relation gs(pmax) = fs(pmax)p4 max \u2248constant in order for the postshock Pc to remain steady. So, the momentum distribution function g(p) is shifted to higher pmax in time, while keeping the amplitude at pmax constant in the self-similar stage. Hence pmax is the only time-dependent parameter in equation (16). 
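The summary formula can be assembled in a few lines. The sketch below is a schematic implementation of equation (16): pmax(t) uses the estimate pmax \u2248 (fc us^2 t/\u03ba\u2217)^{1/\u03b1} quoted earlier (fc \u2248 0.1 for \u03b1 = 1), f1 is tied to a constant g1 through the constraint gs(pmax) = fs(pmax) pmax^4 \u2248 const stated above, and f0, g1 are placeholder normalizations rather than fitted values.

```python
import numpy as np

p_inj, q_s, q_t, alpha = 1.0e-2, 4.36, 3.75, 1.0
u_s, kappa_star, f_c = 0.9, 1.0e-6, 0.1
f_0, g_1 = 1.0, 1.0e-3                     # placeholder amplitudes

def p_max(t):
    return (f_c * u_s**2 * t / kappa_star)**(1.0 / alpha)

def f_s(p, t):
    """Equation (16), with f_1(t) = g_1 / p_max(t)^4 so that g_s(p_max) stays constant."""
    pm = p_max(t)
    f_1 = g_1 / pm**4
    cut = np.exp(-(p / (1.5 * pm))**(2 * alpha))
    return (f_0 * (p / p_inj)**(-q_s) + f_1 * (p / pm)**(-q_t)) * cut

for t in (1.0, 10.0, 100.0):
    pm = p_max(t)
    print(f"t = {t:6.1f}   p_max = {pm:.2e}   g_s(p_max) = {f_s(pm, t) * pm**4:.3e}")
```

The printed g_s(p_max) stays essentially fixed while p_max stretches by two orders of magnitude, which is the sense in which p_max is the only time-dependent parameter.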
In a realistic shock geometry, however, CRs may escape upstream from the shock due to largest di\ufb00usion length approaching the physical size of the shocked system, or due to lack of scattering waves at resonant scales of most energetic particles. Once pmax approaches some upper momentum boundary at pup, the shock structure and the CR spectrum develop steady states that are approximately the same as the evolving forms with pmax = pup, except that some di\ufb00erences in the shock structure due to energy loss from escaping particles. Otherwise, the shock structure parameters and the approximate analytic form for the CR spectrum in the self-similar stage are consistent with previously proposed analytic and semi-analytic steady state solutions (e.g., Berezhko & Ellison 1999; Amato & Blasi 2005). Finally, we note that the evolution of the CR spectrum is secular in terms of the variable, Z = ln(p/pinj)/ ln(pmax/pinj), which alluded wrongfully the self-similar evolution of the partial pressure function Fs(Z) in Paper I. In fact there is no similarity relation between p and t. HK was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (R04-2006-000-100590). DR was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2007-341-C00020). \f\u2013 21 \u2013 TWJ is supported at the University of Minnesota by NASA grant NNG05GF57G, NSF grant Ast-0607674 and by the Minnesota Supercomputing Institute. A. Analytic approximations for dynamical states As we noted in the Introduction, there are several analytic and semi-analytic treatments of strong, steady-state CR modi\ufb01ed shocks. The full time-asymptotic state of evolving CR modi\ufb01ed shocks can be obtained only through numerical simulations of nonlinear DSA. However, such simulations show strong similarities between steady-state and asymptotic, evolving shocks. Here we outline some of those basic dynamical relations as they can be estimated analytically and empirically from our simulations, as reported in this paper and previously in Paper I. A key to this comparison is the fact that the time scale for evolution of the shock precursor is the acceleration time scale to reach pmax, tacc \u223c10(lmax/us) (see eq. [9]), which is characteristically an order of magnitude greater than the time scale for a \ufb02uid element to pass through the precursor, tdyn \u223clmax/us. Then, in following a \ufb02uid element through the precursor, one can neglect terms \u2202/\u2202t compared to terms u\u2202/\u2202x in evaluating the Lagrangian time variation, d/dt. For example, equation (3), which can be expressed as d dt \u0012 Pg \u03c15/3 \u0013 = 2 3 W \u03c15/3, (A1) assuming \u03b3g = 5/3, then gives for an evolving precursor Pg,1 \u2248 \u0012 Pg,0 + 2 5\u03c10u2 0I \u0013 \u03c35/3 p , (A2) where \u03c3p = \u03c11/\u03c10 is the precursor compression factor. The quantity I = 5 3u3 0\u03c11/3 0 Z |W| \u03c12/3 dx (A3) was introduced in Paper I, and measures entropy added by Alfv\u00b4 en wave dissipation while the \ufb02uid element crosses the precursor, normalized by u2 0\u03c10/\u03c15/3 0 . Since equation (A2) applies to an evolving shock, the subscripts \u20180\u2019 and \u20181\u2019 refer to states of a given \ufb02uid element as it enters the precursor and as it reaches the subshock. The approximation comes from neglecting explicit time variations in |W| and \u03c1 in evaluating I. Equation (A2) is exact for a steady state shock. 
In the absence of Alfv\u00b4 en wave dissipation, this equation simply states the properties of adiabatic compression through the precursor, which obviously does not depend on the precursor being steady state. \f\u2013 22 \u2013 Along similar lines, momentum conservation of a \ufb02uid element passing through the (slowly) evolving precursor gives Pc,1 + Pg,1 \u2248Pg,0 + \u03c10u2 0 \u0012 1 \u22121 \u03c3p \u0013 , (A4) which can be combined with equation (A2) to produce a simple estimate for the CR pressure at the subshock, Pc,1 = Pc,2 \u2248\u03c10u2 0 \" 1 \u22121 \u03c3p \u22123 5 \u03c35/3 p \u22121 M2 0 \u22122 5I\u03c35/3 p # . (A5) By substituting equation (A5) into equation (A3) along with equation (4), one can obtain I \u22485 3 vA,0 u0 Pc,1 \u03c10u2 0 , (A6) where, once again, the approximation re\ufb02ects neglect of explicit time variation in the shock structure during passage of a \ufb02uid element through the shock. Substituting this back into equation (A5) we obtain Pc,2 \u03c10u2 0 \u2248 \" 1 \u22121 \u03c3p \u22123 5 \u03c35/3 p \u22121 M2 0 # \u0014 1 + 2 3 vA,0 u0 \u03c35/3 p \u0015\u22121 . (A7) Given Pc,1 = Pc,2 from equation (A7) and using equation (A4) it is straightforward to determine, as well, Pg,1. Although we can estimate approximately the postshock pressures, Pg,2 and Pc,2, for a given value of precursor compression, we must rely on numerical simulations to obtain the value of \u03c3p for di\ufb00erent model parameters. In the remainder of this appendix we present some practical expressions for the shock dynamical properties obtained in our DSA simulations using a wide range of Mach numbers for the thermal injection parameter \u01ebB = 0.2, the Alfv\u00b4 en wave transport parameter, \u03b8 = 0.1 and the di\ufb00usion coe\ufb03cient, \u03ba = \u03ba\u2217p(\u03c1/\u03c10). In Figure 11 the time-asymptotic values of postshock CR pressure, gas pressure and compression ratios are plotted against the initial shock Mach number (M0 \u22651.5). For M0 \u22642.5, the CR modi\ufb01cation is negligible, so the postshock gas pressure and the shock compression ratios \u03c3t = \u03c3s are given by the usual Rankine-Hondo relation for pure gasdynamic shocks. For M0 > 2.5, the numerical results for the postshock gas pressure can be \ufb01tted by Pg,2 \u03c10u2 s,i \u22480.4 \u0012M0 10 \u0013\u22120.4 (A8) \f\u2013 23 \u2013 The time-asymptotic density compression ratios can be approximated as follows: \u03c3s \u22483.2 \u0012M0 10 \u00130.17 for 2.5 \u2264M0 \u226410, (A9) \u03c3s \u22483.2 \u0012M0 10 \u00130.04 for M0 > 10, \u03c3t \u22485.0 \u0012M0 10 \u00130.42 for 2.5 \u2264M0 \u226410, (A10) \u03c3t \u22485.0 \u0012M0 10 \u00130.32 for M0 > 10. We note that the subshock compression depends only weakly on M0, while the total compression increases approximately as M1/3 0 . Even for strong shocks with M0 up to 100, the total compression ratio is less than 10, because the propagation and dissipation of Alfv\u00b4 en waves upstream reduces the CR acceleration and the precursor compression. The postshock CR pressure can be \ufb01t empirically as follows: Pc,2 \u03c10u2 s,i \u22482.34 \u00d7 10\u22122(M0 \u22121)3 for 1.5 < M0 < 2.5, Pc,2 \u03c10u2 s,i \u22480.58(M0 \u22121)4 M4 0 \u22122.14(M0 \u22121)3 M4 0 + 13.7(M0 \u22121)2 M4 0 (A11) \u221227.0(M0 \u22121) M4 0 + 15.0 M4 0 for 2.5 \u2264M0 \u2264100, Pc,2 \u03c10u2 s,i \u22480.55 for M0 > 100. These \ufb01ts are plotted in solid lines in Figure 11. 
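For reference, the fits (A8)-(A11) and the estimate (A7) are straightforward to evaluate; the sketch below does so at a few Mach numbers. The Alfv\u00b4 en ratio vA,0/u0 is a free input here (its default value is an assumption for illustration), and only rough agreement between the two Pc,2 estimates should be expected.

```python
def sigma_s(M0):
    """Equation (A9): subshock compression ratio fit."""
    return 3.2 * (M0 / 10.0)**(0.17 if M0 <= 10.0 else 0.04)

def sigma_t(M0):
    """Equation (A10): total compression ratio fit."""
    return 5.0 * (M0 / 10.0)**(0.42 if M0 <= 10.0 else 0.32)

def Pc2_fit(M0):
    """Equation (A11): postshock CR pressure in units of rho_0 u_{s,i}^2."""
    if M0 < 2.5:
        return 2.34e-2 * (M0 - 1.0)**3
    if M0 <= 100.0:
        m = M0 - 1.0
        return (0.58 * m**4 - 2.14 * m**3 + 13.7 * m**2 - 27.0 * m + 15.0) / M0**4
    return 0.55

def Pc2_from_A7(M0, vA0_over_u0=0.05):
    """Equation (A7): P_c,2/(rho_0 u_0^2) from the precursor compression sigma_p."""
    sp = sigma_t(M0) / sigma_s(M0)
    numer = 1.0 - 1.0 / sp - 0.6 * (sp**(5.0 / 3.0) - 1.0) / M0**2
    return numer / (1.0 + (2.0 / 3.0) * vA0_over_u0 * sp**(5.0 / 3.0))

for M0 in (5.0, 10.0, 30.0):
    print(f"M0={M0:5.1f}  sigma_s={sigma_s(M0):.2f}  sigma_t={sigma_t(M0):.2f}"
          f"  Pc2(A11)={Pc2_fit(M0):.3f}  Pc2(A7)={Pc2_from_A7(M0):.3f}")
```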
Since \u03c3p = \u03c3t/\u03c3s, equations (A9) and (A10) can be used along with equation (A7) to estimate Pc,2 (dotted line in Fig. 11). In Kang et al. (2002) we showed that the e\ufb00ective injection momentum is pinj/pth \u22482.5 for M0 \u227310 for the injection parameter \u01ebB = 0.2, where pth = 2 p kT2/mpc2 and T2 = (Pg,2/\u03c12)(mp/k) is the postshock gas temperature. Then the thermal distribution at the injection momentum, gs,th(pinj), can be calculated from the Maxwell distribution, since the postshock gas states, T2 and \u03c12, are known." + }, + { + "url": "http://arxiv.org/abs/0705.3274v1", + "title": "Self-Similar Evolution of Cosmic-Ray-Modified Quasi-Parallel Plane Shocks", + "abstract": "Using an improved version of the previously introduced CRASH (Cosmic Ray\nAcceleration SHock) code, we have calculated the time evolution of cosmic-ray\n(CR) modified quasi-parallel plane shocks for Bohm-like diffusion, including\nself-consistent models of Alfven wave drift and dissipation, along with thermal\nleakage injection of CRs. The new simulations follow evolution of the CR\ndistribution to much higher energies than our previous study, providing a\nbetter examination of evolutionary and asymptotic behaviors. The postshock CR\npressure becomes constant after quick initial adjustment, since the evolution\nof the CR partial pressure expressed in terms of a momentum similarity variable\nis self-similar. The shock precursor, which scales as the diffusion length of\nthe highest energy CRs, subsequently broadens approximately linearly with time,\nindependent of diffusion model, so long as CRs continue to be accelerated to\never-higher energies. This means the nonlinear shock structure can be described\napproximately in terms of the similarity variable, x/(u_s t), where u_s is the\nshock speed once the postshock pressure reaches an approximate time asymptotic\nstate. As before, the shock Mach number is the key parameter determining the\nevolution and the CR acceleration efficiency, although finite Alfven wave drift\nand wave energy dissipation in the shock precursor reduce the effective\nvelocity change experienced by CRs, so reduce acceleration efficiency\nnoticeably, thus, providing a second important parameter at low and moderate\nMach numbers.", + "authors": "Hyesung Kang, T. W. Jones", + "published": "2007-05-23", + "updated": "2007-05-23", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "Introduction Astrophysical plasmas, from the interplanetary gas inside the heliosphere to the galaxy intracluster medium (ICM), are magnetized and turbulent and contain nonthermal particles in addition to gas thermal particles. So, understanding complex interactions among these di\ufb00erent components is critical to the study of many astrophysical problems. In collisionless shocks entropy is generated via collective electromagnetic viscosities, i.e., interactions of charged particles with turbulent \ufb01elds [36]. Some suprathermal particles of the shock heated gas can leak upstream, their streaming motions against the background \ufb02uid exciting MHD Alfv\u00b4 en waves upstream of the shock [6,32]. Then those particles can be further accelerated to very high energies through multiple shock crossings resulting from resonant scatterings with the self-excited Alfv\u00b4 en waves in the \ufb02ows converging across the shock [13,10,36]. 
Detailed nonlinear treatments of di\ufb00usive shock acceleration (DSA) account for incoming thermal particles injected into the CR population (e.g., [19,37,21]) as a consequence of incomplete thermalization by collisionless dissipation processes. Those particles, while relatively few in number, can subsequently accumulate a major fraction of the shock kinetic energy as their individual energies increase [16,26]. Such predictions are supported by a variety of observations including direct measurements of particle spectra at interplanetary shocks, nonthermal \u03b3-ray, X-ray and radio emissions of supernova remnant shocks and also possibly the ICM of some X-ray clusters (e.g., [10,3,41]). CR acceleration may be universal to astrophysical shocks in di\ufb00use, ionized media on all scales. Unlike an ideal gasdynamic shock, downstream states of a CR modi\ufb01ed shock cannot be determined in a straightforward way by simple jump conditions across the shock front. This is because the shock transition depends on the CR pressure distribution upstream of the dissipative subshock. The particle acceleration takes place on di\ufb00usion time and length scales (td(p) = \u03ba(p)/u2 s and ld(p) = \u03ba(p)/us, where \u03ba(p) is di\ufb00usion coe\ufb03cient and us is the shock speed), which are much larger than the shock dissipation scales. Unless or until some boundary condition limits the maximum CR momentum, this structure will continue to evolve along with the CR distribution. Thus the evolution of a CR Email addresses: kang@uju.es.pusan.ac.kr (Hyesung Kang), twj@astro.umn.edu (T. W. Jones). URL: www.astro.umn.edu/\u223ctwj (T. W. Jones). 2 \fshock with a \ufb01nite age and size should properly be followed by time-dependent numerical simulations. In addition, complex interplay among CRs, resonant waves, and the underlying gas \ufb02ow (i.e., thermal leakage injection, self-excited waves, resonant scatterings of particles by waves, and non-linear feedback to the gas \ufb02ow) is model dependent and not yet understood completely. In the time dependent kinetic equation approach to numerical study of CR acceleration at shocks, the di\ufb00usion-convection equation for the particle momentum distribution, f(p, x, t), is solved, along with suitably modi\ufb01ed gasdynamic equations (e.g., [25]). Since accurate solutions to this equation require a computational grid spacing smaller than the particle di\ufb00usion length, ld(p), and since realistic di\ufb00usion coe\ufb03cients have steep momentum dependence, a wide range of length scales must be resolved in order to follow the CR acceleration from the injection momentum (typically pinj/mpc \u223c10\u22122) to highly relativistic momenta (p/mpc \u226b1). This constitutes an extremely challenging numerical task, which can require rather extensive computational resources, especially if one allows temporal and spatial evolution of di\ufb00usion behavior. To overcome this numerical problem in a generally applicable way we have built the CRASH (Cosmic-Ray Acceleration SHock) code by implementing Adaptive Mesh Re\ufb01nement (AMR) techniques and subgrid shock tracking methods [25,30] in order to enhance computational e\ufb03ciency. The CRASH code also treats thermal leakage injection self-consistently by adopting a shock transparency function for suprathermal particles in the shock [26]. 
We previously applied our CRASH code in a plane-parallel geometry to calculate the nonlinear evolution of CR modi\ufb01ed shocks in the absence of signi\ufb01cant local Alfv\u00b4 en wave heating and advection [26,27,29]. For those models the shock sonic Mach number, M0, largely controlled the thermal leakage injection rate and the CR acceleration e\ufb03ciency in evolving modi\ufb01ed planar shocks, since M0 determines the relative velocity jump across the shock and consequently the degree of shock modi\ufb01cation by CRs. In all but some of the highest Mach number shocks the CR injection rate and the postshock CR pressure approached time-asymptotic values when a balance was achieved between acceleration/injection and di\ufb00usion/advection processes. This resulted in an approximate \u201cself-similar\u201d \ufb02ow structure, in the sense that the shock structure broadened approximately linearly in time, so that the shock structure could be expressed in terms of the similarity coordinate ust. It is likely that all of the models would have reached such asymptotic dynamical structures eventually, but performance limits in the version of the code in use at that time prevented us from extending some of the simulations long enough to con\ufb01rm that. The CR distribution evolved only to pmax/mpc \u223c10\u2212103. Based on the self-similar evolution reported in our previous work, we calculated the ratio of CR energy to in\ufb02owing kinetic energy, \u03a6, (see Eq. [12] below) as a measure of the CR acceleration e\ufb03ciency in a time-asymptotic limit. The CR energy ratio, \u03a6, increased with the shock Mach number, but approached \u22480.5 3 \ffor large shock Mach numbers, M0 > 30, and it was relatively independent of other upstream properties or variation in the injection parameter. In those shocks where we observed time asymptotic dynamical behaviors the postshock CR pressures were \u223c30 60% of the ram pressure in the initial shock frame, this ratio increasing with Mach number. For some of the highest Mach number shocks in that study CR pressure continued to increase to the end of the simulation, so the \ufb01nal values could not be measured. Finally, the presence of a preexisting, upstream CR population was seen in those earlier simulations to be equivalent to having slightly more e\ufb03cient thermal leakage injection for such strong shocks, while it could substantially increase the overall CR energy in moderate strength shocks with M0 < 3. In the present paper, we revisit the problem of self-similar evolution of CR modi\ufb01ed shocks with a substantially improved numerical scheme that enables us to follow the particle acceleration to energies much higher than we considered before. This allows us to measure asymptotic dynamical properties for all the newly simulated shocks and to demonstrate that the self-similar evolution of the CR partial pressure in terms of a momentum similarity variable leads to the constancy of the postshock CR pressure. We also include in the new work the e\ufb00ects of Alfv` en wave drift and dissipation in the shocks. The time asymptotic CR acceleration e\ufb03ciency is once again controlled by the shock Mach number but diminished as the ratio of Alfv\u00b4 enic Mach number to sonic Mach number decreases. The asymptotic shock properties are largely independent of the magnitude of the spatial di\ufb00usion coe\ufb03cient and also its subrelativistic momentum dependence. The basic equations and details of the numerical method are described in \u00a72. 
We present simulation results for a wide range of shock parameters in \u00a73, followed by a summary in \u00a74. 2 Numerical Method 2.1 Basic Equations The evolution of CR modi\ufb01ed shocks depends on a coupling between the gasdynamics and the di\ufb00usive CRs. That coupling takes place by way of resonant MHD waves, although it is customary to express the pondermotive wave force and dissipation in the plasma through the associated CR pressure distribution properties along with a characteristic wave propagation speed (usually the Alfv\u00b4 en speed) (e.g., [42,1]). Consequently, in our simulations we solve the standard gasdynamic equations with CR pressure terms added in the conservative, Eulerian formulation for one dimensional plane-parallel geometry. The 4 \fevolution of a modi\ufb01ed entropy, S = Pg/\u03c1\u03b3g\u22121, is followed everywhere except across the subshock, since for strongly shocked \ufb02ows, numerical errors in computing the gas pressure from the total energy can lead to spurious entropy generation with standard methods, especially in the shock precursor [26]. \u2202\u03c1 \u2202t + \u2202(u\u03c1) \u2202x = 0, (1) \u2202(\u03c1u) \u2202t + \u2202(\u03c1u2 + Pg + Pc) \u2202x = 0, (2) \u2202(\u03c1eg) \u2202t + \u2202 \u2202x(\u03c1egu + Pgu) = \u2212u\u2202Pc \u2202x (3) \u2202S \u2202t + \u2202 \u2202x(Su) = +(\u03b3g \u22121) \u03c1\u03b3g\u22121 [W(x, t) \u2212L(x, t)], (4) where Pg and Pc are the gas and the CR pressure, respectively, eg = Pg/[\u03c1(\u03b3g \u22121)]+ u2/2 is the total energy of the gas per unit mass. The remaining variables, except for L and W have standard meanings. The injection energy loss term, L(x, t), accounts for the energy carried by the suprathermal particles injected into the CR component at the subshock and is subtracted from the postshock gas immediately behind the subshock. Gas heating due to Alfv\u00b4 en wave dissipation in the upstream region is represented by the term W(x, t) = \u2212vA\u2202Pc/\u2202x, where vA = B/\u221a4\u03c0\u03c1 is the Alfv\u00b4 en speed. This commonly used dissipation expression derives from a quasi-linear model in which Alfv\u00b4 en waves are ampli\ufb01ed by streaming CRs and dissipated locally as heat in the precursor region (e.g., [22]). The CR population is evolved by solving the di\ufb00usion-convection equation in the form, \u2202g \u2202t + (u + uw)\u2202g \u2202x = 1 3 \u2202 \u2202x(u + uw)(\u2202g \u2202y \u22124g) + \u2202 \u2202x[\u03ba(x, y)\u2202g \u2202x], (5) where g = p4f, with f(p, x, t) the pitch angle averaged CR distribution, and where y = ln(p), while \u03ba(x, p) is the spatial di\ufb00usion coe\ufb03cient [42]. For simplicity we always express the particle momentum, p in units mpc and consider only the proton CR component. The wave speed is set to be uw = vA in the upstream region, while we use uw = 0 in the downstream region. This term re\ufb02ects the fact that the scattering by Alfv\u00b4 en waves tends to isotropize the CR distribution in the wave frame rather than the bulk-\ufb02ow, gas frame [42]. Upstream, the waves are expected to be dominated by the streaming instability, so face upwind. Behind the shock, various processes, including wave re\ufb02ection, are expected to lead to a more nearly isotropic wave \ufb01eld (e.g., [2]). 5 \fEqs. (1)-(5) are simultaneously integrated by the CRASH code in planeparallel geometry. The detailed numerical description can be found in Kang et al. 2002 [26]. 
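For intuition about what equation (5) involves numerically, here is a deliberately simplified, explicit single-step update on a uniform grid. It is written for this summary and is not the CRASH scheme, which uses AMR, subgrid shock tracking, and a much more careful treatment of the stiff diffusion term; the array names and the first-order discretizations are illustrative assumptions.

```python
import numpy as np

def step_diffusion_convection(g, x, y, u_eff, kappa, dt):
    """One explicit update of eq. (5) for g(x, y), y = ln p.
    g, kappa: arrays of shape (nx, ny); u_eff = u + u_w on the x grid, shape (nx,).
    Explicit stability requires dt < dx**2 / (2 max(kappa)), which is exactly why
    realistic, steeply momentum-dependent kappa makes this approach impractical."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    gnew = g.copy()

    # 1) spatial advection, first-order upwind
    dgdx = np.zeros_like(g)
    dgdx[1:-1] = np.where(u_eff[1:-1, None] > 0.0,
                          (g[1:-1] - g[:-2]) / dx,
                          (g[2:] - g[1:-1]) / dx)
    gnew -= dt * u_eff[:, None] * dgdx

    # 2) momentum shift from flow compression: (1/3) d(u+u_w)/dx * (dg/dy - 4 g)
    dudx = np.gradient(u_eff, dx)
    dgdy = np.gradient(g, dy, axis=1)
    gnew += dt * (dudx[:, None] / 3.0) * (dgdy - 4.0 * g)

    # 3) spatial diffusion, explicit flux form with face-centered kappa
    kface = 0.5 * (kappa[1:] + kappa[:-1])
    grad = (g[1:] - g[:-1]) / dx
    div = np.zeros_like(g)
    div[1:-1] = (kface[1:] * grad[1:] - kface[:-1] * grad[:-1]) / dx
    gnew += dt * div
    return gnew
```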
A key performance feature of the CRASH code is multiple levels of re\ufb01ned grids (typically lg = 8 \u221210) strategically laid around the subshock to resolve the di\ufb00usion length scale of the lowest energy particles near injection momenta. Grid re\ufb01nement spans a region around the subshock just large enough to include comfortably the di\ufb00usion scales of dynamically important high energy CRs with enough levels to follow freshly injected low energy CRs with su\ufb03cient resolution to produce converged evolutionary behaviors. To accomplish grid re\ufb01nement e\ufb00ectively it is necessary to locate the subshock position exactly. Thus, we track the subshock as a moving, discontinuous jump inside the initial, uniform and \ufb01xed grid [25]. 2.2 Di\ufb00usion Model We considered in this study two common choices for di\ufb00usion models. First is the Bohm di\ufb00usion model, which represents scattering expected for a saturated wave spectrum and gives what is generally assumed to be the minimum di\ufb00usion coe\ufb03cient as \u03baB = 1/3rg\u03c5, when the particles scatter within a path of one gyration radius (i.e., \u03bbmfp \u223crg). This gives \u03baB(p) = \u03ban p2 (p2 + 1)1/2. (6) The coe\ufb03cient \u03ban = mc3/(3eB) = (3.13 \u00d7 1022cm2s\u22121)B\u22121 \u00b5 , where B\u00b5 is the magnetic \ufb01eld strength in units of microgauss. There has been much discussion recently about ampli\ufb01cation of the large scale magnetic \ufb01eld within the shock precursor (e.g., [32,35,44]). Since physical models of that evolution are still not well developed, we will assume for simplicity in the simulations presented here that the large scale \ufb01eld is constant through the shock structure. Because of its steep momentum dependence in the nonrelativistic regime, the Bohm di\ufb00usion model requires an extremely \ufb01ne spatial grid resolution whenever nonrelativistic CRs are present. On the other hand the form of \u03ba(p) for nonrelativistic momenta mostly impacts only early evolution of CR-modi\ufb01ed shocks, when CR feedback is dominated by nonrelativistic particles and thermal leakage injection rates are adjusting rapidly to changes from initial conditions. So, to concentrate computational e\ufb00ort more e\ufb03ciently, we adopted in some previous works [25,30] a \u201cBohm-like\u201d di\ufb00usion coe\ufb03cient that includes a weaker momentum dependence for the non-relativistic regime, \u03baBL(p) = \u03banp. (7) 6 \fAccording to those previous studies, the di\ufb00erences in results between the two models are minor except during early nonlinear shock evolution, as expected. Thanks to the weaker momentum dependence of \u03baBL we can, for given computational resources, calculate numerically converged models with smaller \u03ban resulting in the acceleration of CRs to higher momenta. In order to quench the well-known CR acoustic instability in the precursor of highly modi\ufb01ed CR shocks (e.g., [24]), we assume a density dependence for the di\ufb00usion coe\ufb03cient, (\u03c1/\u03c10)\u22121, so that \u03ba(x, p) = \u03baB(\u03c1/\u03c10)\u22121 or \u03ba(x, p) = \u03baBL(\u03c1/\u03c10)\u22121, where \u03c10 is the upstream gas density. This density dependence also models enhancement of the Alfv\u00b4 en wave magnetic \ufb01eld amplitude due to \ufb02ow compression. 
We note, also, for clarity that hereafter we use the subscripts \u20190\u2019, \u20191\u2019, and \u20192\u2019 to denote conditions far upstream of the shock, immediately upstream of the gas subshock and immediately downstream of the subshock, respectively. 2.3 Thermal Leakage Model In the CRASH code suprathermal particles are injected as CRs self-consistently via \u201cthermal leakage\u201d through the lowest CR momentum boundary. The thermal leakage injection model emulates the \ufb01ltering process by which suprathermal particles well into the tail of the postshock Maxwellian distribution leak upstream across the subshock [37,36]. This \ufb01ltering is managed numerically by adopting a \u201ctransparency function\u201d, \u03c4esc(\u01ebB, \u03c5), that expresses the probability of supra-thermal particles at a given velocity, \u03c5, successfully swimming upstream across the subshock through the postshock MHD waves [21,26]. The one model parameter, \u01ebB = B0/B\u22a5, is the ratio of the amplitude of the postshock wave \ufb01eld interacting with the low energy particles, B\u22a5, to the general magnetic \ufb01eld, B0, which is aligned with the shock normal in these simulations. The transparency function \ufb01xes the lowest momentum of the CR component in our simulations from the condition that \u03c4esc > 0 (i.e., non-zero probability to cross the subshock for CRs) for p > p1, where p1 = (u2/c)(1 + \u01ebB)/\u01ebB and u2 is the downstream \ufb02ow speed in the subshock rest frame. Initially p1 is determined by the downstream speed of the initial shock, but it decreases as the subshock weakens and then it becomes constant after the CR modi\ufb01ed shock structures reach asymptotic states. Since suprathermal particles have to swim against the scattering waves advecting downstream, the subshock Mach number, Ms, is one of the key shock characteristics that control the injection fraction. Previous simulations showed that injection is less e\ufb03cient for weaker shocks, but becomes independent of M0 for strong shocks, when the subshock compression asymptotes [26]. For a given total shock Mach number, M0, on the other hand, the injection rate 7 \fis controlled mainly by the parameter \u01ebB In practice we have found that \u01ebB \u223c0.2 \u22120.25 leads to an injection fraction in the range \u223c10\u22124 \u221210\u22123. This is similar to commonly adopted values in other models that employ a \ufb01xed injection rate (e.g., [8,33,4]). Although somewhat higher \ufb01eld turbulence values (0.25 < \u01ebB < 0.3) are suggested theoretically for strong shocks [34], these evoke a start-up problem in the numerical simulations, since they lead to very rapid initial injection that cools the postshock \ufb02ow too strongly for it to remain numerically stable. Once the shock structure becomes nonlinear, however, those in\ufb02uences moderate greatly, so that, as we have shown previously, the ultimate CR acceleration behavior depends only weakly on \u01ebB. In fact, we have found previously for strong shocks that the time-asymptotic behaviors are very weakly dependent on \u01ebB [25], so its chosen value will have no in\ufb02uence on our \ufb01nal conclusions. We directly track the fraction of particles injected into the CR population as follows: \u03be(t) = R dx R p2 p1 4\u03c0fCR(p, x, t)p2dp n0us,0t , (8) where fCR is the CR distribution function above p1, while n0us,0t is the number of particles passed through the shock until the time t. 
The highest momentum of the CR component, p2, is chosen so that it is well above pmax at the simulation termination time, where pmax is de\ufb01ned in \u00a72.4. 2.4 CR Acceleration E\ufb03ciency The postshock thermal energy in a gasdynamic shock can, of course, be calculated analytically by the Rankine-Hugoniot jump condition. On the other hand, the CR population and the associated acceleration e\ufb03ciency at CR modi\ufb01ed shocks should properly be obtained through time-dependent integration of the shock structure from given initial states, since the CR distribution depends on the shock structure, which is not discontinuous and continues to evolve so long as the CR population evolves. In particular CR modi\ufb01ed shocks contain a smooth precursor in the upstream region whose scale height grows in time in proportion to the di\ufb00usion length of energetically dominant particles. The total shock compression may, similarly, evolve over a signi\ufb01cant time period. The standard expression for the mean acceleration timescale for a particle to reach momentum p in the test-particle limit of DSA theory is given by [31] \u03c4acc(p) = 3 u1 \u2212u2 (\u03ba1 u1 + \u03ba2 u2 ). (9) 8 \fIn the test particle limit the shock compression is given by the RankinHugoniot condition. Then assuming a \u03b3g = 5/3 gasdynamic shock and a di\ufb00usion coe\ufb03cient taking the density dependence indicated at the end of \u00a72.2, this leads to \u03c4acc(p) \u22488 M2 0 M2 0 \u22121 \u03ba(p) u2 s , (10) where us and M0 are the shock speed and sonic Mach number, respectively. While this expression should strictly speaking be modi\ufb01ed in highly modi\ufb01ed CR shocks, since the shock structure and the associated CR transport are more complex than assumed in Eq. (10) [11], we \ufb01nd it empirically to be reasonably consistent with our results described below. Accordingly, we may expect and con\ufb01rm below that the time-dependent evolution of our CR modi\ufb01ed shocks will be determined primarily by the shock Mach number and can be expressed simply in terms of di\ufb00usion length and time scales. Within this model the highest momentum expected to be accelerated in strong shocks by the time t is set according to Eq. (10) by the relation t \u22488\u03ba(pmax)/u2 s. In that case the scale height of the precursor or shock transition structure grows linearly with time as lshock \u223c\u03ba(pmax) us \u223c1 8ust, (11) independent of the magnitude or the momentum dependence of \u03ba(p). This evolution should continue until some other physics limits the increase in CR momentum, such as the \ufb01nite size of the shock system. Since the CR pressure approaches a time-asymptotic value (see Figs. 3-5 below), the evolution of the CR-modi\ufb01ed shock becomes, under these circumstances, approximately self-similar, independent of the form of the di\ufb00usion coe\ufb03cient, while lshock grows linearly with time [27,28,29]. On the other hand, \u03ba(pmax) \u2248lshockus, so for Bohm-like di\ufb00usion, pmax \u2248lshock(t)us/\u03ban. Thus, at a given time the CR distribution, g(p), extends for Bohm-like di\ufb00usion according to pmax \u2248 u2 st/(8\u03ban) \u221d1/\u03ban. Fig. 1 shows a comparison of three models of a M0 = 20 shock with \u02dc \u03baB = \u02dc \u03banp2/\u221ap2 + 1 using \u02dc \u03ban = 0.1, and with \u02dc \u03baBL = \u02dc \u03banp using \u02dc \u03ban = 10\u22124 and 10\u22126 in units de\ufb01ned in the following section. 
The upper left panel, displaying the evolution of postshock CR pressure, demonstrates similar time-asymptotic values for all three models. At early times the CR pressure evolution depends on details of the model, including numerical properties such as spatial and momentum grid resolutions, and the previously described injection suppression scheme used to prevent start up problems. 9 \fThe other panels in Fig. 1 illustrate shock structure comparisons for the three models at the end of the simulations. The shock structures, \u03c1(x) and Pc(x), are very similar, while the CR spectrum extends to di\ufb00erent values of pmax, inversely proportional to \u02dc \u03ban. The self-similar evolution of CR modi\ufb01ed shocks makes it useful to apply the ratio of CR energy to a \ufb01ducial kinetic energy \ufb02ux through the shock as a simple measure of acceleration e\ufb03ciency; namely, \u03a6(t) = R Ec(x, t)dx 0.5\u03c10u3 s,0t . (12) More speci\ufb01cally, this compares the total CR energy within the simulation box to the kinetic energy in the initial shock frame that has crossed the shock at a given time. As the shock structures approach time-asymptotic forms the above discussion suggests that \u03a6(t) also may approach time-asymptotic values. This is con\ufb01rmed in our simulations. We see also that the asymptotic \u03a6 values depend in our simulations primarily on shock sonic Mach number and are independent of \u03ba. The highest momenta achieved in our simulations are set by practical limits on computation time controlled by the vast range of di\ufb00usion times and lengths to be modeled. Still, the asymptotic acceleration e\ufb03ciency ratio is almost independent of the maximum momentum reached. For the three models shown in Fig. 1, for example, pmax \u223c10, 104, and 106 at \u02dc t = 10, depending on the value of \u03ban, but the CR energy ratio approaches similar values of \u03a6 \u223c0.4 for all three models. 3 Simulation Set Up and Model Parameters 3.1 Units and Initial Conditions The expected evolutionary behaviors described above naturally suggest convenient units. For example, given a suitable velocity scale, \u02c6 u, and length scale, \u02c6 x, a time scale \u02c6 t = \u02c6 x/\u02c6 u and di\ufb00usion coe\ufb03cient scale, \u02c6 \u03ba = \u02c6 x\u02c6 u = \u02c6 u2\u02c6 t are implied. Alternatively, one can select the velocity scale along with a convenient scale for the di\ufb00usion coe\ufb03cient, \u02c6 \u03ba, leading to a natural length scale, \u02c6 x = \u02c6 \u03ba/\u02c6 u and a related time scale, \u02c6 t = \u02c6 \u03ba/\u02c6 u2 = \u02c6 x/\u02c6 u. We will follow the latter convention in our discussion. In addition, given an arbitrary mass unit, \u02c6 m, which we take to be the proton mass, we can similarly normalize mass density in terms of \u02c6 \u03c1 = \u02c6 m/\u02c6 x3. Pressure is then expressed relative to \u02c6 \u03c1\u02c6 u2. For clarity we henceforth 10 \findicate quantities normalized by the above scales using a tilde; for example, \u02dc u, \u02dc t, and \u02dc \u03ban. We start each simulation with a pure gasdynamic, right-facing shock at rest in the computational grid. We use the upstream gas speed in this frame, u0 as our velocity scale, so that the initial, normalized shock speed is \u02dc us,0 = 1 with respect to the upstream gas. The upstream gas is speci\ufb01ed with one of two temperature values, either T0 = 104K or T0 = 106, which represent warm or hot phase of astrophysical di\ufb00use media, respectively. 
In astrophysical environments, for example, photoionized gas of 104 K is quite common. Hot and ionized gas of > 106 K is also found in the hot phase of the ISM [38,43]. The shock speed and the upstream temperature are related through the sonic Mach number, M0, by the usual relation us,0 = cs,0M0 = 15 km s\u22121(T0/104)1/2M0 = u0, where cs,0 is the sound speed of the upstream gas. So, by choosing T0 and M0, we set the physical value of the shock speed, which, in turn, determines the postshock thermal behavior. For CR distribution properties it is also necessary to de\ufb01ne the speed of light, c, in terms of \u03b2k = u0/c. The normalized upstream gas density is set to unity; i.e., \u02dc \u03c10 = 1. The preshock pressure is determined by the shock Mach number, \u02dc Pg,0 = (1/\u03b3g)M\u22122 0 , where the gas adiabatic index, \u03b3g = 5/3. The postshock states for the initial shocks are determined by the Rankine-Hugoniot shock jump condition. For models with T0 = 104K, M0 =10-80 is considered, since the gas would not be fully ionized at slower shocks (us < 150 km s\u22121), the postshock gas would often become radiative and the CR acceleration become ine\ufb03cient owing to wave dissipation from ion-neutral collisions (e.g., [15]). For models with T0 = 106K M0 = 2 \u221230 (300 km s\u22121 \u2264us \u22644500 km s\u22121) is considered, since the CR acceleration should be relatively independent of T0 for shocks with M0 > 30. In order to explore e\ufb00ects of pre-existing CRs, we also consider, as we did in our earlier work, models with T0 = 106 K that include an ambient (upstream) CR population, f(p) \u221dp\u22125 for p1 \u2264p \u2264p2 and set its pressure Pc,0 = (0.25 \u22120.3)Pg,0. For these models, we adopt \u03baB = 0.1p2/\u221ap2 + 1, p1 = (us,0/c)(1 + \u01ebB)/\u01ebB and p2 = 103. For strong shocks the presence of preexisting CRs is similar in e\ufb00ect to having a slightly higher injection rate, so the time asymptotic shock structure and CR acceleration e\ufb03ciency depend only weakly on such a pre-existing CR population [26,28]. On the other hand, for weak shocks, a pre-existing CR pressure comparable to the upstream gas pressure represents a signi\ufb01cant fraction of the total energy entering the shock, so pre-existing CRs obviously have far more impact. In addition, the time asymptotic CR acceleration e\ufb03ciency in weak shocks depends sensitively on the injection rate, so increases with increased shock transparency, controlled through \u01ebB (see \u00a72.3). Hence, as we found in [29], we expect relatively weak CR shocks (M0 < 5) to be substantially altered by the presence of a \ufb01nite 11 \fupstream Pc. 3.2 Wave Drift and Heating As shown in earlier works (e.g., [22,30] and references cited therein), the CR acceleration becomes less e\ufb03cient when Alfv\u00b4 en wave drift and heating terms are included in the simulations. This behavior comes from two e\ufb00ects previously mentioned in \u00a72.1, both of which derive from the resonance interaction between CRs and Alfv\u00b4 en waves in the shock precursor. The Alfv\u00b4 en waves stimulated by CR streaming in the precursor will propagate in the upstream direction, so that the e\ufb00ective advection speed of the CRs into the subshock is reduced. 
In addition, if the energy extracted from CRs to amplify these waves is locally dissipated, the heating rate in the precursor is increased with respect to the adiabatic rate, so that gas entering the subshock is relatively hotter, and the subshock strength is accordingly reduced. The significance of these effects depends on the sonic Mach number, M0, relative to the shock Alfvénic Mach number, MA = us/vA; i.e., on the ratio of the Alfvén speed to the sound speed (see §3.4). In a parallel shock we can write M0/MA = vA/cs = √(2θ/[γg(γg − 1)]), where we introduce a convenient “Alfvén parameter” as follows, θ = EB,0/Eth,0 = (γg − 1) PB,0/Pg,0 = (γg − 1)/βp. (13) This expresses the relative upstream Alfvén and sound speeds in terms of the magnetic to thermal energy density ratio. The parameter βp is the usual “plasma β” parameter. For γg = 5/3, βp = 2/(3θ) and θ = (5/9)(M0/MA)². We emphasize for clarity that the present simulations are of parallel shocks, so that the direct dynamical role of the magnetic field has been neglected. Observed or estimated values are typically θ ∼ 0.1 for intracluster media and θ ∼ 1 for the interstellar medium of our Galaxy (e.g., [5,12]). So we consider 0.1 ≤ θ ≤ 1, and we will provide comparison to the weak-field limit, θ = 0. Fig. 2 concisely illustrates the importance of Alfvén wave drift and heating effects on a Mach 10 shock. One can see that the postshock CR pressure, for example, decreases by more than a factor of two when the sonic and Alfvén Mach numbers become comparable. Most of the model results we show here used θ = 0.1. For the Mach 10 shocks illustrated in Fig. 2, the associated wave terms have reduced the asymptotic CR pressure by about 30% with θ = 0.1 compared to the shock with no such terms included. 3.3 Grid Resolution and Convergence According to our previous studies [23], the spatial grid resolution should be much finer than the diffusion length of the lowest-energy particles near the injection momenta, i.e., Δx ≲ 0.1 ld(pinj). Kang & Jones (2006), however, showed that the spherical, comoving CRASH code achieves good numerical convergence even when Δx > ld(pinj). This was because, in the execution of that code, the gas subshock remains consistently inside the same comoving grid zone. In order to gain this benefit for our present simulations, we have modified our plane-parallel CRASH code so that the shock is again forced to remain inside the same refined grid zone by regularly redefining the underlying Eulerian grid. The simulations employ eight levels of refinement with the grid spacing reduced by an integer factor of two between refinement levels. The spatial grid resolution on the coarsest, base grid is Δx̃0 = 2 × 10⁻³, while on the finest, 8th, grid it is Δx̃8 = 7.8 × 10⁻⁶. With κ̃n = 10⁻⁶, the diffusion length for injection momenta p1 ≈ 10⁻² becomes l̃d ≈ 10⁻⁸ for models with Bohm-like diffusion, κBL. Although Δx̃8 > l̃d(pinj), we confirmed that the new plane-parallel CRASH code also achieves good numerical convergence in the simulations presented here.
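As a quick cross-check of the Alfvén parameter defined in Eq. (13) and of the refinement hierarchy just described, the relations reduce to a few lines of arithmetic. The sketch below is only illustrative; the function names are ours, and the numbers simply reproduce values quoted in the text (θ = 0.1, eight refinement levels starting from Δx̃0 = 2 × 10⁻³).

```python
import numpy as np

GAMMA_G = 5.0 / 3.0

def alfven_mach_from_theta(M0, theta):
    """M0/MA = vA/cs = sqrt(2*theta / (gamma_g*(gamma_g - 1))), from Eq. (13)."""
    ratio = np.sqrt(2.0 * theta / (GAMMA_G * (GAMMA_G - 1.0)))
    return M0 / ratio                      # Alfvenic Mach number MA

def plasma_beta(theta):
    """beta_p = 2 / (3*theta) for gamma_g = 5/3."""
    return 2.0 / (3.0 * theta)

def refined_dx(dx_base=2e-3, levels=8):
    """Grid spacing halved at each refinement level."""
    return dx_base / 2**levels

print(alfven_mach_from_theta(10.0, 0.1))   # MA ~ 23.6 for a M0 = 10, theta = 0.1 shock
print(plasma_beta(0.1))                    # beta_p ~ 6.7
print(refined_dx())                        # ~ 7.8e-6, the finest spacing quoted above
```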
This improvement in shock tracking enables us to extend these simulations to CR momenta several orders of magnitude greater than those discussed in [29]. When solving the diffusion-convection equation, we used 230−280 uniformly spaced logarithmic momentum bins in the interval y = ln p = [ln p1, ln p2]. 3.4 Results We show in Fig. 3 the time evolution of a M0 = 10 shock with T0 = 10⁶ K, κ̃BL = 10⁻⁶ p, and θ = 0.1. The wave amplitude parameter in the thermal leakage model was assumed to be εB = 0.2, unless stated otherwise. The lower left panel follows the evolution of the volume-integrated CR distribution function relative to the total number of particles (mostly in the thermal population) that have passed through the shock, i.e., G(p)/(n0 us,0 t), where G(p) = ∫ g(x, p) dx. As the CR pressure increases in the precursor in response to thermal leakage injection at the subshock and subsequent Fermi acceleration, the subshock weakens. The injection process is self-regulated in such a way that the injection rate reaches and stays at a nearly stable value after a quick initial adjustment. Consequently, the postshock CR pressure reaches an approximate time-asymptotic value once a balance is established between fresh injection/acceleration and advection/diffusion of the CR particles away from the shock. The CR pressure is calculated as Pc = (4π/3) mp c² ∫_{p1}^{p2} g(p) [p/√(p²+1)] d ln p, (14) so we define D(p) ≡ g(p) p/√(p²+1) as a ‘partial pressure function’. The upper left panel of Fig. 4 shows the evolution of D(p, xs) at the subshock for the model shown in Fig. 3. Since D(p) stretches self-similarly in momentum space, we define a new momentum similarity variable as Z ≡ ln(p/p1)/ln[pmax(t)/p1]. (15) The lowest momentum p1 becomes constant after the subshock structure becomes steady. The momentum at which the numerical values of D(p) peak is chosen as pmax(t), and it is similar to what is estimated approximately by applying the test-particle theory in §2.4. For the model shown in Fig. 3, p1 ≈ 0.01 and pmax ≈ 7.27 × 10⁴ t̃. We then define another ‘partial pressure function’, F(Z) ≡ g(Z) [p/√(p²+1)] ln[pmax(t)/p1] = D(Z) ln[pmax(t)/p1]. (16) Its time evolution is shown as a function of Z in the upper right panel of Fig. 4. The plot demonstrates that the evolution of F(Z) becomes self-similar for t̃ ≥ 2. Since Pc ∝ ∫_{p1} D(p) d ln p ∝ ∫_0 F(Z) dZ, the areas under the curves of D(p) or F(Z) in Fig. 4 represent the CR pressure at the shock. So the self-similar evolution of F(Z) implies the constancy of Pc,2. In this case P̃c,2 ≈ 0.31 for t̃ ≳ 2. From that time forward the spatial distribution of Pc expands approximately linearly with time, as anticipated in §2.4. This demonstrates that the growth of a precursor and the shock structure proceed approximately in a self-similar way once the postshock CR pressure becomes constant. In the lower panels of Fig. 4 we also show the evolution of D(p) and F(Z) for another model with M0 = 50, T0 = 10⁴ K, κ̃B = 0.01 p²/√(p²+1), and θ = 0.1. For this model p1 ≈ 2.0 × 10⁻³ and pmax ≈ 10 t̃. In this case, the CR pressure is mostly dominated by relativistic particles. Fig. 5 compares the CR distributions at t̃ = 10 for two sets of models spanning a range of sonic Mach numbers for different diffusion model choices.
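The ‘partial pressure’ bookkeeping of Eqs. (14)-(16) reduces to a few array operations. The sketch below is a minimal illustration with our own names and code-unit normalization, not the authors' implementation; `p` and `g` stand for a momentum grid and the corresponding g(p) = p⁴ f(p) at the subshock.

```python
import numpy as np

def partial_pressure(p, g):
    """D(p) = g(p) * p / sqrt(p**2 + 1), the integrand of Eq. (14) in d ln p."""
    return g * p / np.sqrt(p**2 + 1.0)

def similarity_variable(p, p1, pmax):
    """Z = ln(p/p1) / ln(pmax/p1), Eq. (15); Z runs from 0 at p1 to 1 at pmax."""
    return np.log(p / p1) / np.log(pmax / p1)

def rescaled_partial_pressure(p, g, p1, pmax):
    """F(Z) = D(p) * ln(pmax/p1), Eq. (16); areas under D(ln p) and F(Z) agree."""
    return partial_pressure(p, g) * np.log(pmax / p1)

def cr_pressure(p, g, prefactor=1.0):
    """Pc proportional to integral of D(p) d ln p (Eq. 14).

    `prefactor` stands in for (4*pi/3) * m_p * c**2 in whatever code units are used.
    """
    return prefactor * np.trapz(partial_pressure(p, g), np.log(p))
```

Plotting D against ln p, or F against Z, for successive outputs reproduces the self-similar stretching described above.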
For the simulations represented on the left, the di\ufb00usion coe\ufb03cient is Bohm-like, with \u02dc \u03baBL = 10\u22126p and T0 = 106K. The results on the right come from a Bohm di\ufb00usion model with \u02dc \u03baB = 0.01p2/\u221ap2 + 1 and T0 = 104 K. For all models in this \ufb01gure, \u03b8 = 0.1 and \u01eb = 0.2. The top panels show the CR distributions 14 \fat the shock, g(p, xs) = p4f(p, xs), while the middle panels show the spatially integrated G(p). The slopes of the integrated spectra, q = \u2212d(ln Gp)/d ln p + 4, are shown in the bottom panels. For strong shocks with M0 \u226510, both p4f(p, xs) and G(p) exhibit concave curvature at high momentum familiar from previous studies e.g., [9,36,4]. The hardened high momentum slopes re\ufb02ect the fact that higher momentum CRs have longer mean free scattering paths, so encounter an increased compression across the shock precursor and a greater velocity jump. It is interesting to note in Fig. 5 that the spatially integrated spectra in the stronger shocks show more obvious hardening at high momenta than do the spectra measured at the subshock. This is another consequence of the fact that ld(p) increases with momentum, so that the CR spectrum hardens considerably as one measures it further upstream of the subshock. CRs escaping upstream from such a shock would be dominated by the highest momentum particles available. Fig. 6 compares the evolution of shock properties for the same set of models whose CR spectra are shown in Fig. 5. The evolution of density increase through the precursor, \u03c3p = \u03c11/\u03c10, and the total compression, \u03c3t = \u03c12/\u03c10, are shown in each top panel. Compression through the subshock itself can be found through the ratio \u03c3s = \u03c12/\u03c11. We will discuss shock compression results in more detail below. The middle panels of Fig. 6 show evolution of the postshock gas and CR pressures normalized to the ram pressure at the initial shock. After fast evolution at the start, these ratios approach constant values for \u02dc t > \u223c1. The bottom panels monitor the thermal leakage injection fraction, \u03be(t). With the adopted value of \u01ebB = 0.2, the time asymptotic value of this fraction is \u03be \u223c10\u22124\u221210\u22123 with the higher values for stronger shocks. We note that the behavior of \u03be during the early phase (\u02dc t < 1) is controlled mainly by the injection suppression scheme used to prevent start up problems. If the di\ufb00usion of particles with lowest momenta near p1 is better resolved with a \ufb01ner grid spacing, one may observe some initial reduction of \u03be(t) as the subshock weakens in time (e.g., Fig. 6 of [26]). The various plots show that postshock properties approach time-asymptotic states that depend on the shock Mach number. Generally, as is well known, the shock compression, the normalized postshock CR pressure and the thermal leakage injection fraction increase with shock Mach number. Nonlinear e\ufb00ects, illustrated here by increased shock compression and diminished postshock gas pressure, are relatively unimportant for Mach numbers less than 10 or so. At high Mach number upstream gas temperature is relatively unimportant. 
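For completeness, the slope plotted in the bottom panels, q = −d ln G(p)/d ln p + 4, can be extracted from a tabulated G(p) with a simple finite difference. The sketch below is ours and assumes arrays `p` and `G` saved from a snapshot.

```python
import numpy as np

def spectral_slope(p, G):
    """q(p) = 4 - d ln G / d ln p, so the underlying f(p) locally behaves as p**(-q)."""
    lnp, lnG = np.log(p), np.log(G)
    return 4.0 - np.gradient(lnG, lnp)

# Usage: q = spectral_slope(p, G). For a strongly modified shock, q(p) should
# flatten (harden) toward high momenta, reflecting the concave curvature noted above.
```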
In addition, noting that the simulations in the left and right panels employed di\ufb00erent models for the low momentum di\ufb00usion coe\ufb03cient, the similarity between the analogous left and right plots illustrate the point made earlier that asymptotic dynamical behaviors are insensitive to 15 \fdetails in the di\ufb00usion coe\ufb03cient. The Mach number dependences of several important shock dynamical properties are illustrated in Fig. 7. These include the time-asymptotic values of \u03c3t, \u03c3p, and the postshock CR and gas pressures, both normalized by the initial momentum \ufb02ux, \u03c10u2 s,0. Dotted lines provide for comparison the \u03b3g = 5/3 gasdynamic compression ratio and postshock pressure. The strong shock limits of the gasdynamic compression ratio and normalized postshock pressure are 4 and 0.75, respectively. The two sets of models shown in Figs. 5-6 are shown, along with an additional set of models with T0 = 106K, \u03baBL = 10\u22126p, \u03b8 = 0.1, and \u01ebB = 0.25. The normalized postshock CR pressure increases with M0, asymptoting towards \u02dc Pc,2 \u223c0.5 at high Mach numbers. These values are reduced \u223c30% from the analogous results presented in [29], which is consistent with the comparison for the Mach 10 shock shown in Fig. 2. The normalized gas pressure simultaneously drops in response to increases in \u02dc Pc, as one would anticipate. It is notable that it actually falls well below 0.25, where one might naively expect it to asymptote in order to maintain a constant total postshock pressure with respect to \u03c10u2 s,0. Instead it falls as \u02dc Pg,2 = Pg,2/(\u03c10u2 s,0) \u223c0.4(M0/10)\u22120.4, without \ufb02attening at high Mach numbers. As we shall see below, this trend is consistent with expected evolution of gas in the precursor. Thus the total postshock pressure is less than that of a gasdynamic shock of the same initial Mach number, M0, despite a softening of the equation of state coming from the CRs. For the \u01ebB = 0.2 cases shown in Fig. 7 the precursor, subshock, and total compression ratios can be approximated by \u03c3p \u223c1.5(M0/10)0.29, \u03c3s \u223c 3.3(M0/10)0.04, and \u03c3t = \u03c3p\u03c3s \u223c4.9(M0/10)0.33, respectively, for M0 > \u223c5. For these models we also \ufb01nd the subshock Mach number ranges 3 < \u223cM1(= u1/cs,1) < \u223c4 and depends on total Mach number roughly as M1 \u221dM0.1 0 . Since the Rankine-Hugoniot-derived gas shock compression in this Mach number range scales as \u03c3s \u221dM0.35 1 , so \u03c3s \u221d(M0.1 0 )0.35 is consistent with the above relation. Although weak, the variation of the subshock strength with full shock sonic Mach number is important to an understanding of the simulated \ufb02ow behaviors seen in our shock precursors. We note that the subshock behavior seen here in highly modi\ufb01ed strong shocks, particularly that it generally evolves towards Mach numbers close to three, is consistent with previous analytic and numerical results, although in some other studies the subshock strength is completely independent of M0 (e.g., [7,9,36]). The subshock strength in our simulations is determined by complex nonlinear feedback involving the thermal leakage injection process. As noted for the results in Fig. 6 the injection rate, \u03be, generally increases with M0. This simultaneously increases Pc, providing extra Alfv\u00b4 enic heating in the precursor, while cooling gas entering the subshock, as energy is transferred to low energy CRs. 
It should not be surprising that the balance of these feedback processes is not entirely independent of total Mach number. We comment in passing that the compression values 16 \fshown in Fig. 7 depend slightly on \u01ebB in the sense that larger values of \u01ebB result in a little higher CR injection rate and greater Pc. Compression through the shock precursor has been discussed by several previous authors (e.g., [7,8,33,26]). We outline the essential physics to facilitate an understanding of our results. In a steady \ufb02ow (\u2202/\u2202t = 0) the modi\ufb01ed entropy equation (4) can be integrated across the precursor with \u03b3g = 5/3 to give \u03c3p = \u03c11 \u03c10 = \"\u0012M1 M0 \u00132 + 2 3M2 1 I #\u22123/8 , (17) where M0 = us,0/cs,0 and I = 5 3u3 0\u03c11/3 0 Z |W| \u03c12/3 dx. (18) The same relation can be derived from a Lagrangian perspective applying the second law of thermodynamics to a parcel of gas \ufb02owing through the precursor, not necessarily in a steady state. For these simulations |W| = |vA\u2202Pc/\u2202x|. Given the previously mentioned result, Pc,2 = Pc,1 \u223c0.5\u03c10u2 s we can estimate that I \u2248vA/us \u22481.34 \u221a \u03b8/M0. Eq. (17) is then similar to an expression given in [7]. When Alfv\u00b4 en wave dissipation is small, so that \u03b8 << 5/(4M2 0), Eq. (17) gives the result \u03c3p \u223cM3/4 0 , appropriate for adiabatic compression and consistent with behaviors found in a number of previous analytic and numerical studies (e.g., [7,8,33,26]). For the simulations represented in Figs. 3-7 the opposite limit actually applies, since \u03b8 = 0.1 > 5/(4M2 0) whenever M0 > 3.5. Then the strong Alfv\u00b4 en dissipation limit of Eq. (17) predicts \u03c3p \u223c(M0 \u221a \u03b8)3/8/M3/4 1 . Substituting our observed relation between subshock Mach number and total Mach number, M1 \u221dM0.1 0 , we establish for \ufb01xed \u03b8 an expected behavior, \u03c3p \u221dM0.3 0 , very close to what is observed. If instead of \u03b8 we had parameterized the Alfv\u00b4 en wave in\ufb02uence in terms of the Alfv\u00b4 enic Mach number, the analogous behavior of Eq. (17) would have been \u03c3p \u221d1/(M3/8 A M3/4 1 ). Then for \ufb01xed Alfv\u00b4 enic Mach number the precursor compression would vary only with the subshock Mach number, which, has only a weak dependence on the full shock Mach number. That agrees with results presented by [7], for example. We mention that the precursor/subshock compression properties also explain the observed inverse dependence of postshock gas pressure on initial Mach number. In fact, in a steady \ufb02ow it is easy to show that Pg,2 \u03c10u2 0 = 2 (\u03b3g + 1) 1 \u2212(\u03b3g \u22121) 2\u03b3gM2 1 ! 1 \u03c3p \u22483 4 1 \u03c3p . (19) Thus we expect an inverse relation between precursor compression and nor17 \fmalized postshock gas pressure, close to what is observed in our simulations. Finally, Fig. 8 shows time-asymptotic values of the CR energy ratio, \u03a6(M0), for models with di\ufb00erent \u03b8 (top panel) and di\ufb00erent \u01ebB and pre-existing CRs (bottom panel). Models are shown with both T0 = 106K and T0 = 104K. This \ufb01gure demonstrates that the CR acceleration depends on the speci\ufb01c model parameters considered here. For models with \u03b8 = 0.1 and T = 106 K, the acceleration e\ufb03ciency is reduced by up to \u223c50 % in comparison to models without Alfv\u00b4 en wave drift and dissipation (\u03b8 = 0). 
For larger Alfv\u00b4 en speeds with \u03b8 \u223c1, the acceleration e\ufb03ciency is reduced even more signi\ufb01cantly as a consequence of strong preshock Alfv\u00b4 enic heating and wave advection. On the other hand, for models with T0 = 104K and M0 > 20 the reduction factor is less than 15 % for \u03b8 \u223c0.5. As shown in previous studies [26,27], larger values of \u01ebB lead to higher thermal leakage injection and so more CR energy. Also the presence of preexisting CRs facilitates thermal leakage injection, leading to more injected CR particles and higher acceleration e\ufb03ciency. Thus, an accurate estimate of the CR energy generated at quasi-parallel shock requires detail knowledge of complex physical processes involved. Fortunately, all theses dependences become gradually weaker at higher Mach numbers, and \u03a6 tends to approach 0.5 for M0 > 30. It seems likely that this asymptotic e\ufb03ciency would also apply for su\ufb03ciently large \u03b8. For low Mach number shocks, it is not yet possible to make simple, model-independent e\ufb03ciency predictions. 4" + }, + { + "url": "http://arxiv.org/abs/0704.1521v1", + "title": "Cosmological Shock Waves in the Large Scale Structure of the Universe: Non-gravitational Effects", + "abstract": "Cosmological shock waves result from supersonic flow motions induced by\nhierarchical clustering of nonlinear structures in the universe. These shocks\ngovern the nature of cosmic plasma through thermalization of gas and\nacceleration of nonthermal, cosmic-ray (CR) particles. We study the statistics\nand energetics of shocks formed in cosmological simulations of a concordance\n$\\Lambda$CDM universe, with a special emphasis on the effects of\nnon-gravitational processes such as radiative cooling, photoionization/heating,\nand galactic superwind feedbacks. Adopting an improved model for gas\nthermalization and CR acceleration efficiencies based on nonlinear diffusive\nshock acceleration calculations, we then estimate the gas thermal energy and\nthe CR energy dissipated at shocks through the history of the universe. Since\nshocks can serve as sites for generation of vorticity, we also examine the\nvorticity that should have been generated mostly at curved shocks in\ncosmological simulations. We find that the dynamics and energetics of shocks\nare governed primarily by the gravity of matter, so other non-gravitational\nprocesses do not affect significantly the global energy dissipation and\nvorticity generation at cosmological shocks. Our results reinforce scenarios in\nwhich the intracluster medium and warm-hot intergalactic medium contain\nenergetically significant populations of nonthermal particles and turbulent\nflow motions.", + "authors": "Hyesung Kang, Dongsu Ryu, Renyue Cen, J. P. Ostriker", + "published": "2007-04-12", + "updated": "2007-04-12", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "Introduction Astrophysical plasmas consist of both thermal particles and nonthermal, cosmic-ray (CR) particles that are closely coupled with permeating magnetic \ufb01elds and underlying turbulent \ufb02ows. In the interstellar medium (ISM) of our Galaxy, for example, an approximate energy equipartition among di\ufb00erent components seems to have been established, i.e., \u03b5therm \u223c\u03b5CR \u223c\u03b5B \u223c\u03b5turb \u223c1 eV cm\u22123 (Longair 1994). Understanding the complex network of physical interactions among these components constitutes one of fundamental problems in astrophysics. 
There is substantial observational evidence for the presence of nonthermal particles and magnetic \ufb01elds in the large scale structure of the universe. A fair fraction of X-ray clusters have been observed in di\ufb00use radio synchrotron emission, indicating the presence of GeV CR electrons and \u00b5G \ufb01elds in the intracluster medium (ICM) (Giovannini & Feretti 2000). Observations in EUV and hard X-ray have shown that some clusters possess excess radiation compared to what is expected from the hot, thermal X-ray emitting ICM, most likely produced by the inverse-Compton scattering of cosmic background radiation (CBR) photons by CR electrons (Fusco-Femiano et al. 1999; Bowyer et al. 1999; Bergh\u00a8 ofer et al. 2000). Assuming energy equipartition between CR electrons and magnetic \ufb01elds, \u03b5CRe \u223c\u03b5B \u223c 0.01\u22120.1eV cm\u22123 \u223c10\u22123\u221210\u22122\u03b5therm can be inferred in typical radio halos (Govoni & Feretti 2004). If some of those CR electrons have been energized at shocks and/or by turbulence, the same process should have produced a greater CR proton population. Considering the ratio of proton to electron numbers, K \u223c100, for Galactic CRs (Beck & Kraus 2005), one can expect \u03b5CRp \u223c0.01 \u22120.1\u03b5therm in radio halos. However, CR protons in the ICM have yet to be con\ufb01rmed by the observation of \u03b3-ray photons produced by inelastic collisions between CR protons and thermal protons (Reimer et al. 2003). Magnetic \ufb01elds have been also directly observed with Faraday rotation measure (RM). In clusters of galaxies strong \ufb01elds of a few \u00b5G strength extending from core to 500 kpc or further were inferred from RM observations (Clarke et al. 2001; Clarke 2004). An upper limit of \u2272\u00b5G was imposed on the magnetic \ufb01eld strength in \ufb01laments and sheets, based the observed limit of the RMs of quasars outside clusters (Kronberg 1994; Ryu et al. 1998). Studies on turbulence and turbulent magnetic \ufb01elds in the large scale structure of the universe have been recently launched too. XMM-Newton X-ray observations of the Coma cluster, which seems to be in a post-merger stage, were analyzed in details to extract clues on turbulence in the ICM (Schuecker et al. 2004). By analyzing pressure \ufb02uctuations, it was shown that the turbulence is likely subsonic and consistent with Kolmogoro\ufb00turbulence. RM maps of clusters have been analyzed to \ufb01nd the power spectrum of turbulent magnetic \ufb01elds in a few clusters (Murgia et al. 2004; Vogt & En\u00dflin 2005). While Murgia et al. (2004) \f\u2013 3 \u2013 reported a spectrum shallower than the Kolmogoro\ufb00spectrum, Vogt & En\u00dflin (2005) argued that the spectrum could be consistent with the Kolmogoro\ufb00spectrum if it is bended at a few kpc scale. These studies suggest that as in the ISM, turbulence does exist in the ICM and may constitute an energetically non-negligible component. In galaxy cluster environments there are several possible sources of CRs, magnetic \ufb01elds, and turbulence: jets from active galaxies (Kronberg et al. 2004; Li et al. 2006), termination shocks of galactic winds driven by supernova explosions (V\u00a8 ok & Atoyan 1999), merger shocks (Sarazin 1999; Gabici & Blasi 2003; Fujita et al. 2003), structure formation shocks (Loeb & Waxmann 2000; Miniati et al. 2001a,b), and motions of subcluster clumps and galaxies (Subramanian et al. 2006). 
All of them have a potential to inject a similar amount of energies, i.e., E \u223c1061 \u22121062 ergs into the ICM. Here we focus on shock scenarios. Astrophysical shocks are collisionless shocks that form in tenuous cosmic plasmas via collective electromagnetic interactions between gas particles and magnetic \ufb01elds. They play key roles in governing the nature of cosmic plasmas: i.e., 1) shocks convert a part of the kinetic energy of bulk \ufb02ow motions into thermal energy, 2) shocks accelerate CRs by di\ufb00usive shock acceleration (DSA) (Blandford & Ostriker 1978; Blandford & Eichler 1987; Malkov & Drury 2001), and amplify magnetic \ufb01elds by streaming CRs (Bell 1978; Lucek & Bell 2000), 3) shocks generate magnetic \ufb01elds via the Biermann battery mechanism (Biermann 1950; Kulsrud et al. 1997) and the Weibel instability (Weibel 1959; Medvedev et al. 2006), and 4) curved shocks generate vorticity and ensuing turbulent \ufb02ows (Binney 1974; Davies & Widrow 2000). In Ryu et al. (2003) (Paper I), the properties of cosmological shock waves in the intergalactic medium (IGM) and the energy dissipations into thermal and nonthermal components at those shocks were studied in a high-resolution, adiabatic (non-radiative), hydrodynamic simulation of a \u039bCDM universe. They found that internal shocks with low Mach numbers of M \u22724, which formed in the hot, previously shocked gas inside nonlinear structures, are responsible for most of the shock energy dissipation. Adopting a nonlinear DSA model for CR protons, it was shown that about 1/2 of the gas thermal energy dissipated at cosmological shocks through the history of the universe could be stored as CRs. In a recent study, Pfrommer et al. (2006) identi\ufb01ed shocks and analyzed the statistics in smoothed particle hydrodynamic (SPH) simulations of a \u039bCDM universe, and found that their results are in good agreement with those of Paper I. While internal shocks with lower Mach numbers are energetically dominant, external accretions shocks with higher Mach numbers can serve as possible acceleration sites for high energy cosmic rays (Kang et al. 1996, 1997; Ostrowski & Siemieniec-Ozieblo 2002). It was shown that CR ions could be accelerated up \f\u2013 4 \u2013 to \u223cZ \u00d7 1019eV at cosmological shocks, where Z is the charge of ions (Inoue et al. 2007). Ryu et al. (2007) (Paper II) analyzed the distribution of vorticity, which should have been generated mostly at cosmological shock waves, in the same simulation of a \u039bCDM universe as in Paper I, and studied its implication on turbulence and turbulence dynamo. Inside nonlinear structures, vorticity was found to be large enough that the turn-over time, which is de\ufb01ned as the inverse of vorticity, is shorter than the age of the universe. Based on it Ryu et al. (2007) argued that turbulence should have been developed in those structures and estimated the strength of the magnetic \ufb01eld grown by the turbulence. In this paper, we study cosmological shock waves in a new set of hydrodynamic simulations of large structure formation in a concordance \u039bCDM universe: an adiabatic (nonradiative) simulation which is similar to that considered in Paper I, and two additional simulations which include various non-gravitational processes (see the next section for details). As in Papers I and II, the properties of cosmological shock waves are analyzed, the energy dissipations to gas thermal energy and CR energy are evaluated, and the vorticity distribution is analyzed. 
We then compare the results for the three simulations to highlight the e\ufb00ects of non-gravitational processes on the properties of shocks and their roles on the cosmic plasmas in the large scale structure of the universe. Simulations are described in \u00a72. The main results of shock identi\ufb01cation and properties, energy dissipations, and vorticity distribution are described in \u00a73, \u00a74, and \u00a75, respectively. Summary and discussion are followed in \u00a76. 2. Simulations The results reported here are based on the simulations previously presented in Cen & Ostriker (2006). The simulations included radiative processes of heating/cooling, and the two simulations with and without galactic superwind (GSW) feedbacks were compared in that paper. Here an additional adiabatic (non-radiative) simulation with otherwise the same setup was performed. Hereafter these three simulations are referred as \u201cAdiabatic\u201d, \u201cNO GSW\u201d, and \u201cGSW\u201d simulations, respectively. Speci\ufb01cally, the WMAP1-normalized \u039bCDM cosmology was employed with the following parameters: \u2126b = 0.048, \u2126m = 0.31, \u2126\u039b = 0.69, h \u2261H0/(100 km/s/Mpc) = 0.69, \u03c38 = 0.89, and n = 0.97. A cubic box of comoving size 85 h\u22121Mpc was simulated using 10243 grid zones for gas and gravity and 5123 particles for dark matter. It allows a uniform spatial resolution of \u2206l = 83 h\u22121kpc. In Papers I and II, an adiabatic simulation in a cubic box of comoving size 100 h\u22121Mpc with 10243 grid zones and 5123 particles, employing slightly di\ufb00erent cosmological parameters, was used. The \f\u2013 5 \u2013 simulations were performed using a PM/Eulerian hydrodynamic cosmology code (Ryu et al. 1993). Detailed descriptions for input physical ingredients such as non-equilibrium ionization/cooling, photoionization/heating, star formation, and feedback processes can be found in earlier papers (Cen et al. 2003; Cen & Ostriker 2006). Feedbacks from star formation were treated in three forms: ionizing UV photons, GSWs, and metal enrichment. GSWs were meant to represent cumulative supernova explosions, and modeled as out\ufb02ows of several hundred km s\u22121. The input of GSW energy for a given amount of star formation was determined by matching the out\ufb02ow velocities computed for star-burst galaxies in the simulation with those observed in the real world (Pettini et al. 2002)(see also Cen & Ostriker 2006, for details). Figure 1 shows the gas mass distribution in the gas density-temperature plane, fm(\u03c1gas, T), and the gas mass fraction as a function of gas temperature, fm(T), at z = 0 for the three simulations. The distributions are quite di\ufb00erent, depending primarily on the inclusion of radiative cooling and photoionization/heating. GSW feedbacks increase the fraction of the WHIM with 105 < T < 107K, and at the same time a\ufb00ect the distribution of the warm/di\ufb00use gas with T < 105. 3. Properties of Cosmological Shock Waves We start to describe cosmological shocks by brie\ufb01ng the procedure by which the shocks were identi\ufb01ed in simulation data. The details can be found in Paper I. A zone was tagged as a shock zone currently experiencing shock dissipation, whenever the following three criteria are met: 1) the gradients of gas temperature and entropy have the same sign, 2) the local \ufb02ow is converging with \u20d7 \u2207\u00b7 \u20d7 v < 0, and 3) |\u2206log T| \u22650.11 corresponding to the temperature jump of a shock with M \u22651.3. 
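The three tagging criteria just listed translate directly into array operations. The following is a minimal one-dimensional sketch rather than the actual analysis code: the real procedure is three-dimensional and sweeps each coordinate direction, and the entropy proxy S = T/ρ^(γ−1), the default threshold, and the array names are our own simplifications.

```python
import numpy as np

GAMMA = 5.0 / 3.0

def tag_shock_zones_1d(T, rho, v, dlogT_min=0.11):
    """Tag zones satisfying the three criteria described above (1-D sweep).

    T, rho, v : 1-D arrays of temperature, gas density, and velocity along the sweep.
    Returns a boolean array marking candidate shock zones.
    """
    S = T / rho**(GAMMA - 1.0)            # simple entropy proxy (our choice)
    dT = np.gradient(T)
    dS = np.gradient(S)
    div_v = np.gradient(v)                # 1-D stand-in for the full velocity divergence
    dlogT = np.abs(np.gradient(np.log10(T)))

    same_sign = dT * dS > 0.0             # criterion 1: T and S gradients share a sign
    converging = div_v < 0.0              # criterion 2: locally converging flow
    strong_enough = dlogT >= dlogT_min    # criterion 3: jump equivalent to M >= 1.3
    return same_sign & converging & strong_enough
```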
Typically a shock is represented by a jump spread over 2 \u22123 tagged zones. Hence, a shock center was identi\ufb01ed within the tagged zones, where \u20d7 \u2207\u00b7\u20d7 v is minimum, and this center was labeled as part of a shock surface. The Mach number of the shock center, M, was calculated from the temperature jump across the entire shock zones. Finally to avoid confusion from complex \ufb02ow patterns and shock surface topologies associated with very weak shocks, only those portions of shock surfaces with M \u22651.5 were kept and used for the analysis of shocks properties. Figure 2 shows the locations of identi\ufb01ed shocks in a two-dimensional slice at z = 0 in the GSW simulation. The locations are color-coded according to shock speed. As shown before in Paper I, external accretion shocks encompass nonlinear structures and reveal, in addition \f\u2013 6 \u2013 to cluster complexes, rich topology of \ufb01lamentary and sheet-like structures in the large scale structure. Inside the nonlinear structures, there exist complex networks of internal shocks that form by infall of previously shocked gas to \ufb01laments and knots and during subclump mergers, as well as by chaotic \ufb02ow motions. The shock heated gas around clusters extends out to \u223c5 h\u22121Mpc, much further out than the observed X-ray emitting volume. In the GSW simulation, with several hundred km s\u22121 for out\ufb02ows, the GSW feedbacks a\ufb00ected most greatly the gas around groups of galaxies, while the impact on clusters with kT \u22731 keV was minimal. In Figure 3 we compare shock locations in a region around two groups with kT \u223c0.2 \u22120.3 keV in the three simulations. It demonstrates that GSW feedbacks pushed the hot gas out of groups with typical velocities of \u223c100 km s\u22121 (green points). In fact the prominent green balloons of shock surfaces around groups in Figure 2 are due to GSW feedbacks (see also Figure 4 of Cen & Ostriker 2006). In the left panels of Figure 4 we compare the surface area of identi\ufb01ed shocks, normalized by the volume of the simulation box, per logarithmic Mach number interval, dS(M)/d log M (top), and per logarithmic shock speed interval, dS(Vs)/d log Vs (bottom), at z = 0 in the three simulations. Here S and Vs are given in units of (h\u22121Mpc)\u22121 and km s\u22121. The quantity S provides a measure of shock frequency or the inverse of the mean comoving distance between shock surfaces. The distributions of dS(M)/d log M for the NO GSW and GSW simulations are similar, while that for the Adiabatic simulation is di\ufb00erent from the other two. This is mainly because the gas temperature outside nonlinear structures is lower without photoionization/heating in the Adiabatic simulation. As a result, external accretion shocks tend to have higher Mach number due to colder preshock gas. The distribution of dS(Vs)/d log Vs, on the other hand, is similar for all three simulations for Vs > 15 km s\u22121. For Vs < 15 km s\u22121, however, there are more shocks in the Adiabatic simulation (black points in Figure 3). Again this is because in the Adiabatic simulation the gas temperature is colder in void regions, and so even shocks with low speeds of Vs < 15 km s\u22121 were identi\ufb01ed in these regions. The GSW simulation shows slightly more shocks than the NO GSW simulation around Vs \u223c100 km s\u22121, because GSW feedbacks created balloon-shaped surfaces of shocks with typically those speeds (green points in Figure 3). 
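For reference, the Mach number assigned to a shock center from its temperature jump follows from inverting the Rankine-Hugoniot temperature relation, which is quadratic in M². A self-contained sketch (γ = 5/3; the function name is ours) is:

```python
import numpy as np

GAMMA = 5.0 / 3.0

def mach_from_temperature_jump(T2_over_T1, gamma=GAMMA):
    """Invert T2/T1 = [2g M^2 - (g-1)] * [(g-1) M^2 + 2] / [(g+1)^2 M^2] for M.

    The relation is quadratic in M^2, so the positive root gives M directly.
    """
    r = np.asarray(T2_over_T1, dtype=float)
    a = 2.0 * gamma * (gamma - 1.0)
    b = 4.0 * gamma - (gamma - 1.0)**2 - r * (gamma + 1.0)**2
    c = -2.0 * (gamma - 1.0)
    M2 = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    return np.sqrt(M2)

print(mach_from_temperature_jump(2.078))   # ~ 2.0, i.e., a temperature jump of about 2.08
```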
For identi\ufb01ed shocks, we calculated the incident shock kinetic energy \ufb02ux, F\u03c6 = (1/2)\u03c11V 3 s , where \u03c11 is the preshock gas density. We then calculated the kinetic energy \ufb02ux through shock surfaces, normalized by the volume of the simulation box, per logarithmic Mach number interval, dF\u03c6(M)/d log M, and per logarithmic shock speed interval, dF\u03c6(Vs)/d log Vs. In the right panels of Figure 4, we compare the \ufb02ux at z = 0 in the three simulations. Once again, there are noticeable di\ufb00erences in dF\u03c6(M)/d log M between the Adiabatic simulation and the other two simulations, which can be interpreted as the result of ignoring photoion\f\u2013 7 \u2013 ization/heating in the gas outside nonlinear structures in the Adiabatic simulation. GSW feedbacks enhance only slightly the shock kinetic energy \ufb02ux for Vs \u223c100\u2212300 km s\u22121, as can be seen in the plot of dF\u03c6(Vs)/d log Vs. Yet, the total amount of the energy \ufb02ux is expected to be quite similar for all three simulations. This implies that the overall energy dissipation at cosmological shocks is governed mainly by the gravity of matter, and that the inclusion of various non-gravitational processes such as radiative cooling, photoionization/heating, and GSW feedbacks have rather minor, local e\ufb00ects. We note that a temperature \ufb02oor of T\ufb02oor = TCBR was used for the three simulations in this work, while T\ufb02oor = 104 K was set in paper I. It was because in Paper I only an adiabatic simulation was considered and the 104 K temperature \ufb02oor was enforced to mimic the e\ufb00ect of photoionization/heating on the IGM. However we found that when the same temperature \ufb02oor is enforced, the statistics of the current Adiabatic simulation agree excellently with those of Paper I. Speci\ufb01cally, the shock frequency and kinetic energy \ufb02ux, dS(M)/d log M and dF\u03c6(M)/d log M, for weak shocks with 1.5 \u2264M \u22723 are a bit higher in the current Adiabatic simulation, because of higher spatial resolution. But the total kinetic energy \ufb02ux through shock surfaces, F\u03c6(M > 1.5), agrees within a few percent. On the other hand, In Paper I we were able to reasonably distinguish external and internal shocks according to the preshock temperature, i.e., external shocks if T1 \u2264T\ufb02oor and internal shocks if T1 > T\ufb02oor. We no longer made such distinction in this work, since the preshock temperature alone cannot tell us whether the preshock gas is inside nonlinear structures or not in the simulations with radiative cooling. 4. Energy dissipation by Cosmological Shock Waves The CR injection and acceleration rates at shocks depend in general upon the shock Mach number, \ufb01eld obliquity angle, and the strength of the Alfv\u00b4 en turbulence responsible for scattering. At quasi-parallel shocks, in which the mean magnetic \ufb01eld is parallel to the shock normal direction, small anisotropy in the particle velocity distribution in the local \ufb02uid frame causes some particles in the high energy tail of the Maxwellian distribution to stream upstream (Giacalone et al. 1992). The streaming motions of the high energy particles against the background \ufb02uid generate strong MHD Alfv\u00b4 en waves upstream of the shock, which in turn scatter particles and amplify magnetic \ufb01elds (Bell 1978; Lucek & Bell 2000). The scattered particles can then be accelerated further to higher energies via Fermi \ufb01rst order process (Malkov & Drury 2001). 
These processes, i.e., leakage of suprathermal particles into CRs, self-excitation of Alfv\u00b4 en waves, ampli\ufb01cation of magnetic \ufb01elds, and further acceleration of CRs, are all integral parts of collisionless shock formation in astrophysical plasmas. It \f\u2013 8 \u2013 was shown that at strong quasi-parallel shocks, 10\u22124 \u221210\u22123 of the incoming particles can be injected into the CR population, up to 60% of the shock kinetic energy can be transferred into CR ions, and at the same time substantial nonlinear feedbacks are exerted to the underlying \ufb02ow (Berezhko et al. 1995; Kang & Jones 2005). At perpendicular shocks with weakly perturbed magnetic \ufb01elds, on the other hand, particles gain energy mainly by drifting along the shock surface in the \u20d7 v \u00d7 \u20d7 B electric \ufb01eld. Such drift acceleration can be much more e\ufb03cient than the acceleration at parallel shocks (Jokipii 1987; Kang et al. 1997; Ostrowski & Siemieniec-Ozieblo 2002). But the particle injection into the acceleration process is expected to be ine\ufb03cient at perpendicular shocks, since the transport of particles normal to the average \ufb01eld direction is suppressed (Ellison et al. 1995). However, Giacalone (2005) showed that the injection problem at perpendicular shocks can be alleviated substantially in the presence of fully turbulent \ufb01elds owing to \ufb01eld line meandering. As in Paper I, the gas thermalization and CR acceleration e\ufb03ciencies are de\ufb01ned as \u03b4(M) \u2261Fth/F\u03c6 and \u03b7(M) \u2261FCR/F\u03c6, respectively, where Fth is the thermal energy \ufb02ux generated and FCR is the CR energy \ufb02ux accelerated at shocks. We note that for gasdynamical shocks without CRs, the gas thermalization e\ufb03ciency can be calculated from the Rankine-Hugoniot jump condition, as follows: \u03b40(M) = \u0014 eth,2 \u2212eth,1 \u0012\u03c12 \u03c11 \u0013\u03b3\u0015 v2 \u001e \u00121 2\u03c11v2 1 \u0013 , (1) where the subscripts 1 and 2 stand for preshock and postshock regions, respectively. The second term inside the brackets subtracts the e\ufb00ect of adiabatic compression occurred at a shock too, not just the thermal energy \ufb02ux entering the shock, namely, eth,1v1. At CR modi\ufb01ed shocks, however, the gas thermalization e\ufb03ciency can be much smaller than \u03b40(M) for strong shocks with large M, since a signi\ufb01cant fraction of the shock kinetic energy can be transferred to CRs. The gas thermalization and CR acceleration e\ufb03ciencies were estimated using the results of DSA simulations of quasi-parallel shocks with Bohm di\ufb00usion coe\ufb03cient, self-consistent treatments of thermal leakage injection, and Alfv\u00b4 en wave propagation (Kang & Jones 2007). The simulations were started with purely gasdynamical shocks in one-dimensional, plane-parallel geometry, and CR acceleration was followed by solving the di\ufb00usion-convection equation explicitly with very high resolution. Shocks with Vs = 150 \u22124500 km s\u22121 propagating into media of T1 = 104 \u2212106 K were considered. After a quick initial adjustment, the postshock states reach time asymptotic values and the CR modi\ufb01ed shocks evolve in an approximately self-similar way with the shock structure broadening linearly with time (refer Kang & Jones 2007, for details). 
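As a reference point for these efficiencies, the purely gasdynamic thermalization efficiency δ0(M) of Eq. (1) can be evaluated in closed form from the Rankine-Hugoniot relations. The sketch below is our own restatement in units where ρ1 = v1 = 1, not code from the paper.

```python
import numpy as np

GAMMA = 5.0 / 3.0

def delta0(M, gamma=GAMMA):
    """Gas thermalization efficiency of a pure gasdynamic shock, Eq. (1).

    Works in units where rho1 = v1 = 1, so the result is dimensionless.
    """
    M = np.asarray(M, dtype=float)
    R = (gamma + 1.0) / ((gamma - 1.0) + 2.0 / M**2)          # rho2/rho1
    P1 = 1.0 / (gamma * M**2)                                 # from cs1^2 = gamma*P1/rho1, v1 = M*cs1
    P2 = P1 * (2.0 * gamma * M**2 - (gamma - 1.0)) / (gamma + 1.0)
    eth1, eth2 = P1 / (gamma - 1.0), P2 / (gamma - 1.0)
    v2 = 1.0 / R
    # Subtract adiabatic compression (eth1 * R**gamma), then divide by 0.5*rho1*v1**3 = 0.5.
    return (eth2 - eth1 * R**gamma) * v2 / 0.5

print(delta0(2.0))    # ~ 0.15; delta0 rises toward ~0.56 for very strong shocks
```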
Given this self-similar nature of CR modi\ufb01ed shocks, we calculated time asymptotic values of \u03b4(M) and \u03b7(M) as the ratios of increases in the gas thermal and CR energies at shocks to the kinetic energy \f\u2013 9 \u2013 passed through the shocks at the termination time of the DSA simulations. As in Eq. (1), the increase of energies due to adiabatic compression was subtracted. Figure 5 shows \u03b4(M) and \u03b7(M) estimated from DSA simulations and their \ufb01ttings for the cases with and without a preexisting CR component. The \ufb01tting formulae are given in Appendix A. Without a preexisting CR component, gas thermalization is more e\ufb03cient than CR acceleration at shocks with M \u22725. However, it is likely that weak internal shocks propagate through the IGM that contains CRs accelerated previously at earlier shocks. In that case, shocks with preexisting CRs need to be considered. Since the presence of preexisting CRs is equivalent to a higher injection rate, CR acceleration is more e\ufb03cient in that case, especially at shocks with M \u22725 (Kang & Jones 2003). In the bottom panel the e\ufb03ciencies for shocks with PCR/Pg \u223c0.3 in the preshock region are shown. For comparison, \u03b40(M) for shocks without CRs is also drawn. Both \u03b4(M) and \u03b7(M) increase with Mach number, but \u03b7(M) asymptotes to \u223c0.55 while \u03b4(M) to \u223c0.30 for strong shocks with M \u227330. So about twice more energy goes into CRs, compared to for gas heating, at strong shocks. The e\ufb03ciencies for the case without a preexisting CR component in the upper panel of Figure 5 can be directly compared with the same quantities presented in Figure 6 of Paper I. In Paper I, however, the gas thermalization e\ufb03ciency was not calculated explicitly from DSA simulations, and hence \u03b40(M) for gasdynamic shocks was used. It represents gas thermalization reasonably well for weak shocks with M \u22722.5, but overestimates gas thermalization for stronger CR modi\ufb01ed shocks. Our new estimate for \u03b7(M) is close to that in Paper I, but a bit smaller, especially for shocks with M \u227230. This is because inclusion of Alfv\u00b4 en wave drift and dissipation in the shock precursor reduces the e\ufb00ective velocity change experienced by CRs in the new DSA simulations of Kang & Jones (2007). A note of caution for \u03b7(M) should be in order. As outlined above, CR injection is less e\ufb03cient and so the CR acceleration e\ufb03ciency would be lower at perpendicular shocks, compared to at quasi-parallel shocks. CR injection and acceleration at oblique shocks are not well understood quantitatively. And the magnetic \ufb01eld directions at cosmological shocks are not known. Considering these and other uncertainties involved in the adopted DSA model, we did not attempt to make further improvements in estimating \u03b4(M) and \u03b7(M) at general oblique shocks. But we expect that an estimate at realistic shocks with chaotic magnetic \ufb01elds and random shock obliquity angles would give reduced values, rather than increased values, for \u03b7(M). So \u03b7(M) given in Figure 5 may be regarded as upper limits. 
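The fitting formulae referenced here (Appendix A) are fourth-order polynomials in (M − 1), divided by M⁴, joined to a simple low-Mach form. A sketch of η(M) for the case without pre-existing CRs is given below, with coefficients transcribed from the appendix; treat it as an illustrative re-implementation rather than released code.

```python
import numpy as np

# Coefficients b0..b4 of the eta(M) fit for shocks without pre-existing CRs,
# transcribed from Appendix A of this paper.
B_COEF = np.array([5.46, -9.78, 4.17, -0.334, 0.570])

def eta_no_preexisting(M):
    """CR acceleration efficiency eta(M), no pre-existing CR component."""
    M = np.asarray(M, dtype=float)
    weak = 1.96e-3 * (M**2 - 1.0)                                      # Eq. (A2), M <= 2
    n = np.arange(5)
    strong = np.sum(B_COEF * (M[..., None] - 1.0)**n, axis=-1) / M**4  # Eq. (A5), M > 2
    return np.where(M <= 2.0, weak, strong)

print(eta_no_preexisting(np.array([2.0, 5.0, 30.0])))   # roughly [0.006, 0.25, 0.49]
```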
By adopting the e\ufb03ciencies in Figures 5, we calculated the thermal and CR energy \ufb02uxes dissipated at cosmological shocks, dFth(M)/d log M, dFth(Vs)/d log Vs, dFCR(M)/d log M and dFCR(Vs)/d log Vs, using Fth = F\u03c6\u03b4(M) and FCR = F\u03c6\u03b7(M), in the same way we \f\u2013 10 \u2013 calculated dF\u03c6(M)/d log M and dF\u03c6(Vs)/d log Vs in the previous section. We then integrated from z = 5 to z = 0 the shock kinetic energy passed and the thermal and CR energies dissipated through shock surfaces as follows: dYi(X) d log X = 1 Eth,0 Z z=0 z=5 dFi[X, z(t)] d log X dt, (2) where the subscript i \u2261\u03c6, th, or CR stands for the kinetic, thermal, or CR energies \ufb02uxes, the variable X is either M or Vs, and Eth,0 is the total gas thermal energy at z = 0 inside the simulation box normalized by its volume. Figure 6 shows the resulting dYi(M)/d log M and dYi(Vs)/d log Vs and their cumulative distributions, Yi(> M) and Yi(> Vs), for the GSW simulation. Weak shocks with M \u22724 or fast shocks with Vs \u2273500 km s\u22121 are responsible most for shock dissipations, as already noted in Paper I. While the thermal energy generation peaks at shocks in the range 1.5 \u2272M \u22723, the CR energy peaks in the range 2, 5 \u2272M \u22724 if no preexisting CRs are included or in the range 1.5 \u2272M \u22723 if preexisting CRs of PCR/Pg \u223c0.3 in the preshock region are included. With our adopted e\ufb03ciencies, the total CR energy accelerated and the total gas thermal energy dissipated at cosmological shocks throughout the history of the universe are compared as YCR(M \u22651.5) \u223c0.5Yth(M \u22651.5), when no preexisting CRs are present. With preexisting CRs in the preshock region, the CR acceleration becomes more e\ufb03cient, so YCR(M \u22651.5) \u223c1.7Yth(M \u22651.5), i.e., the total CR energy accelerated at cosmological shocks is estimated to be 1.7 times the total gas thermal energy dissipated. We note here again that these are not meant to be very accurate estimates of the CR energy in the IGM, considering the di\ufb03culty of modeling shocks as well as the uncertainties in the DSA model itself. However, they imply that the IGM and the WHIM, which are bounded by strong external shocks with high M and \ufb01lled with weak internal shocks with low M, could contain a dynamically signi\ufb01cant CR population. 5. Vorticity Generation at Cosmological Shock Waves Cosmological shocks formed in the large scale structure of the universe are by nature curved shocks, accompanying complex, often chaotic \ufb02ow patterns. It is well known that vorticity, \u20d7 \u03c9 = \u2207\u00d7\u20d7 v, is generated at such curved oblique shocks (Binney 1974; Davies & Widrow 2000). In Paper II, the generation of vorticity behind cosmological shocks and turbulence dynamo of magnetic \ufb01elds in the IGM were studied in an adiabatic \u039bCDM simulation. In this study we analyzed the distribution of vorticity in the three simulations to assess quantitatively the e\ufb00ects of non-gravitational processes. Here we present the magnitude of vorticity \f\u2013 11 \u2013 with the vorticity parameter \u03c4(\u20d7 r, z) \u2261tage(z)\u03c9(\u20d7 r, z) = tage(z) teddy(\u20d7 r, z), (3) where tage(z) is the age of the universe at redshift z. With teddy = 1/\u03c9 interpreted as local eddy turnover time, \u03c4 represents the number of local eddy turnovers in the age of the universe. So if \u03c4 \u226b1, we expect that turbulence has been fully developed after many turnovers. 
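Evaluating the vorticity parameter of Eq. (3) on a snapshot is straightforward on a uniform grid. The sketch below is our own finite-difference version with assumed array names; it returns τ for one output time.

```python
import numpy as np

def vorticity_parameter(vx, vy, vz, dx, t_age):
    """tau = t_age * |curl v|  (Eq. 3), on a uniform Cartesian grid.

    vx, vy, vz : 3-D velocity components with axes ordered (x, y, z)
    dx         : grid spacing in the same length unit as the velocities
    t_age      : age of the universe at the snapshot redshift
    """
    dvx = np.gradient(vx, dx)     # list of derivatives along axes (x, y, z)
    dvy = np.gradient(vy, dx)
    dvz = np.gradient(vz, dx)
    wx = dvz[1] - dvy[2]          # d vz/dy - d vy/dz
    wy = dvx[2] - dvz[0]          # d vx/dz - d vz/dx
    wz = dvy[0] - dvx[1]          # d vy/dx - d vx/dy
    omega = np.sqrt(wx**2 + wy**2 + wz**2)
    return t_age * omega
```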
Figure 7 shows \ufb02uid quantities and shock locations in a two-dimensional slice of (21.25 h\u22121Mpc)2, delineated by a solid box in Figure 2, at z = 0 in the GSW simulations. The region contains two clusters with kT \u223c1 \u22122 keV in the process of merging. Bottom right panel shows that vorticity increases sharply at shocks. The postshock gas has a larger amount of vorticity than the preshock gas, indicating that most, if not all, of the vorticity in the simulation was produced at shocks. Figure 8 shows the gas mass distribution in the gas density-vorticity parameter plane, fm(\u03c1gas, \u03c4), (upper panel) and the gas mass fraction per logarithmic \u03c4 interval, d fm(\u03c4)/d log \u03c4, (bottom panel) for the three simulations. The most noticeable point in the upper panel is that vorticity is higher at the highest density regions with \u02dc \u03c1 \u2261\u03c1gas/\u27e8\u03c1gas\u27e9\u2273103 in the NO GSW and GSW simulations than in the Adiabatic simulation. This is due to the additional \ufb02ow motions induced by cooling. Inclusion of GSW feedbacks, on the other hand, does not alter signi\ufb01cantly the overall distribution in the gas density-vorticity parameter plane. The bottom panel indicates that cooling increased the mass fraction with large vorticity \u03c4 \u227310, while reduced the mass fraction with 1 \u2272\u03c4 \u227210. GSW feedbacks increased slightly the mass fraction with 1 \u2272\u03c4 \u227210, which corresponds to the gas in the regions outskirts of groups that expand further out due to GSWs (i.e., balloons around groups). But overall we conclude that the non-gravitational processes considered in this paper have limited e\ufb00ects on vorticity in the large scale structure of the universe. We note that the highest density regions in the NO GSW and GSW simulations have \u03c4 \u223c30 on average. As described in details in Paper II, such values of \u03c4 imply that local eddies have turned over many times in the age of the universe, so that the ICM gas there has had enough time to develop magnetohydrodynamic (MHD) turbulence. So in those regions, magnetic \ufb01elds should have grown to have the energy approaching to the turbulent energy. On the other hand, the gas with 1 \u2272\u02dc \u03c1 \u2272103, mostly in \ufb01lamentary and sheet-like structures, has 0.1 \u2272\u03c4 \u227210. MHD turbulence should not have been fully developed there and turbulence growth of magnetic \ufb01elds would be small. Finally in the low density void regions with \u02dc \u03c1 \u22721, vorticity is negligible with \u03c4 \u22720.1 on average, as expected. \f\u2013 12 \u2013 6. Summary We identi\ufb01ed cosmological shock waves and studied their roles on cosmic plasmas in three cosmological N-body/hydrodynamic simulations for a concordance \u039bCDM universe in a cubic box of comoving size 85 h\u22121Mpc: 1) adiabatic simulation (Adiabatic), 2) simulation with radiative cooling and photoionization/heating (NO GSW), and 3) same as the second simulation but also with galactic superwind feedbacks (GSW). The statistics and energetics of shocks in the adiabatic simulation are in an excellent agreement with those of Paper I where an adiabatic simulation with slightly di\ufb00erent cosmological parameters in a cubic box of comoving size 100 h\u22121Mpc was analyzed. Photoionization/heating raised the gas temperature outside nonlinear structures in the NO GSW and GSW simulations. 
As a result, the number of identi\ufb01ed shocks and their Mach numbers in the NO GSW and GSW simulations were di\ufb00erent from those in the Adiabatic simulation. GSW feedbacks pushed out gas most noticeably around groups, creating balloon-shaped surfaces of shocks with speed Vs \u223c100 km s\u22121 in the GSW simulation. However, those have minor e\ufb00ects on shock energetics. The total kinetic energy passed through shock surfaces throughout the history of the universe is very similar for all three simulations. So we conclude that the energetics of cosmological shocks was governed mostly by the gravity of matter, and the e\ufb00ects non-gravitational processes, such as radiative cooling, photoionization/heating, and GSW feedbacks, were rather minor and local. We estimated both the improved gas thermalization e\ufb03ciency, \u03b4(M), and CR acceleration e\ufb03ciency, \u03b7(M), as a function shock Mach number, from nonlinear di\ufb00usive shock simulations for quasi-parallel shocks that assumed Bohm di\ufb00usion for CR protons and incorporated self-consistent treatments of thermal leakage injection and Alfv\u00b4 en wave propagation (Kang & Jones 2007). The cases without and with a preexisting CR component of PCR/Pg \u223c0.3 in the preshock region were considered. At strong shocks, both the injection and acceleration of CRs are very e\ufb03cient, and so the presence of a preexisting CR component is not important. At shocks with with M \u227330, about 55 % of the shock kinetic energy goes into CRs, while about 30 % becomes the thermal energy. At weak shocks, on the other hand, without a preexisting CR component, the gas thermalization is more e\ufb03cient than the CR acceleration. But the presence of a preexisting CR component is critical at weak shocks, since it is equivalent to a higher injection rate and the CR acceleration becomes more e\ufb03cient with it. As a result, \u03b7(M) is higher than \u03b4(M) even at shocks with M \u22725. However, at perpendicular shocks, the CR injection is suppressed, and so the CR acceleration could be less e\ufb03cient than at parallel shocks. Thus our CR shock acceleration e\ufb03ciency should be regarded as an upper limit. With the adopted e\ufb03ciencies, the total CR energy accelerated at cosmological shocks \f\u2013 13 \u2013 throughout the history of the universe is estimated to be YCR(M \u22651.5) \u223c0.5 Yth(M \u22651.5), i.e., 1/2 of the total gas thermal energy dissipated, when no preexisting CRs are present. With a preexisting CR component of PCR/Pg \u223c0.3 in the preshock region, YCR(M \u22651.5) \u223c 1.7 Yth(M \u22651.5), i.e., the total CR energy accelerated is estimate to be 1.7 times the total gas thermal energy dissipated. Although these are not meant to be very accurate estimates of the CR energy in the ICM, they imply that the ICM could contain a dynamically signi\ufb01cant CR population. We also examined the distribution of vorticity inside the simulation box, which should have been generated mostly at curved cosmological shocks. In the ICM, the eddy turn-over time, teddy = 1/\u03c9, is about 1/30 of the age of the universe, i.e., \u03c4 \u2261tage/teddy \u223c30. In \ufb01lamentary and sheet-like structures, \u03c4 \u223c0.1 \u221210, while \u03c4 \u22720.1 in void regions. Radiative cooling increased the fraction of gas mass with large vorticity \u03c4 \u227310, while reduced the mass fraction with 1 \u2272\u03c4 \u227210. GSW feedbacks increased slightly the mass fraction with 1 \u2272\u03c4 \u227210. 
Although the e\ufb00ects of these non-gravitation e\ufb00ects are not negligible, the overall distribution of vorticity are similar for the three simulations. So we conclude that the non-gravitational processes considered in this paper do not a\ufb00ect signi\ufb01cantly the vorticity in the large scale structure of the universe. HK was supported in part by KOSEF through Astrophysical Research Center for the Structure and Evolution of Cosmos (ARCSEC). DR was supported in part by a Korea Research Foundation grant (KRF-2004-015-C00213). RC was supported in part by NASA grant NNG05GK10G and NSF grant AST-0507521. The work of HK and DR was also supported in part by Korea Foundation for International Cooperation of Science & Technology (KICOS) through the Cavendish-KAIST Research Cooperation Center. A. Fitting Formulae for \u03b4(M) and \u03b7(M) The gas thermalization e\ufb03ciency, \u03b4(M), and the CR acceleration e\ufb03ciency, \u03b7(M), for the case without a preexisting CR component (in upper panel of Figure 5) are \ufb01tted as follows: for M \u22642 \u03b4(M) = 0.92 \u03b40 (A1) \u03b7(M) = 1.96 \u00d7 10\u22123(M2 \u22121) (A2) for M > 2 \u03b4(M) = 4 X n=0 an (M \u22121)n M4 (A3) \f\u2013 14 \u2013 a0 = \u22124.25, a1 = 6.42, a2 = \u22121.34, a3 = 1.26, a4 = 0.275 (A4) \u03b7(M) = 4 X n=0 bn (M \u22121)n M4 (A5) b0 = 5.46, b1 = \u22129.78, b2 = 4.17, b3 = \u22120.334, b4 = 0.570 (A6) The e\ufb03ciencies for the case with a preexisting CR component (in bottom panel of Figure 5) are \ufb01tted as follows: for M \u22641.5 \u03b4(M) = 0.90 \u03b40 (A7) \u03b7(M) = 1.025 \u03b40 (A8) for M > 1.5 \u03b4(M) = 4 X n=0 an (M \u22121)n M4 (A9) a0 = \u22120.287, a1 = 0.837, a2 = \u22120.0467, a3 = 0.713, a4 = 0.289 (A10) \u03b7(M) = 4 X n=0 bn (M \u22121)n M4 (A11) b0 = 0.240, b1 = \u22121.56, b2 = 2.80, b3 = 0.512, b4 = 0.557 (A12) Here \u03b40(M) is the gas thermalization e\ufb03ciency at shocks without CRs, which was calculated from the Rankine-Hugoniot jump condition, (black solid line in Figure 5): \u03b40(M) = 2 \u03b3(\u03b3 \u22121)M2R \u00142\u03b3M2 \u2212(\u03b3 \u22121) (\u03b3 + 1) \u2212R\u03b3 \u0015 (A13) R \u2261\u03c12 \u03c11 = \u03b3 + 1 \u03b3 \u22121 + 2/M2 (A14)" + } + ], + "Soohyun Kim": [ + { + "url": "http://arxiv.org/abs/2304.04960v1", + "title": "Panoramic Image-to-Image Translation", + "abstract": "In this paper, we tackle the challenging task of Panoramic Image-to-Image\ntranslation (Pano-I2I) for the first time. This task is difficult due to the\ngeometric distortion of panoramic images and the lack of a panoramic image\ndataset with diverse conditions, like weather or time. To address these\nchallenges, we propose a panoramic distortion-aware I2I model that preserves\nthe structure of the panoramic images while consistently translating their\nglobal style referenced from a pinhole image. To mitigate the distortion issue\nin naive 360 panorama translation, we adopt spherical positional embedding to\nour transformer encoders, introduce a distortion-free discriminator, and apply\nsphere-based rotation for augmentation and its ensemble. We also design a\ncontent encoder and a style encoder to be deformation-aware to deal with a\nlarge domain gap between panoramas and pinhole images, enabling us to work on\ndiverse conditions of pinhole images. In addition, considering the large\ndiscrepancy between panoramas and pinhole images, our framework decouples the\nlearning procedure of the panoramic reconstruction stage from the translation\nstage. 
We show distinct improvements over existing I2I models in translating\nthe StreetLearn dataset in the daytime into diverse conditions. The code will\nbe publicly available online for our community.", + "authors": "Soohyun Kim, Junho Kim, Taekyung Kim, Hwan Heo, Seungryong Kim, Jiyoung Lee, Jin-Hwa Kim", + "published": "2023-04-11", + "updated": "2023-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Image-to-image translation (I2I) aims to modify an input image aligning with the style of the target domain, preserving the original content from the source domain. This paradigm enables numerous applications, such as colorization, style transfer, domain adaptation, data augmentation, etc. [16, 34, 39, 17, 19]. However, existing I2I has been used to synthesize pinhole images with narrow \ufb01eld-ofview (FoV), which limits the scope of applications considering diverse image-capturing devices. *Corresponding authors Pinhole Image Panorama Panoramic Image-to-Image Translation Translated Panorama Geometric Gap Style Difference Day Night \uacb0\uacfc \uad50\uccb4\ud574\uc8fc\uc138\uc694 Figure 1. Illustration of our problem formulation. Our PanoI2I is trained on panoramas in the daytime as the source domain and pinhole images with diverse conditions as the target domain, where the two domains have signi\ufb01cant geometric and style gaps. Panoramic 360\u00b0 cameras have recently grown in popularity, which enables many applications, e.g., AR/VR, autonomous driving, and city map modeling [36, 1, 3, 59]. Unlike pinhole images of narrow FoV, panoramic images (brie\ufb02y, panoramas) capture the entire surroundings, providing richer information with 360\u00b0\u00d7180\u00b0 FoV. Translating panoramas into other styles can enable novel applications, such as immersive view generations or enriching user experiences with robust car-surrounding recognition [10, 55, 56, 33]. However, naively applying conventional I2I methods for pinhole images [41, 47, 7, 8, 63, 20, 26] to panoramas can signi\ufb01cantly distort the geometric properties of panoramas as shown in Fig. 1. One may project the panoramic image into pinhole images to apply the conventional methods. However, it costs a considerable amount of computation since sparse projections cannot cover the whole scene due to the narrow FoV of pinhole images. In addition, the discontinuity problem at edges (left-right boundaries in panorama) requires panorama-speci\ufb01c modeling, as in the other tasks, 1 arXiv:2304.04960v1 [cs.CV] 11 Apr 2023 \fe.g., panorama depth estimation, panoramic segmentation, and panorama synthesis [48, 62, 4]. Another challenge of panoramic image-to-image translation is the absence of suf\ufb01cient panorama datasets. Compared to the pinhole images, panoramic images are captured by a specially-designed camera (360\u00b0 camera) or postprocessed using multi-view images obtained from the calibrated cameras. Especially for I2I, panoramas obtained under diverse conditions such as sunny, rainy, and night are needed to de\ufb01ne the target or style domain. Notice that panoramic images for the Street View service are mainly taken during the day [37]. Instead of constructing a new panorama dataset that is costly to obtain, it would be highly desirable if we could leverage existing pinhole image datasets with various conditions as style guidance. 
In summary, there are several challenges to translating panoramas into another condition: 1) the geometric deformation due to the wide FoV of panoramas, 2) distortion and discontinuity problems arising when existing methods are directly applied, and 3) the lack of panoramic image datasets with diverse conditions. We present typical failure cases of existing approaches [63] in Fig. 2. Based on the above analysis, we seek to expand the applicability of I2I to panoramic images by employing existing pinhole image datasets as style domain, dubbed the Panoramic Image-toImage Translation, shortly, Pano-I2I. To address geometric deformation in panoramas, we adopt deformable convolutions [65] to our encoders, with different offsets for panoramas and pinhole images to re\ufb02ect the geometric differences between the source and target. To handle the large domain gap between the source and target domain, we propose a distortion-free discrimination that attenuates the effects of the geometric differences. In addition, we adopt panoramic rotation augmentation techniques to solve discontinuity problems at edges, considering that a 360\u00b0 panorama should be continuous at boundaries. Moreover, we propose a two-stage learning framework for stable training since learning with panorama and pinhole images simultaneously might increase the problem\u2019s complexity. Along with the Stage-I that \ufb01rst performs \ufb01ne-tuning on panoramas, the Stage-II learns how to translate the panoramas attaining styles from pinhole images. We validate the proposed approach by evaluating the panorama dataset, StreetLearn [37], and day-and-night and weather conditions of the pinhole datasets [47, 46]. Our proposed method signi\ufb01cantly outperforms all existing methods across various target conditions from the pinhole datasets. We also provide ablation studies to validate and analyze the components in Pano-I2I. In summary, our main contributions are: \u2022 For the \ufb01rst time, to the best of our knowledge, we propose the panoramic I2I task and approach translating panoramas with pinhole images as a target domain. Panoramic Source Output (FSeSim) Output (Ours) Rotated Output (FSeSim) Rotated Output (Ours) Figure 2. Typical failure cases of an existing method (FSeSim [63]) for the panoramic image-to-image translation task. The generated image from FSeSim [63] shows the collapsed result having a pinhole-like structure (blue box), as it over\ufb01ts the pinhole target. Additionally, it has structuraland style-discontinuity in the edges (green box). In contrast, our method generates high-quality panoramic images, achieving a rotation-equivariant structure at the edges. We visualize rotated outputs (\u03b8 = 180\u00b0) to highlight the discontinuity. \u2022 We present distortion-free discrimination to deal with a large geometric gap between the source and the target. \u2022 Our spherical positional embedding and sphere-based rotation augmentation ef\ufb01ciently handle the geometric deformation and structuraland style-discontinuity at the edges of panoramas. \u2022 Pano-I2I notably outperforms the previous methods in the quantitative evaluations for style relevance and structural similarity, providing qualitative analyses. 2. Related work Image-to-image translation. Different from early works requiring paired dataset [19], the seminal works [64] enabled unpaired source/target training (i.e., learning without the ground-truth of the translated image). 
Some works enable multimodal learning [18, 28, 29], multi-domain learning [7, 8, 54] for diverse translations from unpaired data, and instance-aware learning [47, 2, 20, 26] in complex scenes. Nevertheless, existing I2I methods are restrictive to speci\ufb01c source-target pairs; they are limited to handling geometric variations (e.g., part deformation, viewpoint, and scale) between the source domain and the target domain. Our approach introduces a robust framework to an unpaired 2 \fsetting, even with geometric differences. Also, the abovementioned methods may fail to obtain rotational equivalence for panorama I2I. On the other hand, several works have adopted the architecture of vision transformers [11] to image generation [30, 23, 61]. Being capable of learning long-range interactions, the transformer is often employed for high-resolution image generation [12, 61], or complex scene generation [52, 26]. For instance, InstaFormer [26] proposed to use transformer-based networks for I2I, capturing global consensus in complex street-view scenes. Panoramic image modeling. Panoramic images from 360\u00b0 cameras provide a thorough view of the scene with a wide FoV, bene\ufb01cial in understanding the scene holistically. A common practice to address distortions in panoramas is to project an image into other formats of 360\u00b0 images (e.g., equirectangular, cubemap) [5, 51, 57], and some works even combine both equirectangular and cubemap projections with improving performance [21, 50]. However, they do not consider the properties of 360\u00b0 images, such as the connection between the edges of the images and the geometric distortion caused by the projection. Several works leverage narrow FoV projected images [31, 58, 10], but they require many projected images (e.g., 81 images [31]), which is an additional burden. To deal with such discontinuity and distortion, recent works introduce modeling in spherical domain [13, 9], projecting an image to local tangent patches with minimal geometric error. It is proved that leveraging transformer architecture in 360\u00b0 image modeling reduces distortions caused by projection and rotation [6]. For this reason, recent approaches [44, 45] including PAVER [60], PanoFormer [48], and Text2Light [4] used the transformer achieving global structural consistency. 3. Methodology 3.1. Problem de\ufb01nition In our setting, we use the panoramic domain as a source that forms content structures and the pinhole domain as a target style. More formally, given a panorama of the source domain X, Pano-I2I aims to learn a mapping function that translates its style into target pinhole domain Y retaining the content and structure of the panorama. Unlike the general I2I methods [64, 41, 7, 8, 63] that have selected source and target domains both in a narrow FoV condition, our setting varies both in style and structure: the source domain as panoramas with wide FoV, captured in the daytime and the target domain as pinhole images in diverse conditions with narrow FoV. In this setting, existing state-of-the-art I2I methods [64, 41, 42, 22, 8, 32, 63, 20, 26] designed for pinhole images may fail to preserve the panoramic structure of the content, since their existing feature disentanglement methods cannot separate style from the target content because there exist both geometric and style differences between the source and the target domains. We empirically observed that 1) the outputs of the existing I2I method result in pinhole-like images, as shown in Fig. 
2, and 2) pinhole image-based network design that does not consider the spherical structure of 360\u00b0 causes discontinuity at left-right boundaries and low visual \ufb01delity. 3.2. Architecture design Overall architecture. On a high level, the proposed method consists of a shared content encoder Ec, a shared style encoder Es to estimate the disentangled representation, a transformer encoder T to mix the style and content through AdaIN [17] layers, and a uni\ufb01ed generator G and discriminator D to generate the translated image. Our transformer encoder block consists of a multi-head self-attention layer and a feed-forward MLP with GELU [49]. In speci\ufb01c, to translate an image x in source domain X to target domain Y, we \ufb01rst extract a content feature map cx \u2208Rh\u00d7w\u00d7lc by the content encoder from x, with height h, width w, and lc content channels, and receive a random latent code s \u2208R1\u00d71\u00d7ls from Gaussian distribution N(0, I) \u2208Rls, which is used to control the style of the output with the af\ufb01ne parameters for AdaIN [17] layers in transformer encoders. Finally, we get the output image by \u02c6 y = G(T (cx, s)). In the following, we will explain our key ingredients \u2013 panoramic modeling in the content encoder, style encoder and the transformer, distortion-free discrimination, and sphere-based rotation augmentation and its ensemble \u2013 in detail. Panoramic modeling in encoders. In Pano-I2I, latent spaces are shared to embed domain-invariant content or style features as done in [28]. Namely, the content and style encoders, Ec and Es, respectively, take either panoramas or pinhole images as inputs. However, the geometric distortion gap from the different FoVs prevents the encoders from understanding each corresponding structural information. To overcome this, motivated from [48, 60], we use the specially-designed deformable convolution layer [65] at the beginning of the content and style encoders by adjusting the offset re\ufb02ecting the characteristics of the image types. Speci\ufb01cally, given a panorama, the deformable convolution layer can be applied directly to the panorama by deriving an equirectangular plane (ERP) offset [60], \u0398ERP, that considers the panoramic geometry. To this end, we project a tangential local patch P of a 3D-Cartesian domain into ERP to obtain the corresponding ERP offset, \u0398ERP, as follows: \u0398ERP(\u03b8, \u03c6) = fSPH\u2192ERP(f3D\u2192SPH( P \u00d7 R(\u03b8, \u03c6) ||P \u00d7 R(\u03b8, \u03c6)||2 )), (1) where R(\u03b8, \u03c6) indicates the rotation matrix with the longitude \u03b8 \u2208[0, 2\u03c0] and latitude \u03c6 \u2208[0, \u03c0], fSPH\u2192ERP indicates the conversion function from spherical domain to ERP domain, and f3D\u2192SPH indicates the conversion function from 3 \fx x\u2032\ufffc cx \u03b8 \u2295 Spherical PE MLP Style or\u2028 Gaussian noise AdaIN \u0302 y(0) \u0302 y(1) \u2211 \u03b1 1-\u03b1 -\u03b8 \u2295 PE MLP Style AdaIN cy y Pinhole Target Panoramic Source Panoramic Augment NCE sy y\u2032\ufffc GAN \u0302 y sx recon img recon style Stage I Stage II cont ( \u0302 y) (sy, \u0302 y) (y, y\u2032\ufffc ) s w/ w/ Differentiable \u2028 Projection to \u2028 Pinhole df-GAN or Figure 3. Overall network con\ufb01guration of Pano-I2I, consisting of content and style encoders, transformer, generator, and discriminator. Given panoramas as the source domain, we disentangle the content and translate its style aligned to the target domain. 
For training, pinhole datasets are used as targets referring to styles. Panoramic augmentation and ensemble are also introduced to preserve the spherical structure of panoramas. In our framework, Stage I only learns to reconstruct the panorama source, and panorama translation is learned in Stage II. 3D-Cartesian domain to spherical domain, as in [60]. To be detailed, we \ufb01rst project the tangential local patch P to the corresponding spherical patch on a unit sphere S2, aligning the patch center to (\u03b8, \u03c6) \u2208S2. Notice that the number of P is H \u00d7 W with the stride of 1 and proper paddings, while the center of P corresponds to the kernel center. Then, we obtain the relative spherical coordinates from the center point and all discrete locations in P. Finally, these positions are projected to the ERP domain, represented as offset points. We compute such 2D-offset points \u0398ERP \u2208R2\u00d7H\u00d7W \u00d7kerh\u00d7kerw for each kernel location, and \ufb01xed to use them throughout training and test phase. Unlike basic convolution, which has a square receptive \ufb01eld having limited capability to deal with geometric distortions in panoramas, our deformable convolution with \ufb01xed offsets can encode panoramic structure. We carefully clarify the objective of using deformable convolution is different from PAVER [60], which exploits the pinhole-based pretrained model for panoramic modeling. In contrast, PanoI2I aims to learn both pinhole images and panorama in the shared networks simultaneously. For pinhole image encoding, \u0398ERP is replaced to zero-offset \u0398\u2205in both content and style encoders, which are vanilla convolutions. Panoramic modeling in the transformer. After extracting the content features from the source image, we \ufb01rst patchify the content features to be processed through transformer blocks, then add positional embedding (PE) [49, 43]. We represent the center coordinates of the previous patchi\ufb01ed grids as (ip, jp) corresponding to the p-th patch having the width w and the height h. As two kinds of inputs {x, y} \u2014 panorama, pinhole image, for each\u2014 have different structural properties, we adopt the sinusoidal PE in two ways: using 2D PE and spherical PE (SPE), respectively. To start with, we de\ufb01ne the absolute PE in transformer [49] as \u03b3(\u00b7), a sinusoidal mapping into R2K as \u03b3(a) = {(sin(2k\u22121\u03c0a), cos(2k\u22121\u03c0a))|k = 1, ..., K} (2) for an input scalar a. Based on this, we de\ufb01ne the 2D PE for common pinhole images as follows: PE = concat(\u03b3(ip), \u03b3(jp)). (3) Following the previous work [4], we consider the 360\u00b0 spherical structure of panorama presenting a spherical positional embedding for the center position (ip,jp) of each grid de\ufb01ned as follows, further added to the patch embedded tokens to work as explicit guidance: SPE = concat(\u03b3(\u03b8), \u03b3(\u03c6)), where \u03b8 = (2ip/h \u22121)\u03c0, \u03c6 = (2jp/w \u22121)\u03c0/2. (4) Since SPE explicitly provides cyclic spatial guidance and the relative spatial relationship between the tokens, it helps to maintain rotational equivariance for the panorama by encouraging structural continuity at boundaries in an 360\u00b0 input. On the other hand, the previous spherical modeling methods used the standard learnable PE [60, 62] or limited to employ SPE as a condition for implicit guidance [4], which does not provide token-wise spatial information. 
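As a concrete reference for Eqs. (2)-(4), the sketch below builds the sinusoidal 2D PE and the spherical PE on a patch grid. It is a NumPy illustration under stated assumptions (K = 8, and normalized patch-center coordinates for the 2D PE), not the authors' implementation.

```python
import numpy as np

def gamma(a, K=8):
    """Sinusoidal mapping of a coordinate into 2K channels, Eq. (2):
    [sin(2^{k-1}*pi*a), cos(2^{k-1}*pi*a)] for k = 1..K."""
    a = np.asarray(a, dtype=float)
    freqs = (2.0 ** np.arange(K)) * np.pi          # 2^{k-1} * pi
    angles = a[..., None] * freqs                   # (..., K)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def positional_embeddings(h, w, K=8):
    """2D PE (Eq. 3) for pinhole inputs and spherical PE (Eq. 4) for panoramas,
    computed for every cell of an h x w patch grid."""
    ip, jp = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Plain 2D PE: concat(gamma(i_p), gamma(j_p)), with normalized patch
    # centers (the normalization is our assumption).
    ic = (ip + 0.5) / h
    jc = (jp + 0.5) / w
    pe = np.concatenate([gamma(ic, K), gamma(jc, K)], axis=-1)

    # Spherical PE: map grid centers to longitude/latitude, then embed.
    theta = (2.0 * ip / h - 1.0) * np.pi            # longitude
    phi = (2.0 * jp / w - 1.0) * np.pi / 2.0        # latitude
    spe = np.concatenate([gamma(theta, K), gamma(phi, K)], axis=-1)
    return pe, spe                                   # each of shape (h, w, 4K)

pe, spe = positional_embeddings(h=16, w=32)
print(pe.shape, spe.shape)  # (16, 32, 32) (16, 32, 32)
```

With K = 8 each token receives a 4K = 32-dimensional embedding; in practice this width would be chosen to match, or be projected to, the token dimension before the addition described next.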
This positional embedding is added to patchembedded tokens and further processed into transformer encoders with AdaIN. 4 \fDistortion-free discrimination. Contrary to the domain setting in traditional I2I methods that have only differences in style, our source and target domains exhibit two distinct features: geometric structure and style. As shown in the blue box in Fig. 2, directly applying the existing I2I method for panoramic I2I guided by pinhole images brings severe structural collapse blocking artifacts, severely affecting the synthesizing quality. We speculate that this problem, which has not been explored before, breaks the discriminator and causes structural collapse. Concretely, while the discriminator in I2I typically learns to distinguish real data y and fake data \u02c6 y, mainly focusing on style difference, in our task, there is an additional large deformation gap and FoV difference between y and \u02c6 y that confuses what to discriminate. To address this issue, we present a distortion-free discrimination technique. The key idea is to transform a randomly selected region of a panorama into a pinhole image while maintaining the degree of FoV. Speci\ufb01cally, we adopt a panorama-to-pinhole image conversion fT by a rectilinear projection [31]. To obtain a narrow FoV (pinhole-like) image with fT , we \ufb01rst select a viewpoint in the form of longitude and latitude coordinates (\u03b8, \u03c6) in the spherical coordinate system, where \u03b8 and \u03c6 are randomly selected from [0,2\u03c0] and [0,\u03c0], respectively, and extract a narrow FoV region from the 360\u00b0 panorama image by a differentiable projection function fT . To further improve the discriminative ability of our model, we adopt a weighted sum of the original discrimination and the proposed discrimination to encourage the model to learn more robust features by considering both the original full-panoramas and pinhole-like converted panoramas. Sphere-based rotation augmentation and ensemble. We introduce a panoramic rotation-based augmentation since a different panorama view consistently preserves the content structure without the discontinuity problem at left-right boundaries. Given a panorama x, a rotated image x\u2032 is generated by horizontally rotating \u03b8 angle. We ef\ufb01ciently implement this rotation by rolling the images in the ERP space without the burden of ERP\u2192SPH\u2192ERP projections since both are effectively the same operation for the ERP domain. The rotation angle is randomly sampled in [0, 2\u03c0], where the step size is 2\u03c0/10. Such rotation is also re\ufb02ected in SPE by adding the rotation angle \u03b8 to help the model learn the horizontal cyclicity of panoramas. Later, the translated images \u02c6 y(0) and \u02c6 y(1) from the generator with x and x\u2032, respectively, are blended together, after rotating back with \u2212\u03b8 for \u02c6 y(1) of course, to generate the \ufb01nal ensemble output \u02c6 y: \u02c6 y = \u02c6 y(0) + \u02c6 y(1)\u2032 2 , (5) where \u02c6 y(1)\u2032 is indicates \u2212\u03b8 rotated version of \u02c6 y(1). Thus, the result \u02c6 y has more smooth boundary than the results predicted alone, mitigating discontinuous edge effects. 3.3. Loss functions Adversarial loss minimizes the distribution discrepancy between two different features [14, 38]. 
We adopt this to learn the translated image \u02c6 y = G(T (Ec(x, \u0398ERP), s)) and the image x from X to have indistinguishable distribution to preserve panoramic contents, de\ufb01ned as: LGAN =Ex\u223cX [log(1 \u2212D(\u02c6 y))] + Ey\u223cY[log D(y)], (6) with the R1 regularization [35] to enhance training stability. To consider the panoramic distortion-free discrimination using the panorama-to-pinhole conversion fT , we de\ufb01ne additional adversarial loss as follows: Ldf-GAN = Ex\u223cX [log(1 \u2212D(fT (\u02c6 y)))] + Ey\u223cY[log D(y)]. (7) Content loss. To maintain the content between the source image x and translated image \u02c6 y, we exploit the spatiallycorrelative loss [63] to de\ufb01ne a content loss, with an augmented source xaug. To get xaug, we apply structurepreserving transformations to x. This helps preserve the structure and learn the spatially-correlative map [63] based on patchwise infoNCE loss [40], since it captures the domain-invariant structure representation. Denoting that \u02c6 v as spatially-correlative map of the query patch from c\u02c6 y = Ec(\u02c6 y, \u0398ERP), we pick the pseudo-positive patch sample v+ from cx = Ec(x, \u0398ERP) in the same position of the query patch \u02c6 v, and the negative patches v\u2212from the other positions of cx aug and cx, except for the position of query patches \u02c6 v. We \ufb01rst de\ufb01ne a score function \u2113(\u00b7) at the l-th convolution layer in Ec: \u2113( \u02c6 vl,v+ l , v\u2212 l ) = \u2212log \" exp( \u02c6 vl \u00b7 v+ l /\u03c4) exp( \u02c6 vl \u00b7 v+ l /\u03c4) + PN n=1 exp( \u02c6 vl \u00b7 v\u2212 n /\u03c4) # , (8) where \u03c4 is a temperature parameter. Then, the overall content loss function is de\ufb01ned as follows: Lcont NCE = Ex\u223cX X l X s \u2113(\u02c6 vl(s),v+ l (s), v\u2212 l (S\\s)), (9) where the index s \u2208{1, 2, ..., Sl} and Sl is a set of patches in each l-th layer, and S\\s indicates the indices except s. Image reconstruction loss. We additionally use the image reconstruction loss to enhance the disentanglement between content and style in a manner that our G can reconstruct an image for domain Y. To be speci\ufb01c, y is fed into content encoder Ec and style encoder Es to obtain a content feature map cy = Ec(y, \u0398\u2205) and a style code sy = Es(y, \u0398\u2205). We then compare the reconstructed image G(T (cy, sy)) with y as follows: Limg recon = Ey\u223cY[\u2225G(T (cy, sy)) \u2212y\u22251]. (10) 5 \fStyle reconstruction loss. In order to better learn disentangled representation, we compute L1 loss between the style code from the translated image and input panorama, Lstyle ref-recon = Ex\u223cX [\u2225Es(\u02c6 y, \u0398ERP) \u2212Es(x, \u0398ERP)\u22251]. (11) We also de\ufb01ne the style reconstruction loss to reconstruct the style code s, which is used for the generation of \u02c6 y. Note that the style code s is randomly sampled from Gaussian distribution, not extracted from an image. Lstyle rand-recon = Ex\u223cX,y\u223cY[\u2225Es(\u02c6 y, \u0398ERP) \u2212s\u22251]. (12) 3.4. Training strategy Stage I: Panorama reconstruction. To our knowledge, there is no publicly-available large-scale outdoor panorama data, especially captured in various weather or season conditions. For this reason, we cannot use panoramas as a style reference. In addition, in order to share the same embedding space in content and style, the network must be able to process pinhole images and panoramas simultaneously. 
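As a concrete reference for the content term in Eqs. (8)-(9) above, the following PyTorch sketch implements the patchwise infoNCE score and aggregates it over randomly sampled positions of a single feature level. The l2 normalization, the temperature τ = 0.07, the number of sampled patches, and the use of negatives only from the source feature map (the text also draws them from an augmented source) are our simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def nce_score(v_query, v_pos, v_negs, tau=0.07):
    """Patchwise infoNCE score of Eq. (8). v_query, v_pos: (C,) features of the
    translated and source content maps at the same location; v_negs: (N, C)
    features from other locations. The l2 normalization is a common choice,
    not something stated in the text."""
    v_query = F.normalize(v_query, dim=0)
    v_pos = F.normalize(v_pos, dim=0)
    v_negs = F.normalize(v_negs, dim=1)
    pos = torch.exp(v_query @ v_pos / tau)
    neg = torch.exp(v_negs @ v_query / tau).sum()
    return -torch.log(pos / (pos + neg))

def content_nce_loss(c_hat, c_src, num_patches=64, tau=0.07):
    """Simplified version of Eq. (9) for one feature level: sample spatial
    positions, take the same position in the source map as the positive and
    the other sampled positions as negatives. c_hat, c_src: (C, H, W)."""
    C, H, W = c_hat.shape
    flat_hat = c_hat.reshape(C, -1).t()   # (H*W, C)
    flat_src = c_src.reshape(C, -1).t()
    idx = torch.randperm(H * W)[:num_patches]
    loss = 0.0
    for i, s in enumerate(idx):
        negs = flat_src[idx[torch.arange(len(idx)) != i]]   # positions S \ s
        loss = loss + nce_score(flat_hat[s], flat_src[s], negs, tau)
    return loss / len(idx)
```

Summing this quantity over the selected encoder layers gives the Lcont NCE term above.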
For the stable training of Pano-I2I, the training procedure is split into two stages corresponding with different objectives. In Stage I, we pretrain the content and style encoders Ec,s, transformer T , generator G, and discriminator D using the panorama dataset only. Given a panorama, the parameters of our network are optimized to reconstruct the original with adversarial and content losses, and style reconstruction loss. As the network learns to reconstruct the input self again, we use the style feature represented by a style encoder instead of the random style code. In addition, for LGAN in Stage I, the original discriminator receives x instead of y as an input. The total objective in Stage I as follows: LStageI =LGAN + \u03bbcontLcont NCE + \u03bbstyleLstyle ref-recon, (13) where \u03bb{\u2217} denotes balancing hyperparameters that control the importance of each loss. Stage II: Panoramic I2I guided by pinhole image. In Stage II, the whole network is fully trained with robust initialization by Stage I. Compared to Stage I, panorama and pinhole datasets are all used in this stage. Concretely, the main difference is that; (1) original discrimination is combined with our distortion-free discrimination as a weighted sum, (2) the style code is sampled from the Gaussian distribution to translate the panorama, (3) the panoramic rotationbased augmentation and its ensemble technique are leveraged to enhance the generation quality. Therefore, the total objective in Stage II is de\ufb01ned as: LStage2 =\u03bbdf-GANLdf-GAN + (1 \u2212\u03bbdf-GAN)LGAN + \u03bbcontLcont NCE + \u03bbstyleLstyle rand-recon + \u03bbreconLimg recon. (14) Notice that \u03bb{\u2217} is differently set to each stage, and please refer to Appendix A. 4. Experiments 4.1. Experimental setup Datasets. We conduct experiments on the panorama dataset, StreetLearn [37], as the source domain, and a standard street-view dataset for I2I, INIT [47] and Dark Zurich [46], as the target domain. StreetLearn provides 360\u00b0 outdoor 56k Manhattan panoramas taken from the Google Street View. Although INIT consists of four conditions (sunny, night, rainy, and cloudy), we use two conditions, night and rainy, since the condition of the StreetLearn is captured during the daytime, including sunny and cloudy. We use the Batch1 of the INIT dataset, a total of 62k images for the four conditions. Dark Zurich has three conditions (daytime, night, and twilight), a total of 8779 images, and we use night and twilight. Metrics. For quantitative comparison, we report the Fr\u00b4 echet Inception Distance (FID) metric [15] to evaluate style relevance, and the structural similarity (SSIM) index [53] metric to evaluate the panoramic content preserving. Considering that the structure of outputs tends to become pinhole-like in panoramic I2I tasks, we measure the FID metric after applying panorama-to-pinhole projection (fT ) for randomly chosen horizontal angle \u03b8 and \ufb01xed vertical angle \u03c6 as 0 with a \ufb01xed FoV of 90\u00b0, for consistent viewpoint with the target images. Notice that the SSIM mediately shows the degree of content preservation because it measures the structural similarity between the original panorama and the translated panorama based on luminance, contrast and structure. Comparison methods. We compare our approach against the state-of-the-art I2I methods, including MGUIT [20] and InstaFormer [26], CUT [41], and FSeSim [63]. 
Since MGUIT and InstaFormer require bounding box annotations to train their models, we exploit pretrained YOLOv5 [24] model to generate pseudo bounding box annotations. 4.2. Implementation details We summarize the implementation details in the PanoI2I. We formulate the proposed method with vision transformers [11] inspired by InstaFormer [26], but without instance-level approaches due to the absence of groundtruth bounding box annotations. In training, we use the Adam optimizer [27] with \u03b21 = 0.5, \u03b22 = 0.999. The input of the network is resized into 256 \u00d7 512. We design our content and style encoders, a transformer encoder, the generator-and-discriminator for our GAN losses based on [26], where all modules are learned from scratch. The initial learning rate is 1e-4, and the model is trained on 8 Tesla V100 with batch size 8 for Stage I and 4 for Stage II. 6 \fInputs CUT [41] FSeSim [63] MGUIT [20] InstaFormer [26] Pano-I2I (ours) Figure 4. Qualitative comparison on StreetLearn dataset (day) to INIT dataset (night, rainy): (top to bottom) day\u2192night, and day\u2192rainy results. Among the methods, Pano-I2I (ours) preserves object details well and shows realistic results. Methods Day\u2192Night Day\u2192Rainy FID\u2193 SSIM\u2191 FID\u2193 SSIM\u2191 CUT [41] 131.3 0.232 119.8 0.439 FSeSim [63] 106.0 0.309 110.3 0.541 MGUIT [20] 129.9 0.156 141.5 0.268 InstaFormer [26] 151.1 0.201 136.2 0.495 Pano-I2I (ours) 94.3 0.417 86.6 0.708 Table 1. Quantitative evaluation on the translated panoramas from the StreetLearn dataset to the INIT dataset. 4.3. Experimental results Qualitative evaluation. In Fig. 4, we compare our method with other I2I methods. We observe all the other methods [41, 22, 63, 20, 26] fail to synthesize reasonable panoramic results and show obvious inconsistent output regarding either structure or style in an image. Moreover, previous methods recognize structural discrepancies between source and target domains as style differences, indicating failed translation results that change like pinhole images. Surprisingly, in the case of \u2018day\u2192night\u2019, all existing methods fail to preserve the objectness as a car or building. We conjecture that they can hardly deal with the large domain gap in \u2018day\u2192night,\u2019 thus naively learning to follow the target distribution without considering the context from the source. By comparison, our method shows the overall best performance in visual quality, preserving panoramic content, and structuraland style-consistency. Especially, we can observe the ability of our discrimination design to generate distortion-tolerate outputs. The qualitative results on Dark Zurich are provided in Appendix E. Quantitative evaluation. Tab. 1 and Tab. 2 show the quantitative comparison in terms of FID [15] and SSIM [53] index metrics. Our method consistently outperforms the competitive methods in all metrics, demonstrating that Pano-I2I successfully captures the style of the target domain while preserving the panoramic contents. Notably, our approach exhibits signi\ufb01cant improvements in terms of SSIM. In contrast, previous methods perform poorly in terms of SSIM compared to our results, which is also evident from the qual7 \fMethods Day\u2192Night Day\u2192Twilight FID\u2193 SSIM\u2191 FID\u2193 SSIM\u2191 FSeSim [63] 133.8 0.305 138.8 0.420 MGUIT [20] 205.3 0.156 229.9 0.124 Pano-I2I (ours) 120.2 0.431 126.6 0.520 Table 2. 
Quantitative evaluation on the translated panoramas from the StreetLearn dataset to the Dark Zurich dataset. 14% 6% 13% 19% 11% 21% 5% 6% 6% 10% 8% 5% 53% 68% 56% Image Quality Content Relevance Style Relevance 0% 25% 50% 75% 100% CUT FSeSim MGUIT InstaFormer Ours Figure 5. User study results. itative results presented in Fig. 4. User study. We also conduct a user study to compare the subjective quality. We randomly select 10 images for each task (sunny\u2192night, sunny\u2192rainy) on the INIT dataset, and let 60 users sort all the methods regarding \u201coverall image quality\u201d, \u201ccontent preservation from the source\u201d, and \u201cstyle relevance with the target, considering the context from the source\u201d. As seen in Fig. 5, our method has a clear advantage on every task. We provide more details in Appendix F. 4.4. Ablation study In Fig. 6 and Tab. 3, we show qualitative and quantitative results for the ablation study on the day\u2192night task on the INIT dataset. In particular, we analyze the effectiveness of our 1) distortion-free discrimination, 2) ensemble technique, 3) two-stage learning scheme, and 4) spherical positional embedding (SPE) and deformable convolution. As seen in Fig. 6, our full model smoothens the boundary with high-quality generation, successfully preserving the panoramic structure. We also observe the ability of our discrimination design to generate distortion-tolerated outputs. The result without an ensemble fails to alleviate the discontinuity problem, as seen in the middle area of the image. The result without two-stage learning shows the limited capability to reconstruct the \ufb01ne details of the contents from the input image. Since SPE and deformable convolution help the model learn the deformable structure of panoramas, the result without them fails to preserve the detailed structure. Note that the results are visualized after rotation (\u03b8 = 180\u00b0) to highlight the discontinuity. In Tab. 3, we measure the SSIM and FID scores to evaluate structural consistency and style relevance with respect to the choices of components. It demonstrates that Input (I) Pano-I2I (ours) (II) Distortion-free D (III) Ensemble (IV) Two-stage learning (V) SPE, deform conv Figure 6. Qualitative evaluation on ablation study. ID Methods FID\u2193 SSIM\u2191 (I) Pano-I2I (ours) 94.3 0.417 (II) (I) Distortion-free D 105.6 0.321 (III) (I) Ensemble technique 96.8 0.390 (IV) (I) Two-stage learning 120.8 0.376 (V) (I) SPE, deform conv 94.5 0.355 Table 3. Quantitative evaluation on ablation study. our full model preserves input structure with our proposed components. We observe all the techniques and components contribute to improving the performance in terms of style relevance and content preservation, and the impact of distortion-free discrimination is substantially effective to handle geometric deformation. 5." + }, + { + "url": "http://arxiv.org/abs/2203.16248v1", + "title": "InstaFormer: Instance-Aware Image-to-Image Translation with Transformer", + "abstract": "We present a novel Transformer-based network architecture for instance-aware\nimage-to-image translation, dubbed InstaFormer, to effectively integrate\nglobal- and instance-level information. By considering extracted content\nfeatures from an image as tokens, our networks discover global consensus of\ncontent features by considering context information through a self-attention\nmodule in Transformers. 
By augmenting such tokens with an instance-level\nfeature extracted from the content feature with respect to bounding box\ninformation, our framework is capable of learning an interaction between object\ninstances and the global image, thus boosting the instance-awareness. We\nreplace layer normalization (LayerNorm) in standard Transformers with adaptive\ninstance normalization (AdaIN) to enable a multi-modal translation with style\ncodes. In addition, to improve the instance-awareness and translation quality\nat object regions, we present an instance-level content contrastive loss\ndefined between input and translated image. We conduct experiments to\ndemonstrate the effectiveness of our InstaFormer over the latest methods and\nprovide extensive ablation studies.", + "authors": "Soohyun Kim, Jongbeom Baek, Jihye Park, Gyeongnyeon Kim, Seungryong Kim", + "published": "2022-03-30", + "updated": "2022-03-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction For a decade, image-to-image translation (I2I), aiming at translating an image in one domain (i.e., source) to another domain (i.e., target), has been popularly studied, to the point of being deployed in numerous applications, such as style transfer [15, 22], super-resolution [12, 32], inpainting [25, 46], or colorization [64,65]. In particular, most recent works have focused on designing better disentangled representation to learn a multimodal translation from unpaired training data [23, 35, 45]. While they have demonstrated promising results, most of these methods only consider the translation on an whole image, and do not account for the fact that an image often contain many object instances of various sizes, thus showing *Corresponding author Input image (sunny) Translated image (rainy) Figure 1. Results of InstaFormer for instance-aware image-toimage translation. Our InstaFormer effectively considers globaland instance-level information with Transformers, which enables high quality instance-level translation. the limited performance at content-rich scene translation, e.g., driving scene, which is critical for some downstream tasks, such as domain adaptive object detection [3], that require well-translated object instances. To address the aforementioned issues, some methods [3, 28, 51] seek to explicitly consider an object instance in an image within deep convolutional neural networks (CNNs). This trend was initiated by instance-aware I2I (INIT) [51], which treats the object instance and global image separately. Following this [51], some variants were proposed, e.g., jointly learning translation networks and object detection networks, called detection-based unsupervised I2I (DUNIT) [3], or using an external memory module, called memory-guided unsupervised I2I (MGUIT) [28]. While these methods improve an instance-awareness to some extent, they inherit limitation of CNN-based architectures [3, 28,51], e.g., local receptive fields or limited encoding of relationships or interactions between pixels or patches within an image which are critical in differentiating an object instance from an whole image and boosting its translation. To tackle these limitations, for the first time, we present to utilize Transformer [55] architecture within I2I networks that effectively integrates globaland instance-level information present in an image, dubbed InstaFormer. We follow common disentangled representation approaches [23, 35] to extract both content and style vectors. 
By considering extracted content features from an image as tokens, \four Transformer-based aggregator mixes them to discover global consensus by considering global context information through a self-attention module, thus boosting the instanceawareness during translation. In addition, by augmenting such tokens by an instance-level feature extracted from the global content feature with respect to bounding box information, our framework is able to learn an interaction between not only object instance and global image, but also different instances, followed by a position embedding technique to consider both globaland instance-level patches at once, which helps the networks to better focus on the object instance regions. We also replace layer normalization (LayerNorm) [1] in Transformers with adaptive instance normalization (AdaIN) [22] to facilitate a multi-modal translation with extracted or random style vectors. Since aggregating raw content and style vectors directly with Transformers requires extremely large computation [13, 31], we further propose to apply a convolutional patch embedding and deconvolutional module at the beginning and end of our Transformer-based aggregator. In addition, to improve the instance-awareness and quality of translation images at object regions, we present an instance-level content contrastive loss defined between input and translated images. In experiments, we demonstrate our framework on several benchmarks [9,16,51] that contain content-rich scenes. Experimental results on various benchmarks prove the effectiveness of the proposed model over the latest methods for instance-aware I2I. We also provide an ablation study to validate and analyze components in our model. 2. Related Work Image-to-Image Translation. While early efforts for I2I are based on supervised learning [27], most recent state-ofthe-arts focus on unpaired settings [2,14,39,63,67,69]. CycleGAN [70] attempts this by proposing a cycle-consistency loss which has been one of standard losses for unpaired I2I. Inspired by CycleGAN, numerous methods utilize cycleconsistency [7, 21, 23, 33, 35, 61], and they can be largely divided into uni-modal models [38,61,63] and multi-modal models [7, 23, 35] methods. Specifically, MUNIT [23] assumes that an image representation can be disentangled into a domain-specific style and a domain-invariant content representation and uses these disentangled latent features with cycle-consistency to generate the translations. However, the content in translated image can be easily distorted, and cycle mapping requires multiple generators and discriminators. To address these, CUT [45]and F-LSeSim [67] propose novel losses inspired by infoNCE [44] to directly compute distance between input and translated images in an one-side framework without cycle-consistency. However, they still have shown limited performance to encode an object-awareness at the translated image. Instance-Aware Image-to-Image Translation. Some methods attempted to address the aforementioned issues [3, 28,43,51]. INIT [51] attempted to translate the whole image and object instances independently. DUNIT [3] proposed to further train detection module and adopted instanceconsistency loss for object-awareness. MGUIT [28] utilizes bounding box to read and write class-wise memory module, and has access to class-aware features on memory at test-time. 
The aforementioned methods inherit limitation of CNN-based architecture, e.g., local receptive fields or limited encoding of relationships or interactions within an image [23,35,70]. Vision Transformers and Image Generation Recently, Vision Transformers (ViT) have shown to attain highly competitive performance for a wide range of vision applications, such as image classification [11, 13, 54, 57], object detection [5, 10, 71], and semantic segmentation [60, 68]. Inspired by ViT [13], some improvements are made to improve the computational complexity [31, 40, 56, 57]. For example, Swin Transformer [40] proposes relative position biases, and restricts self-attentioncomputation within shifted windows. MLP-Mixer [52] suggests to replace selfattention with an MLP, achieving memory efficiency and competitive performance [37, 41, 53]. In this paper, we introduce ViT-based aggregator to further enhance to learn instance-awareness by aggregating information from local region as well as global image. On the other hands, there exist several efforts to adapt Vision Transformers to image generation tasks [6, 24, 30, 36, 66]. As seminal work, TransGAN [30] first presents a GAN structure using pure Transformer, but has only validated on low-resolution images. [66] has achieved success on generating high-resolution images. [24] leverages Transformers to build the bipartite structure to allow long-range interactions. To our best knowledge, our work is the first attempt to adopt Transformers in instance-aware image translation. 3. Methodology 3.1. Overview Our approach aims to learn a multi-modal mapping between two domains X \u2282RH\u00d7W \u00d73 and Y \u2282RH\u00d7W \u00d73 without paired training data, but with a dataset of unpaired instances X = {x \u2208X} and Y = {y \u2208Y}. Especially, we wish to model such a mapping function to have an ability that jointly accounts for whole image and object instances. Unlike conventional I2I methods [3, 17, 23, 28, 35, 51] that were formulated in a two-sided framework to exploit a cycle-consistency constraint, which often generates some distortions on the translated images and requires auxiliary networks for inverse mapping [70], we formulate our approach in an one-sided framework [45]. In specific, as illustrated in Fig. 2, our framework, \fPatch Embed ViT Encoder Blocks ( ) \u00d76 AdaIN Params RoI Align MLP Test time Feature Map Position Embed Element-wise Add Style Code Global Patch Instance Patch ... ... ... ... ... Patch Expand. (a) Architecture ViT Encoder Block AdaIN MSA AdaIN MLP (b) ViT Encoder Block Figure 2. Network configuration: (a) overall architecture for image-to-image translation, (b) ViT encoder block in details. Our networks consist of content encoder, Transformer encoder, and generator. The gray background represents the test phase, where we have no access on object instance bounding box (Best viewed in color). dubbed InstaFormer, consists of content encoder E and generator G, similar to [45,67], and additional encoder T with Transformers [55] to improve the instance-awareness by considering global consensus between whole image and object instances. To translate an image x in domain X to domain Y, our framework first extracts a content feature map c = E(x) \u2208Rh\u00d7w\u00d7lc from x, with height h, width w, and lc channels, and randomly draws a style latent code s \u2208R1\u00d71\u00d7ls from the prior distribution q(s) \u223cN(0, I) to achieve a multi-modal translation. 
Instead of directly feeding c and s to the generator G, as done in the literature [23, 35], we aggregate information in the content c to discover global consensus between the global image and object instances in a manner that we first extract an object instance content vector cins i for i-th object bounding box with parameters Bi = [xi, yi, hi, wi], where (xi, yi) represent a center point, and hi and wi represent height and width of the box and i \u22081, ..., N where N is the number of instance, and then mix {c, {cins i }i, s} through the proposed Transformer module T to extract global embedding u and instance embedding uins i , which are independently used to generate global-level translated image \u02c6 y = G(u) \u2208Rh\u00d7w\u00d73 and instance-level translated images \u02c6 yins i = G(uins i ) \u2208Rhi\u00d7wi\u00d73. In our framework, during training, we have access to the ground-truth object bounding boxes, while we do not access them at test-time. To train our networks, we first use an adversarial loss defined between a translated image \u02c6 y and a real image y from Y with discriminators, and a global content contrastive loss defined between x and \u02c6 y to preserve the global content. To improve the disentanglement ability for content and style, following [23], we also use both image reconstruction loss and style reconstruction loss by leveraging an additional style encoder for Y. To improve the instance-awareness and the quality of translation images at object instance regions, we newly present an instance-level content contrastive loss between x and \u02c6 y. 3.2. Content and Style Mixing with Transformers Most existing I2I methods [3,23,28,35,45,51] attempted to aggregate a content feature map with deep CNNs with residual connections, which are often called residual blocks, often inserted between encoder and generator networks. They are thus limited in the sense that they inherit limitation of CNN-based architecture, e.g., local receptive fields or limited encoding of relationships or interactions between pixels and patches within an image [23,35,51]. In instanceaware I2I task, enlarging the receptive fields and encoding an interaction between objects and global image may be of prime importance. For instance, if an image contains a car object on the road, using the context information of not only global background, e.g., road, but also other instances, e.g., other cars or person, would definitely help to translate the image more focusing on the instance, but existing CNNbased methods [23,35,45] would limitedly handle this. To overcome this, we present to utilize Transformer architecture [55] to enlarge the receptive fields and encode the interaction between features for instance-aware I2I. To this end, extracted content vector c \u2208Rh\u00d7w\u00d7lc from x can be flattened as a sequence c\u2032 = Reshpae(c) with the number of tokens hw and channel lc, which can be directly used as input for Transformers. However, this requires extremely \fA B A B (a) content image A B A B (b) translated image (c) w/o Lins NCE for A (d) w/o Lins NCE for B (e) w/ Lins NCE for A (f) w/ Lins NCE for B Figure 3. Visualization of learned self-attention. For (a) content image containing instances A and B, our networks generate (b) translated image, considering attention maps (c,d) without Lins NCE and (e,f) with Lins NCE for instance A, B, respectively. high computational complexity due to the huge number of tokens hw, e.g., full HD translation. Patch Embedding and Expanding. 
To address this issue, inspired by a patch embedding in ViT [13], we first apply sequential convolutional blocks to reduce the spatial resolutions. Instead of applying a single convolution in ViT [13] for extracting non-overlapping patches, we use sequential overlapped convolutional blocks to improve the stability of training while reducing the number of parameters involved [59]. We define this process as follows: \\mathbf {p} = \\mathrm { Conv}(\\mathbf {c}) \\in {\\mathbb {R}} ^ {(h/k) \\times (w/k) \\times {l'_{c}}}, (1) where k \u00d7 k is the stride size of convolutions, and l\u2032 c is a projected channel size. After feed-forwarding Transformer blocks such that z = T (p) \u2208R(h/k)\u00d7(w/k)\u00d7l\u2032 c, downsampled feature map z should be upsampled again with additional deconvolutional blocks, which are symmetric architectures to the convolutions, defined as follows: \\mathbf { u } = \\mathrm {DeConv}(\\mathbf {z}) \\in \\mathbb {R}^\\mathnormal {h \\times w \\times {l_{c}}}. (2) In addition, for a multi-modal translation, we leverage a style code vector s \u2208R1\u00d71\u00d7ls, and thus this should be considered during mixing with Transformers [55]. Conventional methods [22, 23, 35] attempted to mix content and style vectors using either concatenation [35] or AdaIN [22]. In our framework, by slightly changing the normalization module in Transformers, we are capable of simultaneously mixing content and style vectors such that T (p, s). Any forms of Transformers [13, 40, 52, 57, 62] can be considered as a candidate in our framework, and in experiments, ViT-like [13] architecture is considered for T . In the following, we explain the details of Transformer modules. Transformer Aggregator. In order to utilize Transformer to process content patch embeddings p, our work is built upon the ViT encoder, which is composed of an multi-head self-attention (MSA) layer and a feed-forward MLP with GELU [55], where normalization layers are applied before both parts. Especially, for I2I, we adopt AdaIN instead of LayerNorm [1] to control the style of the output with the affine parameters from style vector s and to enable multimodal outputs. In specific, content patch embedding p is first reshaped, and position embedding is achieved such that \\ mathbf {z} _ 0 = \\mathrm {Re shape}({\\mathbf {p}})+ {\\mathbf {E} \\in {\\mathbb {R}}^ {(h/k \\cdot w/k) \\times l_{c}'}}, (3) where E represents a position embedding [55], which will be discussed in the following. These embedded tokens z0 are further processed by the sequential Transformer encoder blocks as follows: \\ b egi n {spl it} &\\ math b f {z} '_ { t} = \\mat hrm { MSA} \\ le ft ( {\\mathrm {AdaIN}\\left ({\\mathbf {z}}_{t-1}, \\mathbf {s}' \\right )} \\right ) + {\\mathbf {z}}_{t-1},\\\\ &{\\mathbf {z}}_t = \\mathrm {MLP}\\left ( {\\mathrm {AdaIN}\\left (\\mathbf {z}'_{t}, \\mathbf {s}' \\right )} \\right ) + \\mathbf {z}'_{t}, \\end {split} (4) where z\u2032 t and zt denote the output of MSA and MLP modules for t-th block respectively and t \u22081, ..., T, respectively, s\u2032 indicates AdaIN parameters extracted from S. After L Transformer modules, followed by reshaping to original resolution, we finally achieve the output of Transformer block T such that zT = T (p, s\u2032). As exemplified in Fig. 3, our learned self-attention well considers the interaction between object instances and global image. 3.3. Instance-Aware Content and Style Mixing So far we discussed a method for content and style mixing with Transformers [13]. 
This framework can improve the translation quality especially at instance regions to some extent, but the nature of irregular shape of object instances may hinder the performance boosting of our framework. In particular, global-level aggregation itself is limited to capture details of a tiny object and it is not always guaranteed that an object is located in a single regular patch. To overcome this, we present a novel technique to aggregate instance-level content features and global-level content features simultaneously, which enables the model to pay more attention to the relationships between global scenes and object instances. In specific, given ground-truth bounding boxes with parameters Bi, we extract instance-level content feature maps through through ROI Align [19] module defined as follows: {\\ m a thbf {c}}^\\ mat h rm {ins}_{i} = \\mathrm {RoIAlign}({\\mathbf {c}}; {B}_{i}) \\in {\\mathbb {R}}^{k \\times k \\times {l_{c}}}, (5) where k \u00d7k is a fixed spatial resolution. This can be further processed with the convolutional blocks as proposed above such that {\\ m a thbf {p}} ^ \\ m athrm { ins}_{i} = \\mathrm {Conv}({\\mathbf {c}}^\\mathrm {ins}_{i}) \\in {\\mathbb {R}}^{1 \\times 1 \\times {l'_{c}}}. (6) In our framework, by concatenating p and pins i , we build a new input for Transformer \u02c6 z0 such that \\ h at {\\mathbf {z }}_0 = \\ma t h r m {Reshape}(\\mat hrm {Cat}({\\mathbf {p}},\\{{\\bf {p}}^\\mathrm {ins}_{i}\\}_{i}))+ \\hat {\\mathbf {E}} \\in {\\mathbb {R}}^ {(h/k \\cdot w/k+N) \\times {l'_{c}}}, (7) \fInstance Patch \ud835\udefe(\ud835\udc65!) \ud835\udefe(\ud835\udc66!) \ud835\udefe(\ud835\udc64!) \ud835\udefe(\u210e!) Regular Patch \ud835\udc65\" \ud835\udc66\" \u210e! \ud835\udc64! \u210e\" \ud835\udc64\" \ud835\udefe(\ud835\udc65\") \ud835\udefe(\ud835\udc66\" ) \ud835\udefe(\ud835\udc64\" ) \ud835\udefe(\u210e\") \ud835\udc65! \ud835\udc66! Figure 4. Illustration of building position embedding for regular patches and instance-level patches. where Cat(\u00b7, \u00b7) denotes a concatenation operator and \u02c6 E is a corresponding positional embedding. Transformer blocks are then used to process \u02c6 z0 similarly to above to achieve \u02c6 zT , which is decomposed into zT and zins T,i. 3.4. Instance-Aware Position Embedding Since Transformer [55] block itself does not contain positional information, we add positional embedding E as described above. To this end, our framework basically utilizes existing technique [13], but the main difference is that our proposed strategy enables simultaneously considering regularly-partitioned patches p and instance patches pins i in terms of their spatial relationships. The deep networks are often biased towards learning lower frequency functions [47], so we use high frequency functions to alleviate such bias. We denote \u03b3(\u00b7) as a sinusoidal mapping into R2K such that \u03b3(a) = (sin(20\u03c0a), cos(20\u03c0a), ..., sin(2K\u22121\u03c0a), cos(2K\u22121\u03c0a)) for a scalar a. In specific, as a global feature map is divided into regular girds, each regular patch can be represented to have center coordinates (xg, yg) with patch width wg and height hg of regular size for g-th patch p(g). After embedding for each information through \u03b3(\u00b7) and concatenating along the channel axis, it is further added to the patch embedded tokens. 
\\mathbf {E } = \\m athrm {Cat}(\\gamma (x_g),\\gamma (y_g),\\gamma (w_g),\\gamma (h_g)) (8) Unlike regular patches, which have the same size of width and height for each, instance patches contain positional information of corresponding bounding boxes, which contain the centerpoint coordinates (xi, yi) and width and height (wi, hi). Instance-wise E is denoted as: \\m a thbf {E}^\\ mathrm {ins} = \\mathrm {Cat}(\\gamma (x_i),\\gamma (y_i),\\gamma (w_i),\\gamma (h_i)). (9) Then \u02c6 E = Cat(E, Eins). Fig. 4 illustrates the difference in how regular patch and instance patch are handled. 3.5. Loss Functions Adversarial Loss. Adversarial loss aims to minimize the distribution discrepancy between two different features [18, Global-level Instance-level Content image Translated image Figure 5. Illustration of global content loss and instance-level content loss. Blue box indicates a positive sample, while yellow box means a negative sample (Best viewed in color). 42]. We adopt this to learn the translated image \u02c6 y to be similar to an image y from Y defined such that \\b egin {split } \\m athc a l {L}_\\m athrm {GAN} = &\\mathbb {E}_{\\mathbf {x}\\sim \\mathcal {X}}[\\mathrm {log}(1-\\mathcal {D}(\\hat {\\mathbf {y}}))]+ \\mathbb {E}_{\\mathbf {y} \\sim \\mathcal {Y}}[\\mathrm {log}\\, \\mathcal {D}(\\mathbf {y})], \\end {split} (10) where D(\u00b7) is the discriminator. Global Content Loss. To define the content loss between x and \u02c6 y, we exploit infoNCE loss [44], defined as \\ be gin {s p lit} &\\ell ( \\hat {\\mat h b f {v} } , \\ma thbf { v }^ { +}, \\mathbf {v}^{-}) =\\\\ &\\mathrm {-log}\\left [\\frac {\\mathrm {exp}(\\hat {\\mathbf {v}}\\cdot \\mathbf {v}^{+}/\\tau )} {\\mathrm {exp}(\\hat {\\mathbf {v}}\\cdot \\mathbf {v}^{+}/\\tau ) + \\sum _{\\mathrm n=1}^{\\mathrm N}\\mathrm {exp}(\\hat {\\mathbf {v}}\\cdot \\mathbf {v}^{-}_{\\mathrm n}/\\tau )}\\right ], \\end {split} (11) where \u03c4 is the temperature parameter, and v+ and v\u2212represent positive and negative for \u02c6 v. We set pseudo positive samples between input image x and translated image \u02c6 y. For the content feature from translated image \u02c6 c(s) = E(\u02c6 y), we set positive patches c(s), and negative patches c(S \\ s) from x, where S \\ s represents indexes except for s, following [45,67]. Global content loss function is then defined as \\begi n { s plit } \\ m ath cal {L }_\\mat hrm { NCE}^{\\mathrm {global}} = \\mathbb {E}_{\\mathbf {x}\\sim \\mathcal {X}}\\sum _{l}\\sum _{s}\\ell (\\hat {\\mathbf {c}}_{l}(s),{\\mathbf {c}}_{l}(s) , {\\mathbf {c}}_{l}({S\\setminus s})), \\end {split} (12) where cl is feature at l-th level, s \u2208{1, 2, ..., Sl} and Sl is the number of patches in each l-th layer. Instance-level Content Loss. To improve the instanceawareness and the quality of translation images at object regions, we newly present an instance-level content contrastive loss. Our instance-level content loss is then defined such that \\b egi n {sp l i t } \\m athc a l {L }_\\m a thrm {NC E }^ \\ mathrm {ins}=\\mathbb {E}_{\\mathbf {x}\\sim \\mathcal {X}}\\sum _{i}\\sum _{m}\\ell (\\hat {\\mathbf {c}}_{i}^{\\mathrm {ins}}(m),{\\mathbf {c}}_{i}^{\\mathrm {ins}}(m) , {\\mathbf {c}}_{i}^{\\mathrm {ins}}({M\\setminus m})), \\end {split} (13) where m \u2208{1, 2, ..., Mi} and Mi is the number of patches at each instance. Fig. 5 illustrates how our suggested content losses work, with the procedure to define positive and negative samples. 
Table 1. Quantitative evaluation on the INIT dataset [51]. For evaluation, we perform bidirectional translation for each domain pair and measure CIS [23] and IS [50] (higher is better); each entry is reported as CIS / IS. Our method gives the best results in terms of both CIS and IS.

                 CycleGAN [70]  UNIT [38]    MUNIT [23]   DRIT [35]    INIT [51]    DUNIT [3]    MGUIT [28]   InstaFormer
  sunny→night    0.014/1.026    0.082/1.030  1.159/1.278  1.058/1.224  1.060/1.118  1.166/1.259  1.176/1.271  1.200/1.404
  night→sunny    0.012/1.023    0.027/1.024  1.036/1.051  1.024/1.099  1.045/1.080  1.083/1.108  1.115/1.130  1.115/1.127
  sunny→rainy    0.011/1.073    0.097/1.075  1.012/1.146  1.007/1.207  1.036/1.152  1.029/1.225  1.092/1.213  1.158/1.394
  sunny→cloudy   0.014/1.097    0.081/1.134  1.008/1.095  1.025/1.104  1.040/1.142  1.033/1.149  1.052/1.218  1.130/1.257
  cloudy→sunny   0.090/1.033    0.219/1.046  1.026/1.321  1.046/1.249  1.016/1.460  1.077/1.472  1.136/1.489  1.141/1.585
  Average        0.025/1.057    0.087/1.055  1.032/1.166  1.031/1.164  1.043/1.179  1.079/1.223  1.112/1.254  1.149/1.353

Image Reconstruction Loss. We additionally make use of an image reconstruction loss to help disentangle content and style. For regularization, we use a reconstruction loss to ensure that our G can reconstruct an image for domain Y. To be specific, y is fed into E and the style encoder S to obtain a content feature map c^{Y} = E(y) and a style code s^{Y} = S(y). We then compare the reconstructed image G(T(c^{Y}, s^{Y})) for domain Y with y as follows: \\mathcal{L}_{\\mathrm{recon}}^{\\mathrm{img}} = \\mathbb{E}_{\\mathbf{y} \\sim \\mathcal{Y}}[\\|\\mathcal{G}(\\mathcal{T}(\\mathbf{c}^{\\mathcal{Y}}, \\mathbf{s}^{\\mathcal{Y}})) - \\mathbf{y}\\|_{1}]. (14) Style Reconstruction Loss. To better learn a disentangled representation, we compute an L1 loss between the style code extracted from the translated image and a randomly generated style code, which encourages the generated style features to map to a Gaussian distribution: \\mathcal{L}_{\\mathrm{recon}}^{\\mathrm{style}} = \\mathbb{E}_{\\mathbf{x} \\sim \\mathcal{X}, \\mathbf{y} \\sim \\mathcal{Y}}[\\|\\mathcal{S}(\\hat{\\mathbf{y}}) - \\mathbf{s}\\|_{1}]. (15) Total Loss. The total loss function is as follows: \\min_{\\mathcal{E}, \\mathcal{G}, \\mathcal{S}} \\max_{\\mathcal{D}} \\mathcal{L}(\\mathcal{E}, \\mathcal{G}, \\mathcal{D}) = \\mathcal{L}_{\\mathrm{GAN}} + \\lambda^{\\mathrm{glob}}\\mathcal{L}_{\\mathrm{NCE}}^{\\mathrm{global}} + \\lambda^{\\mathrm{ins}}\\mathcal{L}_{\\mathrm{NCE}}^{\\mathrm{ins}} + \\lambda^{\\mathrm{style}}\\mathcal{L}_{\\mathrm{recon}}^{\\mathrm{style}} + \\lambda^{\\mathrm{img}}\\mathcal{L}_{\\mathrm{recon}}^{\\mathrm{img}}, (16) where λ^{glob}, λ^{ins}, λ^{style}, and λ^{img} are weights that control the importance of each loss. 4. Experiments 4.1. Implementation Details We first summarize the implementation details of our framework. We conduct experiments using a single 24GB RTX 3090 GPU. Training images are resized to 352×352. We employ the Adam optimizer for 200 epochs with a step-decay learning rate scheduler, a batch size of 8, and an initial learning rate of 2e-4. The number of NCE layers L is 3. For the loss weights, we set λ^{glob} = 1, λ^{ins} = 1, λ^{style} = 10, and λ^{img} = 5.
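The total objective of Eq. (16), with the loss weights quoted in Section 4.1, can be read as the following assembly sketch. This is an illustration only; the individual loss terms are placeholders standing in for Eqs. (10)-(15), and the function name is ours, not the released training code.

```python
# Minimal sketch of assembling the full objective of Eq. (16) with the weights
# reported in Sec. 4.1 (lambda_glob = 1, lambda_ins = 1, lambda_style = 10, lambda_img = 5).
LAMBDA = {"glob": 1.0, "ins": 1.0, "style": 10.0, "img": 5.0}

def total_loss(l_gan, l_nce_global, l_nce_ins, l_recon_style, l_recon_img):
    # Each argument is the corresponding scalar loss term from Eqs. (10)-(15).
    return (l_gan
            + LAMBDA["glob"] * l_nce_global
            + LAMBDA["ins"] * l_nce_ins
            + LAMBDA["style"] * l_recon_style
            + LAMBDA["img"] * l_recon_img)
```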
As described above, we implement our framework with the most representative vision Transformer backbone, i.e., ViT [13], but we show in the following that it also works with MLP-Mixer [52]. We will make our code publicly available. 4.2. Experimental Setup We conduct experiments on two standard datasets for instance-aware I2I, the INIT dataset [51] and the KITTI-Cityscapes dataset [9, 16]. The INIT dataset [51] provides street-scene images covering 4 domain categories (sunny, night, rainy, cloudy) with object bounding-box annotations for car, person, and traffic sign. We conduct translation experiments for sunny→night, night→sunny, sunny→rainy, sunny→cloudy, and cloudy→sunny. The KITTI object detection benchmark [16] and the Cityscapes dataset [9] are used to evaluate domain adaptation for object detection on KITTI→Cityscapes. KITTI contains 7,481 images for training and 7,518 images for testing with bounding-box annotations for 6 object classes. The Cityscapes dataset consists of 5,000 images with pixel-level annotations for 30 classes. In this section, we compare our InstaFormer with recent state-of-the-art instance-aware I2I methods, INIT [51], DUNIT [3], and MGUIT [28], and with several unsupervised image-to-image translation methods, CycleGAN [70], UNIT [38], CUT [45], MUNIT [23], and DRIT [35]. 4.3. Experimental Results Qualitative Evaluation. We first conduct qualitative comparisons of our method to CycleGAN [70], UNIT [38], MUNIT [23], DRIT [35], and MGUIT [28] on the sunny→night, night→sunny, sunny→cloudy, and sunny→rainy tasks of the INIT dataset [51]. As shown in Fig. 6, our model generates higher-quality translated results, particularly at object instance regions. In particular, as exemplified by the highlighted regions in Fig. 7, our model captures local regions within multiple instances well, thanks to the Transformer-based architecture that simultaneously considers object instances and the global image, and to the proposed instance-level contrastive learning. Our attention-map visualization also supports this, as illustrated in Fig. 3. Note that MGUIT [28] requires access to its trained memory module at test time, which is an additional burden. Quantitative Evaluation. Following common practice [3, 28, 51], we evaluate InstaFormer with the inception score (IS) [50] and the conditional inception score (CIS) [23]. Since these metrics relate to the diversity of translated images, we also evaluate our method with the Fréchet inception distance (FID) [20] and the structural similarity index measure (SSIM) [58] in terms of the quality of translated images. [Figure 6. Qualitative comparison on the INIT dataset [51]: (top to bottom) sunny→night, night→sunny, and cloudy→sunny results; panels show the input and the results of CycleGAN [70], UNIT [38], MUNIT [23], DRIT [35], MGUIT [28], and InstaFormer. Among the methods, ours preserves object details well and shows realistic results.] [Figure 7. Visual comparison with MGUIT [28]: (a) input, (b) MGUIT [28], and (c) InstaFormer, for sunny→rainy (left) and sunny→cloudy (right).] Note that we evaluate the results under the same settings for all methods. We adopt FID to measure the distance between the distributions of real and synthesized images in a deep feature domain. In addition, since the SSIM index is an error measure computed between the original content images and the synthesized images, we apply it to measure instance-wise structural consistency.
It should be noted that for image translation tasks there often exists some discrepancy between quantitative evaluations and human perception [4]; thus the user study presented below provides a more reliable measure. As shown in Table 1, our InstaFormer outperforms the current state-of-the-art methods in terms of diversity (CIS, IS). Furthermore, in terms of the global distribution and instance-level similarity reported in Table 2, the FID and SSIM scores show that InstaFormer tends to outperform prior methods in almost all comparisons. In particular, the results on SSIM demonstrate that our network is faithfully designed to encode instance-awareness. Our method improves the FID score by a large margin compared to the previous leading method MGUIT [28] on the INIT dataset [51]. [Figure 8. User study results on the INIT dataset [51]. Our method is most preferred for overall quality, semantic consistency, and style relevance, compared to CUT [45], MUNIT [23], DRIT [35], and MGUIT [28].] User Study. We also conducted a user study with 110 participants to evaluate the quality of the synthesized images, asking the following questions on the INIT dataset: "Which do you think has better overall image quality / content more similar to the content image / style more similar to the target domain?", summarized in Fig. 8. Our method ranks first in every case, especially on content relevance and overall preference. Note that since no standard evaluation metric has emerged yet, human evaluation remains an important complementary metric for image translation tasks. 4.4. Ablation Study To validate the effectiveness of each component in our method, we conduct a comprehensive ablation study. In particular, we analyze the effectiveness of the instance-level loss (L^{ins}_{NCE}), the Transformer encoder (T), and AdaIN, as shown in Fig. 9. It should be noted that CUT [45] can be regarded as the setting of InstaFormer without L^{ins}_{NCE}, T, and AdaIN. [Figure 9. Ablation study on different settings: instance-level loss (L^{ins}_{NCE}), Transformer encoder (T), normalization, and another backbone (MLP-Mixer); panels show (a) content image, (b) InstaFormer, (c) MLP-Mixer [52], (d) w/o L^{ins}_{NCE}, (e) w/o L^{ins}_{NCE} and T, (f) CUT [45], and (g) w/o AdaIN. Note that CUT equals the setting w/o L^{ins}_{NCE}, T, and AdaIN.] Table 2. Quantitative evaluation with the FID [20] metric for data distribution and the SSIM [58] index measured at each instance (FID: lower is better; SSIM: higher is better).

                 sunny→night        night→sunny        Average
                 FID↓     SSIM↑     FID↓     SSIM↑     FID↓     SSIM↑
  CUT [45]       75.28    0.698     80.72    0.634     78.00    0.666
  MUNIT [23]     100.32   0.703     98.04    0.631     99.18    0.680
  DRIT [35]      79.59    0.312     99.33    0.266     89.46    0.289
  MGUIT [28]     98.03    0.836     82.17    0.848     90.10    0.842
  InstaFormer    84.72    0.872     71.65    0.818     79.05    0.845

[Figure 10. User study results on the ablation study (image quality, content relevance, and style relevance for MLP-Mixer, w/o L^{ins}_{NCE}, w/o L^{ins}_{NCE} and T, CUT, w/o AdaIN, and InstaFormer).] Without L^{ins}_{NCE}, our self-attention module has limited capability to focus on objects, thus generating images containing blurred objects, as also evaluated in Fig. 3. To validate the effect of T in our model, we conduct ablation experiments by replacing it with Resblocks (without L^{ins}_{NCE} and T). Without Transformers, the model fails to capture global relationships between features.
It is obvious that CUT [45] shows limited results containing artifacts, while InstaFormer dramatically improves object-awareness and quality of the generated image thanks to our architecture. Since AdaIN helps to understand global style by leveraging affine parameters, the result without AdaIN, which is replaced with LayerNorm, shows limited preservation on style with a single-modal output. We also validate our ablation study results on human evaluation. 110 participants are asked to consider three aspects: overall quality, semantic consistency and style consistency, summarized in Fig. 10, where we also validate the superiority of each proposed component. In addition, we conduct experiments using MLPMixer [52]-based aggregator that replaces T consisted of ViT [13] blocks to justify robustness of our framework. Fig. 9(c) shows result examples by MLP-Mixer [52]-based aggregator. Although ViT-based model is slightly better on MLP-Mixer [52]-based model in overall quality in Fig. 9(b), the object instance and style representation are faithfully preserved, which indicates that our method can be adopted in another Transformer backbone. Method Pers Car Truc. Bic mAP DT [26] 28.5 40.7 25.9 29.7 31.2 DAF [23] 39.2 40.2 25.7 48.9 38.5 DARL [34] 46.4 58.7 27.0 49.1 45.3 DAOD [49] 47.3 59.1 28.3 49.6 46.1 DUNIT [3] 60.7 65.1 32.7 57.7 54.1 MGUIT [28] 58.3 68.2 33.4 58.4 54.6 InstaFormer 61.8 69.5 35.3 55.3 55.5 Table 3. Results for domain adaptive detection. We compare the per-class Average Precision for KITTI \u2192CityScape. 4.5. Domain Adaptive Object Detection Additionally, we evaluate our method on the task of unsupervised domain adaptation for object detection. We follow the experimental setup in DUNIT [3]. We used Faster-RCNN [48] as baseline detector. In Table 3, we report the per-class average precisions (AP) for the KITTI\u2192Cityscapes case [9, 16]. Compared to DUNIT [3] and MGUIT [28], our model shows impressive results. It should be noted that we do not access any information about bounding box information on test-time, while DUNIT contains object detection network and MGUIT has access to trained external memory by reading class-aware features. In particular, our model significantly outperforms other methods in almost all classes, which indicates that our suggested instance loss has strength on instance-awareness. 5." + } + ], + "Santabrata Das": [ + { + "url": "http://arxiv.org/abs/2205.07737v1", + "title": "On the origin of core radio emissions from black hole sources in the realm of relativistic shocked accretion flow", + "abstract": "We study the relativistic, inviscid, advective accretion flow around the\nblack holes and investigate a key feature of the accretion flow, namely the\nshock waves. We observe that the shock-induced accretion solutions are\nprevalent and such solutions are commonly obtained for a wide range of the flow\nparameters, such as energy (${\\cal E}$) and angular momentum ($\\lambda$),\naround the black holes of spin value $0\\le a_{\\rm k} < 1$. When the shock is\ndissipative in nature, a part of the accretion energy is released through the\nupper and lower surfaces of the disc at the location of the shock transition.\nWe find that the maximum accretion energies that can be extracted at the\ndissipative shock ($\\Delta{\\cal E}^{\\rm max}$) are $\\sim 1\\%$ and $\\sim 4.4\\%$\nfor Schwarzschild black holes ($a_{\\rm k}\\rightarrow 0$) and Kerr black holes\n($a_{\\rm k}\\rightarrow 1$), respectively. 
Using $\\Delta{\\cal E}^{\\rm max}$, we\ncompute the loss of kinetic power (equivalently shock luminosity, $L_{\\rm\nshock}$) that is enabled to comply with the energy budget for generating\njets/outflows from the jet base ($i.e.$, post-shock flow). We compare $L_{\\rm\nshock}$ with the observed core radio luminosity ($L_R$) of black hole sources\nfor a wide mass range spanning $10$ orders of magnitude with sub-Eddington\naccretion rate and perceive that the present formalism seems to be potentially\nviable to account $L_R$ of $16$ Galactic black hole X-ray binaries (BH-XRBs)\nand $2176$ active galactic nuclei (AGNs). We further aim to address the core\nradio luminosity of intermediate-mass black hole (IMBH) sources and indicate\nthat the present model formalism perhaps adequate to explain core radio\nemission of IMBH sources in the sub-Eddington accretion limit.", + "authors": "Santabrata Das, Anuj Nandi, C. S. Stalin, Suvendu Rakshit, Indu Kalpa Dihingia, Swapnil Singh, Ramiz Aktar, Samik Mitra", + "published": "2022-05-16", + "updated": "2022-05-16", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION The observational evidence of the ejections of matter from the BH-XRBs (Rodriguez, Mirabel, & Marti 1992; Mirabel & Rodr\u00b4 \u0131guez 1994) and AGNs (Jennison & Das Gupta 1953; Junor, Biretta, & Livio 1999) strongly suggests that there possibly exists a viable coupling between the accreting and the out\ufb02owing matters (Feroci et al. 1999; Willott et al. 1999; Ho & Peng 2001; Pahari et al. 2018; Russell et al. 2019a; de Haas et al. \u22c6E-mail: sbdas@iitg.ac.in (SD) \u2020 E-mail: anuj@ursc.gov.in (AN) 2021). Since the ejected matters are in general collimated, they are likely to be originated from the inner region of the accretion disc and therefore, they may reveal the underlying physical processes those are active surrounding the black holes. Further, observational studies indicate that there is a close nexus between the jet launching and the spectral states of the associated black holes (Vadawale et al. 2001; Chakrabarti et al. 2002; Gallo, Fender, & Pooley 2003; Fender, Homan, & Belloni 2009; Radhika et al. 2016; Blandford, Meier, & Readhead 2019). All these \ufb01ndings suggest that the jet generation mechanism seems to be strongly connected with the accretion process around the black holes of di\ufb00erent mass \u00a9 0000 The Authors \f2 Das et al. irrespective to be either BH-XRBs or AGNs. Meanwhile, numerous e\ufb00orts were made both in theoretical (Chakrabarti 1999; Das & Chakrabarti 1999; Blandford & Begelman 1999; Das, Chattopadhyay, & Chakrabarti 2001; McKinney & Blandford 2009; Das et al. 2014; Ressler et al. 2017; Aktar, Nandi, & Das 2019; Okuda et al. 2019) as well as observational fronts to explain the disc-jet symbiosis (Feroci et al. 1999; Brinkmann et al. 2000; Nandi et al. 2001; Fender, Belloni, & Gallo 2004; Miller-Jones et al. 2012; Miller et al. 2012; Sbarrato, Padovani, & Ghisellini 2014; Radhika et al. 2016; Svoboda, Guainazzi, & Merloni 2017; Blandford, Meier, & Readhead 2019). The \ufb01rst ever attempt to examine the correlation between the X-ray (LX) and radio (LR) luminosities for black hole candidate GX 339-4 during its hard states was carried out by Hannikainen et al. (1998), where it was found that LR scales with LX following a power-law. Soon after, Fender (2001) reported that the compact radio emissions are associated with the Low/Hard State (LHS) of several black hole binaries. 
Similar trend was seen to follow by several such BH-XRBs (Corbel et al. 2003; Gallo, Fender, & Pooley 2003). Later, Merloni, Heinz, & di Matteo (2003) revisited this correlation including the low-luminosity AGNs (LLAGNs) and found tight constraints on the correlation described as the Fundamental Plane of the black hole activity in a three-dimensional plane of (LR, LX, MBH), where MBH denotes the mass of the black hole. Needless to mention that the above correlation study was conducted considering the core radio emissions at 5 GHz in all mass scales ranging from stellar mass (\u223c10 M\u2299) to Supermassive (\u223c106\u221210 M\u2299) black holes. To explain the correlation, Heinz & Sunyaev (2003) envisaged a non-linear dependence between the mass of the central black hole and the observed \ufb02ux considering core dominated radio emissions. Subsequently, several group of authors further carried out the similar works to reveal the rigor of various physical processes responsible for such correlation (Falcke, K\u00a8 ording, & Marko\ufb00 2004; K\u00a8 ording, Falcke, & Corbel 2006; Merloni et al. 2006; Wang, Wu, & Kong 2006; Panessa et al. 2007; G\u00a8 ultekin et al. 2009; Plotkin et al. 2012; Corbel et al. 2013; Dong & Wu 2015; Panessa et al. 2015; Nisbet & Best 2016; G\u00a8 ultekin et al. 2019). In the quest of the disc-jet symbiosis, many authors pointed out that the accretion-ejection phenomenon is strongly coupled and advective accreting disc plays an important role in powering the jets/out\ufb02ows (Das & Chakrabarti 1999; Blandford & Begelman 1999; Chattopadhyay, Das, & Chakrabarti 2004; Aktar, Nandi, & Das 2019, and references therein). In reality, an advective accretion \ufb02ow around the black holes is necessarily transonic because of the fact that the infalling matter must satisfy the inner boundary conditions imposed by the event horizon. During accretion, rotating matter experiences centrifugal repulsion against gravity that yields a virtual barrier in the vicinity of the black hole. Eventually, such a barrier triggers the discontinuous transition of the \ufb02ow variables to form shock waves (Landau & Lifshitz 1959; Frank et al. 2002). In reality, the downstream \ufb02ow is compressed and heated up across the shock front that eventually generates additional entropy all the way up to the horizon. Hence, accretion solutions harboring shock waves are naturally preferred according to the 2nd law of thermodynamics (Becker & Kazanas 2001). Previous studies corroborate the presence of hydrodynamic shocks (Fukue 1987; Chakrabarti 1989; Nobuta & Hanawa 1994; Lu et al. 1999; Fukumura & Tsuruta 2004; Chakrabarti & Das 2004; Mo\u00b4 scibrodzka, Das, & Czerny 2006; Das & Czerny 2011; Aktar, Das, & Nandi 2015; Dihingia et al. 2019), and magnetohydrodynamic (MHD) shocks (Koide, Shibata, & Kudoh 1998; Takahashi et al. 2002; Das & Chakrabarti 2007; Fukumura & Kazanas 2007; Takahashi & Takahashi 2010; Sarkar & Das 2016; Fukumura et al. 2016; Okuda et al. 2019; Dihingia et al. 2020) in both BH-XRB and AGN environments. Extensive numerical simulations of the accretion disc independently con\ufb01rm the formation of shocks as well (Ryu, Chakrabarti, & Molteni 1997; Fragile & Blaes 2008; Das et al. 2014; Generozov et al. 2014; Okuda & Das 2015; Sukov\u00b4 a & Janiuk 2015; Okuda et al. 2019; Palit, Janiuk, & Czerny 2020). 
Due to the shock compression, the post-shock \ufb02ow becomes hot and dense that results in a pu\ufb00ed up torus like structure which acts as the e\ufb00ective boundary layer of the black hole and is commonly called as post-shock corona (hereafter PSC). In general, PSC is hot enough (T \u2273109 K) to de\ufb02ect out\ufb02ows which may be further accelerated by the radiative processes active in the disc (Chattopadhyay, Das, & Chakrabarti 2004). Hence, the out\ufb02ows/jets are expected to carry a fraction of the available energy (equivalently core emission) at the PSC, which in general considered as the base of the out\ufb02ows/jets (Chakrabarti 1999; Das et al. 2001; Chattopadhyay & Das 2007; Das & Chattopadhyay 2008; Singh & Chakrabarti 2011; Sarkar & Das 2016). Becker and his collaborators showed that the energy extracted from the accretion \ufb02ow via isothermal shock can be utilized to power the relativistic particles emanating from the disc (Le & Becker 2005; Becker, Das, & Le 2008; Das, Becker, & Le 2009; Lee & Becker 2020). Moreover, magnetohydrodynamical study of the accretion \ufb02ows around the black holes also accounts for possible role of shock as the source of high energy radiation (Nishikawa et al. 2005; Takahashi et al. 2006; Hardee, Mizuno, & Nishikawa 2007; Takahashi & Takahashi 2010). An important generic feature of shock wave is that it is likely to be radiatively e\ufb03cient. For that, shocks become dissipative in nature where an amount of accreting energy is escaped at the shock location through the disc surface resulting the overall reduction of downstream \ufb02ow energy all the way down to the horizon. This energy loss is mainly regulated by a plausible mechanism known as the thermal Comptonization process (Chakrabarti & Titarchuk 1995; Das, Chakrabarti, & Mondal 2010, and references therein). Assuming the energy loss to be proportional to the di\ufb00erence of temperatures across the shock front, the amount of energy dissipation at the shock can be estimated (Das, Chakrabarti, & Mondal 2010), which is same as the accessible energy at the PSC. A fraction of this energy could be utilized to produce and power out\ufb02ows/jets as they are likely to originate from the PSC around the black holes (Chakrabarti 1999; Das, Chattopadhyay, & Chakrabarti 2001; Aktar, Das, & Nandi 2015; Okuda et al. 2019). Being motivated with this appealing energy extraction mechanism, in this paper, we intend to study the stationary, axisymmetric, relativistic, advective accretion \ufb02ow around MNRAS 000, 1\u201315 (0000) \fCore radio emissions from black hole sources 3 the black holes in the realm of general relativity and selfconsistently obtain the global accretion solutions containing dissipative shock waves. Such dissipative shock solution has not yet been explored in the literature for maximally rotating black holes having spin ak \u21921. We quantitatively estimate the amount of the energy released through the upper and lower surface of the disc at the shock location and show how the liberated energy a\ufb00ects the shock dynamics. We also compute the maximum available energy dissipated at the shock for 0 \u2264ak < 1. Utilizing the usable energy available at the PSC, we estimate the loss of kinetic power (which is equivalent to shock luminosity) from the disc (Lshock) which drives the jets/out\ufb02ows. It may be noted that the kinetic power associated with the base of the out\ufb02ows/jets is interpreted as the core radio emission. 
Further, we investigate the observed correlation between radio luminosities and the black hole masses, spanning over ten orders of magnitude in mass for BH-XRBs as well as AGNs. We show that the radio luminosities in both BH-XRBs and AGNs are in general much lower as compared to the possible energy loss at the PSC and therefore, we argue that the dissipative shocks seem to be potentially viable to account the energy budget associated with the core radio luminosities in all mass scales. Considering this, we aim to reveal the missing link between the BH-XRBs and AGNs in connection related to the jets/out\ufb02ows. Employing our model formalism, we estimate the core radio luminosity of the intermediate mass black hole (IMBH) sources in terms of the central mass. The article is organized as follows: In Section 2, we describe our model and mention the governing equations. We present the solution methodology in Section 3. In Section 4, we discuss our results in detail. In Section 5, we discuss the observational implications of our formalism to explain the core radio emissions from black holes in all mass scales. Finally, we present the conclusion in Section 6. 2 ASSUMPTIONS AND GOVERNING MODEL EQUATIONS We consider a steady, geometrically thin, axisymmetric, relativistic, advective accretion disc around a black hole. Throughout the study, we use a unit system as G = MBH = c = 1, where MBH, G and c are the mass of the black hole, gravitational constant and speed of light, respectively. In this unit system, length and angular momentum are expressed in terms of GMBH/c2 and GMBH/c. Since we have considered MBH = 1, the present analysis is applicable for black holes of all mass scales. In this work, we investigate the accretion \ufb02ow around a Kerr black hole and hence, we consider Kerr metric in Boyer-Lindquist coordinates (Boyer & Lindquist 1967) as, ds2 = g\u00b5\u03bddx\u00b5dx\u03bd, = gttdt2 + 2gt\u03c6dtd\u03c6 + grrdr2 + g\u03b8\u03b8d\u03b82 + g\u03c6\u03c6d\u03c62, (1) where x\u00b5 (\u2261t, r, \u03b8, \u03c6) denote coordinates and gtt = \u2212(1 \u2212 2r/\u03a3), gt\u03c6 = \u22122akr sin2 \u03b8/\u03a3, grr = \u03a3/\u2206, g\u03b8\u03b8 = \u03a3 and g\u03c6\u03c6 = A sin2 \u03b8/\u03a3 are the non-zero metric components. Here, A = (r2 + a2 k)2 \u2212\u2206a2 k sin2 \u03b8, \u03a3 = r2 + a2 k cos2 \u03b8, \u2206= r2 \u2212 2r+a2 k, and ak is the black hole spin. In this work, we follow a convention where the four velocities satisfy u\u00b5u\u00b5 = \u22121. Following Dihingia, Das, & Nandi (2019), we obtain the governing equations that describe the accretion \ufb02ow for a geometrically thin accretion disc which are given by, (a) the radial momentum equation: urur ,r + 1 2grr gtt,r gtt + 1 2urur \u0012gtt,r gtt + grrgrr,r \u0013 +u\u03c6utgrr \u0012gt\u03c6 gtt gtt,r \u2212gt\u03c6,r \u0013 + 1 2u\u03c6u\u03c6grr \u0012g\u03c6\u03c6gtt,r gtt \u2212g\u03c6\u03c6,r \u0013 + (grr + urur) e + p p,r = 0. (2) (b) the continuity equation: \u02d9 M = \u22124\u03c0rur\u03c1H, (3) where e is the energy density, p is the local gas pressure, \u02d9 M is the accretion rate treated as global constant, and r stands for radial coordinate. 
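The Kerr metric and the governing equations quoted above, Eqs. (1)-(3), are garbled by text extraction. A cleaned-up LaTeX transcription, reconstructed from the definitions given in the surrounding text (the placement of upper and lower indices on the metric components is inferred and should be checked against the published version), reads:

```latex
% Requires amsmath. Eq. (1): Kerr metric in Boyer-Lindquist coordinates.
\begin{equation}
  ds^{2} = g_{tt}\,dt^{2} + 2g_{t\phi}\,dt\,d\phi + g_{rr}\,dr^{2}
         + g_{\theta\theta}\,d\theta^{2} + g_{\phi\phi}\,d\phi^{2}.
\end{equation}

% Eq. (2): radial momentum equation.
\begin{equation}
\begin{split}
  u^{r}u^{r}_{,r}
  &+ \frac{1}{2}\,g^{rr}\,\frac{g_{tt,r}}{g_{tt}}
   + \frac{1}{2}\,u^{r}u^{r}\left(\frac{g_{tt,r}}{g_{tt}} + g^{rr}g_{rr,r}\right)
   + u^{\phi}u^{t}g^{rr}\left(\frac{g_{t\phi}}{g_{tt}}\,g_{tt,r} - g_{t\phi,r}\right) \\
  &+ \frac{1}{2}\,u^{\phi}u^{\phi}g^{rr}\left(\frac{g_{\phi\phi}\,g_{tt,r}}{g_{tt}} - g_{\phi\phi,r}\right)
   + \frac{g^{rr} + u^{r}u^{r}}{e + p}\,p_{,r} = 0.
\end{split}
\end{equation}

% Eq. (3): continuity equation, with the accretion rate as a global constant.
\begin{equation}
  \dot{M} = -4\pi\, r\, u^{r} \rho\, H.
\end{equation}
```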
Moreover, H refers the local half-thickness of the disc and is given by (Ri\ufb00ert & Herold 1995; Peitz & Appl 1997; Dihingia, Das, & Nandi 2019), H = \u0012pr3 \u03c1F \u00131/2 ; with F = \u03b32 \u03c6 (r2 + a2 k)2 + 2\u2206a2 k (r2 + a2 k)2 \u22122\u2206a2 k , where \u03b32 \u03c6 = 1/(1 \u2212v2 \u03c6) is the bulk azimuthal Lorentz factor and v2 \u03c6 = u\u03c6u\u03c6/(\u2212utut). We de\ufb01ne the radial three velocity in the co-rotating frame as v2 = \u03b32 \u03c6v2 r and thus, we have the bulk radial Lorentz factor \u03b32 v = 1/(1 \u2212v2), where v2 r = urur/(\u2212utut). In order to solve equations (2-3), a closure equation in the form of Equation of State (EoS) describing the relation among the thermodynamical quantities, namely density (\u03c1), pressure (p) and energy density (e) is needed. For that we adopt an EoS for relativistic \ufb02uid which is given by (Chattopadhyay & Ryu 2009), e = \u03c1f \u0010 2 \u2212mp me \u0011, with f = \u0014 1 + \u0398 \u00129\u0398 + 3 3\u0398 + 2 \u0013\u0015 + \u0014mp me + \u0398 \u00129\u0398me + 3mp 3\u0398me + 2mp \u0013\u0015 , where \u0398 (= kBT/mec2) is the dimensionless temperature, me is the mass of electron, and mp is the mass of ion, respectively. According to the relativistic EoS, we express the speed of sound as as = p 2\u0393\u0398/(f + 2\u0398), where \u0393 = (1 + N)/N is the adiabatic index, and N = (1/2)(d f/d\u0398) is the polytropic index of the \ufb02ow (Dihingia, Das, & Nandi 2019). In this work, we use a stationary metric g\u00b5\u03bd which has axial symmetry and this enables us to construct two Killing vector \ufb01elds \u2202t and \u2202\u03c6 that provide two conserved quantities for the \ufb02uid motion in this gravitational \ufb01eld and are given by, hu\u03c6 = constant; \u2212hut = constant = E, (4) where h [= (e+p)/\u03c1] is the speci\ufb01c enthalpy of the \ufb02uid, E is the relativistic Bernoulli constant (i.e., the speci\ufb01c energy of the \ufb02ow). Here, ut = \u2212\u03b3v\u03b3\u03c6/ p \u03bbgt\u03c6 \u2212gtt, where \u03bb (= \u2212u\u03c6/ut) denotes the conserved speci\ufb01c angular momentum. MNRAS 000, 1\u201315 (0000) \f4 Das et al. 3 SOLUTION METHODOLOGY We simplify equations (2) and (3) to obtain the wind equation in the co-rotating frame as, dv dr = N D , (5) where the numerator N is given by, N = \u2212 1 r(r \u22122) + \u03b32 \u03c6\u03bb 2ak r2\u2206+ \u03b32 \u03c6 4a2 k r2\u2206(r \u22122) \u2212\u03b32 \u03c6\u2126\u03bb2a2 k \u2212r2(r \u22123) r2\u2206 + 2ak\u03b32 \u03c6\u2126r2(r \u22123) \u22122a2 k r2\u2206(r \u22122) + 2a2 s \u0393 + 1 \u0014\u0000r \u2212a2 k \u0001 r\u2206 + 5 2r \u22121 2F dF dr \u0015 , (6) and the denominator D is given by, D = \u03b32 v \u0014 v \u2212 2a2 s v(\u0393 + 1) \u0015 , (7) where, \u2126= u\u03c6/ut is the angular velocity of the \ufb02ow. Following Dihingia, Das, & Nandi (2019), we obtain the temperature gradient as, d\u0398 dr = \u2212 2\u0398 2N + 1 \u0014\u0000r \u2212a2 k \u0001 r\u2206 + \u03b32 v v dv dr + 5 2r \u22121 2F dF dr \u0015 . (8) In order to obtain the accretion solution around the black hole, we solve equations (5-8) following the methodology described in Dihingia, Das, & Nandi (2019). While doing this, we speci\ufb01cally con\ufb01ne ourselves to those accretion solutions that harbor standing shocks (Fukue 1987; Chakrabarti 1989; Yang & Kafatos 1995; Lu et al. 1999; Chakrabarti & Das 2004; Fukumura & Tsuruta 2004; Das 2007; Chattopadhyay & Kumar 2016; Sarkar & Das 2016; Dihingia et al. 2019). 
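The EoS-derived quantities quoted in this section, namely f(Θ), the polytropic index N = (1/2) df/dΘ, the adiabatic index Γ = 1 + 1/N, and the sound speed a_s, lend themselves to a short numerical check. The sketch below is illustrative only (our own function names, and a numerical derivative in place of an analytic df/dΘ), written directly from the expressions given in the text.

```python
# Minimal sketch (not the authors' code) of the relativistic EoS quantities:
# f(Theta), N = (1/2) df/dTheta, Gamma = 1 + 1/N, and
# a_s = sqrt(2*Gamma*Theta / (f + 2*Theta)) in units of c.
import numpy as np

MP_OVER_ME = 1836.15267  # proton-to-electron mass ratio m_p / m_e


def f_of_theta(theta):
    """f(Theta) for a single-temperature electron-proton flow, as written in the text."""
    electron = 1.0 + theta * (9.0 * theta + 3.0) / (3.0 * theta + 2.0)
    proton = MP_OVER_ME + theta * (9.0 * theta + 3.0 * MP_OVER_ME) / (3.0 * theta + 2.0 * MP_OVER_ME)
    return electron + proton


def eos_quantities(theta, h=1e-8):
    f = f_of_theta(theta)
    dfdtheta = (f_of_theta(theta + h) - f_of_theta(theta - h)) / (2.0 * h)  # numerical df/dTheta
    N = 0.5 * dfdtheta                 # polytropic index
    Gamma = 1.0 + 1.0 / N              # adiabatic index
    a_s = np.sqrt(2.0 * Gamma * theta / (f + 2.0 * theta))  # sound speed (units of c)
    return N, Gamma, a_s


# Sanity check of the limits: Gamma -> 5/3 in the non-relativistic limit and
# -> 4/3 once both species are relativistic (Theta >> m_p/m_e).
for theta in (1e-4, 1e6):
    N, Gamma, a_s = eos_quantities(theta)
    print(f"Theta = {theta:g}:  N = {N:.3f},  Gamma = {Gamma:.3f},  a_s/c = {a_s:.3e}")
```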
In general, during the course of accretion, the rotating infalling matter experiences centrifugal barrier at the vicinity of the black hole. Because of this, matter slows down and piles up causing the accumulation of matter around the black hole. This process continues until the local density of matter attains its critical value and once it is crossed, centrifugal barrier triggers the transition of the \ufb02ow variables in the form of shock waves. In reality, shock induced global accretion solutions are potentially favored over the shock free solutions as the entropy content of the former type solution is always higher (Das, Chattopadhyay, & Chakrabarti 2001; Becker & Kazanas 2001). At the shock, the kinetic energy of the supersonic pre-shock \ufb02ow is converted into thermal energy and hence, post-shock \ufb02ow becomes hot and behaves like a Compton corona (Chakrabarti & Titarchuk 1995; Iyer, Nandi, & Mandal 2015; Nandi et al. 2018; Aktar, Nandi, & Das 2019). As there exists a temperature gradient across the shock front, it enables a fraction of the available thermal energy to dissipate away through the disc surface. Evidently, the energy accessible at the postshock \ufb02ow is same as the available energy dissipated at the shock. A part of this energy is utilized in the form of high energy radiations, namely the gamma ray and the X-ray emissions, and the rest is used for the jet/out\ufb02ow generation as they are expected to be launched from the post-shock region (Chakrabarti 1999; Becker, Das, & Le 2008; Das, Becker, & Le 2009; Becker, Das, & Le 2011; Sarkar & Das 2016). These jets/out\ufb02ows further consume some energy simultaneously for their thermodynamical expansion and for the work done against gravity. The remaining energy is then utilized to power the jets/out\ufb02ows. It may be noted that for radiatively ine\ufb03cient adiabatic accretion \ufb02ow, the speci\ufb01c energy in the pre-shock as well as post-shock \ufb02ows remains conserved. In reality, the energy \ufb02ux across the shock front becomes uniform when the shock width is considered to be very thin and the shock is nondissipative (Chakrabarti 1989; Frank et al. 2002) in nature. However, in this study, we focus on the dissipative shocks where a part of the accreting energy is released vertically at the shock causing a reduction of speci\ufb01c energy in the postshock \ufb02ow. The mechanism by which the accreting energy could be dissipated at the shock is primarily governed by the thermal Comptonization process (Chakrabarti & Titarchuk 1995) and because of this, the temperature in the postshock region is decreased. Considering the above scenario, we model the loss of energy (\u2206E) to be proportional to the temperature di\ufb00erence across the shock front and \u2206E is estimated as (Das, Chakrabarti, & Mondal 2010; Singh & Chakrabarti 2011; Sarkar & Das 2016, and references therein), \u2206E = \u03b2(N+a2 s+ \u2212N\u2212a2 s\u2212), (9) where \u03b2 is the proportionality constant that accounts the fraction of the accessible thermal energy across the shock front. Here, the quantities expressed using the subscripts \u2018\u2212\u2019 and \u2018+\u2019 refer their immediate pre-shock and post-shock values, respectively. Needless to mention that because of the energy dissipation at the shock, the post-shock \ufb02ow energy (E+) can be expressed as E+ = E\u2212\u2212\u2206E, where E\u2212denotes the energy of the pre-shock \ufb02ow. 
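Since the sub- and superscripts of Eq. (9) are flattened by extraction, it may help to write the energy-dissipation prescription out explicitly (transcribed from the text, not an independent derivation):

```latex
% Eq. (9): energy dissipated at the shock and the resulting post-shock energy,
% where N_{+-} and a_{s+-} are the polytropic index and sound speed immediately
% after ("+") and before ("-") the shock front.
\begin{equation}
  \Delta\mathcal{E} = \beta\left(N_{+}\,a_{s+}^{2} - N_{-}\,a_{s-}^{2}\right),
  \qquad
  \mathcal{E}_{+} = \mathcal{E}_{-} - \Delta\mathcal{E}.
\end{equation}
```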
In this work, we treat E\u2212 and E+ as free parameters and applying them, we calculate \u2206E from the shocked accretion solutions. Needless to mention that the post-shock \ufb02ow may become bound due to the energy dissipation (\u2206E > 0) across the shock front, however, all the solutions under consideration are chosen as unbound in the pre-shock domain. With this, we calculate \u03b2 using equation (9) for shocked accretion solutions that lies in the range 0 < \u03b2 < 1. It is noteworthy that in this work, the global accretion solutions containing shocks are independent of the accretion rate as radiative cooling processes are not taken into account for simplicity. This eventually imposes limitations in explaining the physical states of the accretion \ufb02ow although the model solutions are su\ufb03ce to characterize the accretion \ufb02ow kinematics in terms of the conserved quantities, namely energy and angular momentum of the \ufb02ow. Now, based on the above insight on the energy budget, the total usable energy available in the post-shock \ufb02ow is \u2206E. Keeping this in mind, we calculate the loss of kinetic power by the disc corresponding to \u2206E in terms of the observable quantities and obtain the shock luminosity (Le & Becker 2004, 2005) as, Lshock = \u02d9 M \u00d7 \u2206E \u00d7 c2 erg s\u22121, (10) where Lshock is the shock luminosity and \u02d9 M is the accretion rate. With this, we compute Lshock considering the dissipative shock mechanism and compare it with core radio luminosity observed from the black hole sources. Indeed, it is clear from equation (10) that Lshock may be degenerate due MNRAS 000, 1\u201315 (0000) \fCore radio emissions from black hole sources 5 Figure 1. Plot of Mach number (M = v/as) as function of radial coordinate (r). Here, the \ufb02ow parameters are chosen as E\u2212= 1.002 and \u03bb = 2.01, and black hole spin is considered as ak = 0.99. Results depicted with solid (purple), dashed (orange) and dotted (green) curves are obtained for \u2206E = 0, 0.0025, and 0.0167, respectively. At the inset, inner critical points (rin) are zoomed which are shown using open circle, open triangle and a cross whereas outer critical point (rout) is shown using \ufb01lled circle. Vertical arrows represent the locations of the shock transition (rs) and the arrows indicate the overall direction of \ufb02ow motion towards the black hole. See text for details. to the di\ufb00erent combinations of \u02d9 M and \u2206E. In this work, we choose the spin value of the black hole in the range 0 \u2264ak \u22640.99. Moreover, in order to represent the LHS of the black hole sources (as \u2018compact\u2019 jets are commonly observed in the LHS (Fender, Belloni, & Gallo 2004)), we consider the value of accretion rate in the range \u02d9 m = \u02d9 M/ \u02d9 MEdd = 10\u22125 \u22121.0 (Wu & Liu 2004; Athulya M. et al. 2021), where \u02d9 MEdd is the Eddington mass accretion rate and is given by \u02d9 MEdd = 1.39 \u00d7 1017 (MBH/M\u2299) g sec\u22121. Furthermore, in order to examine the robustness of our model formalism, we vary the mass of the central black hole in a wide range starting form stellar mass to Supermassive scale, and \ufb01nally compare the results with observations. 4 RESULTS In Fig. 1, we depict the typical accretion solutions around a rotating black hole of spin ak = 0.99. In the \ufb01gure, we plot the variation of Mach number (M = v/a) as function of radial coordinate (r). 
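Equation (10), together with the Eddington rate quoted in this section, fixes the numbers completely, so a short worked example may be useful. The script below is purely illustrative (our own function names; the chosen ṁ and ΔE are sample values inside the ranges quoted in the text), not the authors' code.

```python
# Minimal sketch (not the authors' code) of Eq. (10): L_shock = Mdot * DeltaE * c^2,
# with Mdot_Edd = 1.39e17 (M_BH / M_sun) g/s as quoted in the text.
C_CM_S = 2.99792458e10  # speed of light [cm/s]


def mdot_edd(m_bh_msun):
    """Eddington accretion rate [g/s] for a black hole of m_bh_msun solar masses."""
    return 1.39e17 * m_bh_msun


def shock_luminosity(m_bh_msun, mdot_ratio, delta_E):
    """L_shock [erg/s]; delta_E is the dimensionless energy loss (in units of c^2)."""
    mdot = mdot_ratio * mdot_edd(m_bh_msun)  # accretion rate [g/s]
    return mdot * delta_E * C_CM_S**2


# Example: a 10 M_sun black hole accreting at 10% of Eddington with 1% of the flow
# energy dissipated at the shock gives L_shock ~ 1.2e36 erg/s; a 1e8 M_sun AGN at
# the same mdot_ratio and delta_E scales linearly to ~1.2e43 erg/s.
print(f"{shock_luminosity(10.0, 0.1, 0.01):.2e} erg/s")
```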
Here, the \ufb02ow starts its journey from the outer edge of the disc at redge = 5000 subsonically with energy E\u2212= 1.002 and angular momentum \u03bb = 2.01. As the \ufb02ow moves inward, it gains radial velocity due to the in\ufb02uence of black hole gravity and smoothly makes sonic state transition while crossing the outer critical point at rout = 117.5285. At the supersonic regime, rotating \ufb02ow experiences centrifugal barrier against gravity that causes the accumulation of matter in the vicinity of the black hole. Because of this, matter locally piles up resulting the increase of density. Undoubtedly, this process is not continued inde\ufb01nitely due to the fact that at the critical limit of density, centrifugal barrier triggers the discontinuous transition in the \ufb02ow variables in the form of shock waves (Fukue 1987; Frank et al. 2002). At the shock, supersonic \ufb02ow jumps into the subsonic branch where all the preshock kinetic energy of the \ufb02ow is converted into thermal energy. In this case, the \ufb02ow experiences shock transition at rs = 50.47. Just after the shock transition, post-shock \ufb02ow momentarily slows down, however gradually picks up its velocity and ultimately enters into the black hole supersonically after crossing the inner critical point smoothly at rin = 1.4031. This global shocked accretion solution is plotted using solid (purple) curve where arrows indicate the direction of the \ufb02ow motion and the vertical arrow indicates the location of the shock transition. Next, when a part of the \ufb02ow energy (\u2206E) is radiated away through the disc surface at the shock, the post-shock thermal pressure is reduced and the shock front is being pushed further towards the horizon. Evidently, the shock settles down at a smaller radius in order to maintain the pressure balance across the shock front. Following this, when \u2206E = 0.0025 is chosen, we obtain rs = 24.47 and rin = 1.4047, and the corresponding solution is plotted using the dashed curve (orange). When the energy dissipation is monotonically increased, for the same set of \ufb02ow parameters, we \ufb01nd the closest standing shock location at rs = 8.22 for \u2206E = 0.0167. This solution is presented using dotted curve (green) where rin = 1.4147. For the purpose of clarity, in the inset, we zoom the inner critical point locations as they are closely separated. In the \ufb01gure, critical points and the energy dissipation parameters are marked. What is more is that following Chakrabarti & Molteni (1993); Yang & Kafatos (1995); Lu et al. (1997); Fukumura & Kazanas (2007), the stability of the standing shock is examined, where we vary the shock front radially by an in\ufb01nitesimally small amount in order to perturb the radial momentum \ufb02ux density (T rr, Dihingia, Das, & Nandi (2019)). When shock is dynamically stable, it must come back to its original position and the criteria for stable shock is given by, \u03ba(rs) = \u0010 dT rr 2 dr \u2212 dT rr 1 dr \u0011 < 0 (Fukumura & Kazanas 2007). Invoking this criteria, we ascertain that all the standing shocks presented in Fig. 1 are stable. For the same shocked accretion solutions, we compute the various shock properties (see Das 2007; Das, Becker, & Le 2009), namely, shock location (rs), compression ratio (R), shock strength (S), scale height ratio (H+/H\u2212), and present them in Table 1. In reality, as \u2206E increases, shock settles down at the lower radii (Fig. 
1) and hence, the temperature of PSC increases due to enhanced shock compression. Moreover, since the disk thickness is largely depends on the local temperature, the scale height ratio increases with the increase of \u2206E yielding the PSC to be more pu\ufb00ed up for stronger shock. Accordingly, we infer that geometrically thick PSC seems to render higher energy dissipation (equivalently Lshock) that possibly leads to produce higher core radio luminosity. We examine the entire range of E+ and \u03bb that provides the global transonic shocked accretion solution around MNRAS 000, 1\u201315 (0000) \f6 Das et al. Table 1. Various shock properties computed for solutions presented in Fig. 1, where \u03bb = 2.01, E\u2212= 1.002 are chosen. See the text for details. \u2206E rout rin rs R S H+/H\u2212 0.0 117.5285 1.4031 50.47 1.58 1.75 1.11 0.0025 117.5285 1.4047 24.47 2.43 2.86 1.20 0.0167 117.5285 1.4147 8.22 3.67 4.35 1.26 Note: \u2206E is the energy loss, rout is the outer critical point, rin is the inner critical point, rs is the shock location, R is the compression ratio, S is the shock strength, and H+/H\u2212refers scale height ratio. Figure 2. Plot of parameter space in \u03bb \u2212E+ plane that admitted shock induced global accretion solutions around the black holes. For \ufb01xed ak = 0.99, we obtain shocked accretion solution passing through the inner critical point and having the minimum energy Emin + . The maximum amount of energy is lost by the \ufb02ow via the disc surface at the shock for Emin + . See text for details. a rapidly rotating black hole of spin value ak = 0.99. The obtained results are presented in Fig. 2, where the e\ufb00ective region bounded by the solid curve (in red) in \u03bb \u2212E+ plane provides the shock solutions for \u2206E = 0. Since energy dissipation at shock is restricted, the energy across the shock front remains same that yields E\u2212= E+. When energy dissipation at shock is allowed (i.e., \u2206E > 0), we have E+ < E\u2212 irrespective to the choice of \u03bb values. We examine all possible range of \u2206E that admits shock solution and separate the domain of the parameter space in \u03bb\u2212E+ plane using dashed curve (in blue). Further, we vary \u03bb freely and calculate the minimum \ufb02ow energy with which \ufb02ow enters into the black hole after the shock transition. In absence of any energy dissipation between the shock radius (rs) and horizon (rh), i.e., in the range rh < r < rs, this minimum energy is identical to the minimum energy of the post-shock \ufb02ow (E+) and we denote it as E min + . Needless to mention that E min + strongly depends on the spin of the black hole (ak) marked in the Figure 3. Plot of maximum available energy across the shock front (\u2206Emax) as function of the black hole spin (ak). Obtained results are depicted by the \ufb01lled circles in orange color which are joined by the green lines. See text for details. \ufb01gure. It is obvious that for a given ak, the maximum energy that can be dissipated at the shock is calculated as \u2206E max = E\u2212\u2212E min + . Subsequently, we freely vary all the input \ufb02ow parameters, namely E\u2212and \u03bb, and calculate \u2206E max for a given ak. The obtained results are presented in Fig. 3, where we depict the variation of \u2206E max as function ak. 
We \ufb01nd that around 1% of the \ufb02ow energy can be extracted at the dissipative shock for Schwarszchild black hole (weakly rotating, ak \u21920) and about 4.4% of the \ufb02ow energy can be extracted for Kerr black hole (maximally rotating, ak \u21921). In the next section, we use equation (10) to estimate the shock luminosity (Lshock) (equivalent to the kinetic power released by the disc) for black hole sources that include both BH-XRBs and AGNs. While doing this, the jets/out\ufb02ows are considered to be compact as well as core dominated surrounding the central black holes. Further, we compare Lshock with the observed core radio luminosity (LR) of both BHXRBs and AGNs. 5 ASTROPHYSICAL IMPLICATIONS In this work, we focus on the core radio emission at \u223c5 GHz from the black hole sources in all mass scales starting from BH-XRBs to AGNs. We compile the mass, distance, and core radio emission data of the large number of sample sources from the literature. 5.1 Source Selection: BH-XRBs We consider 16 BH-XRBs whose mass and distance are well constrained, and the radio observations of these sources in LHS are readily available (see Table 2). The accretion in MNRAS 000, 1\u201315 (0000) \fCore radio emissions from black hole sources 7 Table 2. Physical and observable parameters of BH-XRBs. Core radio luminosities (LR) are complied from the literature for several sources, if available. For the rest, LR is calculated using source distance (D), observation frequency (\u03bd) and core radio \ufb02ux (F5) values using LR = 4\u03c0\u03bdF5D2, where D refers source distance. Source Name Mass Distance Spin \u03bd Radio Flux Core radio luminosity References (MBH) (D) (ak) (F5) at 5 GHz (LR) (in M\u2299) (in kpc) (in GHz) (in mJy) (in 1030 erg s\u22121) 4U 1543-47 9.42 \u00b1 0.97 7.5 \u00b1 1.0 \u223c0.85 4.8 3.18 \u22124.00 1.03 \u22121.29 1, 2, 3, 44 Cyg X-1 14.8 \u00b1 1.0 1.86 \u00b1 0.12 > 0.99 15 6.00 \u221219.60 0.124 \u22120.406 4, 5, 3, 45ab GRO J1655-40 6.3 \u00b1 0.25 3.2 \u00b1 0.2 \u223c0.98 4.86 1.46 \u22122.01 0.087 \u22120.120 6, 7, 3, 46 GRS 1915+105 12.4+2.0 \u22121.8 8.6+2.0 \u22121.6 \u223c0.99 5.0 25.75 \u2212198.77 11.396 \u221287.967 8, 8, 3, 47 XTE J1118+480 7.1 \u00b1 1.3 1.8 \u00b1 0.6 \u2014 15 6.2 \u22127.5 0.069 \u22120.084 9, 9, 3 XTE J1550-564 9.1 \u00b1 0.6 4.4 \u00b1 0.5 \u223c0.78 4.8 0.88 \u22127.45 0.098 \u22120.829 10, 10, 3, 48 Cyg X-3 2.4+2.1 \u22121.1 \u2020 7.4 \u00b1 1.1 \u2014 \u2014 \u2014 4.36 \u2212269.15 11, 12, 13 GX 339-4 10.08+1.81 \u22121.80 8.4 \u00b1 0.9 > 0.97 \u2014 \u2014 0.00178 \u22120.8128 14, 15, 13, 49 XTE J1859+226 6.55 \u00b1 1.35 6 \u221211 \u223c0.6 \u2014 \u2014 0.151 \u22120.199 16, 17ab, 13, 50 H 1743-322 11.21+1.65 \u22121.96 8.5 \u00b1 0.8 < 0.7 4.8 0.12 \u22122.37 0.05 \u22120.984 18, 19, 20, 19 IGR J17091-3624 10.6 \u221212.3 11 \u221217 < 0.27 5.5 0.17 \u22122.41 0.18 \u22122.52 21, 22, 22, 51 4U 1630-472 10.0 \u00b1 0.1 11.5 \u00b1 0.3 \u223c0.98 4.86 \u22a1 1.4 \u00b1 0.3 1.08 \u22121.98 23, 24, 25, 52ab 4.80 \u22a0 2.6 \u00b1 0.3 MAXI J1535-571 6.47+1.36 \u22121.33 4.1+0.6 \u22120.5 \u223c0.99 5.5 0.18 \u2212377.20 0.02 \u221241.74 26, 27, 28, 53 MAXI J1348-630 11 \u00b1 2 2.2+0.5 \u22120.6 \u2014 5.5 3.4 \u00b1 0.2 0.108 29, 30, 31 MAXI J1820+070 5.73 \u22128.34 2.96 \u00b1 0.33 \u223c0.2 4.7 62 \u00b1 4 3.06 32, 33, 34, 54 V404 Cyg 9.0+0.2 \u22120.6 2.39 \u00b1 0.14 > 0.92 4.98 \u229e 0.141 \u22120.680 0.005 \u22120.023 35, 36, 37, 55 Swift J1357.2-0933 > 9.3 2.3 \u22126.3 \u2014 5.5 \u2014 0.0043 \u22120.033 38, 
39ab, 40 MAXI J0637-430 8.0\u2020 10.0 \u2014 5.5 0.066 \u00b1 0.015 0.043 41, 42, 43 References: 1: Orosz (2003), 2: Park et al. (2004), 3: G\u00a8 ultekin et al. (2019), 4: Orosz et al. (2011a), 5: Reid et al. (2011), 6: Greene, Bailyn, & Orosz (2001), 7: Jonker & Nelemans (2004), 8: Reid et al. (2014), 9: McClintock et al. (2001), 10: Orosz et al. (2011b), 11: Zdziarski, Mikolajewska, & Belczynski (2013), 12: McCollough, Corrales, & Dunham (2016), 13: Merloni, Heinz, & di Matteo (2003), 14: Sreehari et al. (2019), 15: Parker et al. (2016), 16: Nandi et al. (2018), 17a: Hynes et al. (2002), 17b: Zurita et al. (2002), 18: Molla et al. (2017), 19: Steiner, McClintock, & Reid (2012), 20: Corbel et al. (2005), 21: Iyer, Nandi, & Mandal (2015), 22: Rodriguez et al. (2011), 23: Sei\ufb01na, Titarchuk, & Shaposhnikov (2014), 24: Kalemci, Maccarone, & Tomsick (2018), 25: Hjellming et al. (1999), 26: Sreehari et al. (2019), 27: Chauhan et al. (2019), 28: Russell et al. (2019a), 29: Lamer et al. (2020), 30: Chauhan et al. (2021), 31: Russell et al. (2019b), 32: Torres et al. (2020), 33: Atri et al. (2020), 34: Trushkin et al. (2018), 35: Khargharia, Froning, & Robinson (2010), 36: Miller-Jones et al. (2009), 37: Plotkin et al. (2019), 38: Corral-Santana et al. (2016), 39a: Mata S\u00b4 anchez et al. (2015), 39b: Shahbaz et al. (2013), 40: Paice et al. (2019), 41: Baby et al. (2021), 42: Tetarenko et al. (2021), 43: Russell et al. (2019), 44: Shafee et al. (2006), 45a: Zhao et al. (2021), 45b: Kushwaha, Agrawal, & Nandi (2021), 46: Stuchl\u00b4 \u0131k & Kolo\u02c7 s (2016), 47: Sreehari et al. (2020), 48: Miller et al. (2009), 49: Ludlam, Miller, & Cackett (2015), 50: Steiner, McClintock, & Narayan (2013), 51: Wang et al. (2018), 52a: King et al. (2014), 52b: Pahari et al. (2018), 53: Miller et al. (2018), 54: Guan et al. (2021), 55: Walton et al. (2017) \u2020: Mass estimate of these sources are uncertain, till date. \u22a1: VLA observation; \u22a0: ATCA observation; \u229e: VLBA observation in 2014. Note: References for black hole mass (MBH), distance (D), F\u03bd or LR, and spin (ak) are given in column 8 in sequential order. Data are complied based on the recent \ufb01ndings (see also Merloni, Heinz, & di Matteo (2003); G\u00a8 ultekin et al. (2019)). LHS (Belloni et al. 2005; Nandi et al. 2012) is generally coupled with the core radio emission (Fender, Belloni, & Gallo 2004) from the sources. Because of this, we include the observation of compact radio emission at \u223c5 GHz to calculate the radio luminosity while excluding the transient radio emissions (i.e., relativistic jets) commonly observed in soft-intermediate state (SIMS) (see Fender, Belloni, & Gallo 2004; Fender, Homan, & Belloni 2009; Radhika & Nandi 2014; Radhika et al. 2016, and references therein). It may be noted that the core radio luminosity of some of these sources are observed at di\ufb00erent frequency bands (such as 15 GHz). For Cyg X-1, 15 GHz radio luminosity was converted to 5 GHz radio luminosity assuming a \ufb02at spectrum (Fender et al. 2000), whereas for XTE J1118+480, we convert the 15 GHz radio luminosity to 5 GHz radio luminosity using a radio spectral index of \u03b1 = +0.5 considering F\u03bd = \u03bd\u03b1 (Fender et al. 2001). For these sources, we calculate 5 GHz radio luminosity using the relation LR \u2261\u03bdL\u03bd = MNRAS 000, 1\u201315 (0000) \f8 Das et al. 4\u03c0\u03bdF5D2 (see G\u00a8 ultekin et al. 
2019), where \u03bd \u223c5 GHz, F5 are the \u223c5 GHz \ufb02ux, and D is the distance of the source, respectively. It may be noted that our BH-XRB source samples di\ufb00er from Merloni, Heinz, & di Matteo (2003) and G\u00a8 ultekin et al. (2019) because of the fact that we use most recent and re\ufb01ned estimates of mass and distance of the sources under consideration, and accordingly we calculate their radio luminosity. Further, we exclude the source LS 5039 from Table 2 as it is recently identi\ufb01ed as NS-Plusar source (Yoneda et al. 2020). In Table 2, we summarize the details of the selected sources, where columns 1 \u22128 represent source name, mass, distance, spin, observation frequency (\u03bd), radio \ufb02ux (F5), core radio luminosity (LR) and relevant references, respectively. 5.2 Source Selection: SMBH in AGN We consider a group of AGN sources following G\u00a8 ultekin et al. (2019) (hereafter G19) that includes both Seyferts and LLAGNs. For these sources, G\u00a8 ultekin et al. (2019) carried out the image analysis to extract the core radio \ufb02ux (F\u03bd) that eventually renders their core radio luminosity (LR). Here, we adopt a source selection criteria as (a) MBH > 105M\u2299and (b) source observations at radio frequency \u03bd \u223c5 GHz, that all together yields 61 source samples. Subsequently, we calculate the core radio luminosity of these sources as LR = 4\u03c0\u03bdF5D2, where F5 denote the core radio luminosity at \u03bd = 5 GHz frequency and obtain LR = 1032.5 \u22121040.8 erg s\u22121. Next, we use the catalog of Rakshit, Stalin, & Kotilainen (2020) (hereafter R20) to include Supermassive black holes (SMBHs) in our sample sources. The R20 catalog contains spectral properties of \u223c500, 000 quasars up to redshift factor (z) \u223c5 covering a wide range of black hole masses 107 \u22121010M\u2299. The mass of the SMBHs in the catalog is obtained by employing the Virial relation where the size of the broad line region can be estimated from the AGN luminosity and the velocity of the cloud can be calculated using the width of the emission line. Accordingly, the corresponding relation for the estimation of SMBH mass is given by (Kaspi et al. 2000), log \u0012MBH M\u2299 \u0013 = a + b log \u0012 \u03bbL\u03bb 1044erg s\u22121 \u0013 + 2 log \u0012 \u2206V km s\u22121 \u0013 , (11) where L\u03bb is the monochromatic continuum luminosity at wavelength \u03bb and \u2206V is the FWHM of the emission line. The coe\ufb03cients a and b are empirically calibrated based on the size-luminosity relation either from the reverberation mapping observations (Kaspi et al. 2000) or internally calibrated based on the di\ufb00erent emission lines (Vestergaard & Peterson 2006). Depending on the redshift, various combinations of emission line (H\u03b2, Mg II, C IV) and continuum luminosity (L5100, L3000, L1350) are used. A detailed description of the mass measurement method is described in R20. The majority of AGN in R20 sample have MBH > 108M\u2299. As the low-luminosity AGNs (LLAGNs) with mass MBH < 107M\u2299are not included in R20 sample, we explore the low-luminosity AGN catalog of Liu et al. (2019) (hereafter L19). It may be noted that in L19, the black hole mass is estimated by taking the average of the two masses obtained independently from the H\u03b1 and H\u03b2 lines. In order to \ufb01nd the radio-counterpart and to estimate the associated radio luminosity, we cross-match both catalogs (i.e., L19 and R20) with 1.4 GHz FIRST survey (White et al. 
1997) within a search radius of 2 arc sec. The radio-detection fraction is 3.4% for R20 and 11.7% for L19 AGN samples. We note that the present analysis deals with core radio emissions of black hole sources and many AGNs show powerful relativistic jets which could be launched due to Blandford-Znajek (BZ) process (Blandford & Znajek 1977) instead of accretion \ufb02ow. Meanwhile, Rusinek et al. (2020) reported that the jet production e\ufb03ciency of radio loud AGNs (RL-AGNs) is 10% of the accretion disc radiative e\ufb03ciency, while this is only 0.02% in the case of radio quiet AGNs (RQ-AGNs) suggesting that the collimated, relativistic jets ought to be produced by the BZ mechanism rather than the accretion \ufb02ow. Subsequently, we calculate the radio-loudness parameter (R, de\ufb01ned by the ratio of FIRST 1.4 GHz to optical g-band \ufb02ux) and restrict our source samples for radio-quiet (R < 19; see Komossa et al. 2006) AGNs. As some radio sources are present in both catalogs (i.e., L19 and R20), we exclude common sources from R20. With this, we \ufb01nd 1207 and 911 radio-quiet AGNs in the R20 and L19 AGN sample, respectively. Accordingly, the \ufb01nal sample contains 2118 AGNs with black hole mass in the range 105.1 < (MBH/M\u2299) < 1010.3. The FIRST catalog provides 1.4 GHz integrated radio \ufb02ux (F1.4), which is further converted to the luminosity L1.4 (in watt/Hz) at 1.4 GHz using the following equation as, L1.4 = 4\u03c0 \u00d7 10\u22127 \u00d7 D2 L (1 + z)(1+\u03b1) \u00d7 F1.4, (12a) where we set the spectral index \u03b1 = \u22120.8 considering F\u03bd = \u03bd\u03b1 (Condon 1992) and DL refers the luminosity distance. Thereafter, we obtain the core radio luminosity LR at 5 GHz adopting the relation (Yuan et al. 2018) given by, log LR = (20.9 \u00b1 2.1) + (0.77 \u00b1 0.08) log L1.4. (12b) where LR is expressed in erg s\u22121. The radio luminosity at 5 GHz of our AGN sample has a range of LR = 1036.2 \u22121041.2 erg s\u22121. Following Rusinek et al. (2020), we further calculate the mean jet production e\ufb03ciency of our sample and it is found to be only \u223c0.02% compared to the disc radiative e\ufb03ciency. Such a low jet production e\ufb03ciency suggests that the production of the jets in our sample is possibly due to accretion \ufb02ow rather than the BZ process. Moreover, we calculate the 0.2 \u221212 keV X-ray luminosity (Lx) from the XMM-Newton data (Rosen et al. 2016, 3XMM-DR7) for 119 AGNs having both X-ray and radio \ufb02ux measurements. The Lx ranges from 1 \u00d7 1041 \u22122 \u00d7 1046 erg s\u22121 with a median of 1044 erg s\u22121 . The ratio of X-ray (0.2 \u221212 KeV) luminosity to radio luminosity (LR at 1.4 GHz) has a range of Lx/LR \u223c1.5 \u00d7 102 \u22126.6 \u00d7 105 with a median of 2.6 \u00d7 104. 5.3 Comparison of Lshock with Observed Core Radio Emission (LR) of BH-XRBs and AGNs In Fig. 4, we compare the shock luminosity (equivalently loss of kinetic power) obtained due to the energy dissipation at MNRAS 000, 1\u201315 (0000) \fCore radio emissions from black hole sources 9 Figure 4. Plot of kinetic power Lshock (in erg s\u22121) released through the upper and lower surface of the disc due to the energy dissipation at the accretion shock as function of the central black hole mass (MBH). The same is compared with the observed core radio emission (LR) of BH-XRBs and AGNs source samples. Shaded region (light-green) represents the model estimate of Lshock obtained for accretion rates 10\u22125 \u2272\u02d9 m \u22721 and 0 \u2264ak < 1. 
Open circles denote BH-XRBs, whereas open diamonds, red dots and blue dots represent the AGN samples taken from G\u00a8 ultekin et al. (2019), Liu et al. (2019) and Rakshit, Stalin, & Kotilainen (2020), respectively. Open squares and open triangles illustrate LR for IMBH sources. Solid, dotted, dot-dashed and dashed lines indicate the results obtained from liner regression for AGNs (L19), AGNs (R20), AGNs (G19), and BH-XRBs, respectively. See text for details. the shock with the observed core radio luminosities of central black hole sources of masses in the range \u223c3\u22121010M\u2299. The chosen source samples contain several BH-XRBs and a large number of AGNs. In the \ufb01gure, the black hole mass (in units of M\u2299) is varied along the x-axis, observed core radio luminosity (LR) is varied along y-axis (left side) and shock luminosity (Lshock) is varied along the y-axis (right side), respectively. We use \u2206E max calculated for black holes having spin range 0 \u2264ak \u22640.99 (see Fig. 3), to compute the shock luminosity Lshock which is analogous to the core radio luminosity (LR) of the central black hole sources. Here, the radio core is assumed to remain con\ufb01ned around the disk equatorial plane (\u03b8 \u223c\u03c0/2) in the region r \u2264rs. We vary the accretion rate in the range 10\u22125 \u2264\u02d9 m \u22641 to include both gas-pressure and radiation pressure dominated disc (Kadowaki, de Gouveia Dal Pino, & Singh 2015, and references therein) and obtain the kinetic power Lshock that is depicted using light-green color shade in Fig. 4. The open green circles correspond to the core radio emission from the 16 BH-XRBs while the dots and diamonds represent the same for AGNs. The black diamonds represent 61 AGN source samples adopted from G\u00a8 ultekin et al. (2019). The red dots (908 samples) denote the low-luminosity AGNs (LLAGNs) (Liu et al. 2019) and the blue dots (1207 samples) represent the quasars (Rakshit, Stalin, & Kotilainen 2020). At the inset, these three sets of AGN source samples are marked as AGNs (G19), AGNs (L19) and AGNs (R20), respectively. It is to be noted that we exclude Cyg X-3 from this analysis due to the uncertainty of its mass estimate and in the \ufb01gure, we mark this source using red asterisk inside open circle. We carry out the linear regression analysis for (a) BH-XRBs, (b) AGNs (G19), (c) AGNs (L19), and (d) AGNs (R20) and estimate the correlation between the mass (MBH) and the core radio luminosity (LR) of the black hole sources. We \ufb01nd that LR \u223cM 1.5 BH for BH-XRBs (dashed line), LR \u223cM 0.98 BH for AGNs (G19) (dot-dashed line), LR \u223cM 0.38 BH for AGNs (L19) (solid line), and LR \u223cM 0.54 BH for AGNs (R20) (dotted line), respectively. Fig. 4 clearly indicates that the kinetic power released because of the energy dissipation at the shock seems to be capable of explaining the core radio emission from the MNRAS 000, 1\u201315 (0000) \f10 Das et al. central black holes. In particular, the results obtained from the present formalism suggest that for \u02d9 m \u22721, only a fraction of the released kinetic power at the shock perhaps viable to cater the energy budget required to account the core radio emission for supermassive black holes although LR for stellar mass black holes coarsely follows shock luminosity (Lshock). It is noteworthy to mention that the radio luminosity of AGNs from G19 are in general lower compared to the same for sources from R20 and L19 catalogs. 
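The LR-MBH scalings quoted above come from ordinary least-squares fits in log-log space; a minimal sketch of that regression, assuming the masses (in solar units) and 5 GHz luminosities (in erg s^-1) of a given sample are already collected in arrays:

```python
import numpy as np

def mass_luminosity_fit(m_bh, l_radio):
    """Return (p, c) from log10 LR = p * log10(M_BH) + c, i.e. LR ~ M_BH^p."""
    p, c = np.polyfit(np.log10(m_bh), np.log10(l_radio), 1)
    return p, c
```

Applied separately to the BH-XRB, G19, L19 and R20 samples, the slope p is what is reported above as 1.5, 0.98, 0.38 and 0.54, respectively.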
In reality, AGNs from L19 and R20 are mostly distant unresolved sources where it remains challenging to separate the core radio \ufb02ux from the lobe regions. Hence, a fraction of the lobe contribution is likely to be present in the estimation of their LR values even for radio quiet AGNs. Nonetheless, we infer that the inclusion of the L19 and R20 sources will not alter the present \ufb01ndings of our analysis at least qualitatively. 5.4 LR for Intermediate Mass Black Holes The recent discovery by the LIGO collaboration resolves the long pending uncertainty of the possible existence of the intermediate mass black holes (IMBHs) (Abbott et al. 2020). They reported the detection of IMBH of mass 142 M\u2299which is formed through the merger of two smaller mass black holes. This remarkable discovery establishes the missing link between the stellar mass black holes (MBH \u227220M\u2299) and the Supermassive black holes (MBH \u2273106M\u2299). Due to limited radio observations of the IMBH sources, model comparison with observation becomes unfeasible. Knowing this constrain, however, there remains a scope to predict the radio \ufb02ux for these sources by knowing the disc X-ray luminosity (LX), source distance (D), and possible range of the source mass (MBH). Following Merloni, Heinz, & di Matteo (2003), we obtain the radio \ufb02ux (F5) at 5 GHz using the relation given by, F5 =10 \u00d7 \u0012 LX 3 \u00d7 1031 erg s\u22121 \u00130.6 \u00d7 \u0012 MBH 100M\u2299 \u00130.78 \u00d7 \u0012 D 10 kpc \u0013\u22122 \u00b5Jy. (13) Thereafter, using equation (13), we calculate LR = 4\u03c0\u03bdF5D2 (see Table 3). As a case study, we choose two IMBH sources whose LX and D are known from the literature and examine the variation of LR in terms of the source mass (MBH). Since the mass of IC 342 X-1 source possibly lie in the range of 50 \u2272MBH/M\u2299\u2272103 (Cseh et al. 2012; Agrawal & Nandi 2015), we obtain the corresponding LR values which is depicted by the open squares joined with straight line in Fig. 4. Similarly, we estimate LR for M82 X-1 source by varying MBH in the range \u223c250 \u2212500 M\u2299 (Pasham, Strohmayer, & Mushotzky 2014) and the results are presented by open triangles joined with straight line in Fig. 4. Needless to mention that the predicted LR for these sources reside below the model estimates. With this, we argue that the present model formalism is perhaps adequate to explain the energetics of the core radio emissions of IMBH sources. 6 DISCUSSION AND" + }, + { + "url": "http://arxiv.org/abs/2108.02973v1", + "title": "Relativistic viscous accretion flow model for ULX sources: A case study for IC 342 X-1", + "abstract": "In this letter, we develop a model formalism to study the structure of a\nrelativistic, viscous, optically thin, advective accretion flow around a\nrotating black hole in presence of radiative coolings. We use this model to\nexamine the physical parameters of the Ultra-luminous X-ray sources (ULXs),\nnamely mass ($M_{\\rm BH}$), spin ($a_{\\rm k}$) and accretion rate (${\\dot m}$),\nrespectively. While doing this, we adopt a recently developed effective\npotential to mimic the spacetime geometry around the rotating black holes. We\nsolve the governing equations to obtain the shock induced global accretion\nsolutions in terms of ${\\dot m}$ and viscosity parameter ($\\alpha$). 
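As a check of the IMBH estimates discussed above, Eq. (13) and the conversion LR = 4 pi nu F5 D^2 can be scripted as follows. This is a sketch only: LX is the disc X-ray luminosity in erg s^-1, D is in kpc, and the mass grid shown for an IC 342 X-1 style scan is purely illustrative (the adopted LX and D would come from the literature values cited in the text).

```python
import numpy as np

KPC_TO_CM = 3.0857e21
MUJY_TO_CGS = 1.0e-29     # erg s^-1 cm^-2 Hz^-1 per micro-Jy

def f5_microjy(l_x, m_bh, d_kpc):
    """Predicted 5 GHz flux density in micro-Jy, Eq. (13) (Merloni, Heinz & di Matteo 2003)."""
    return 10.0 * (l_x / 3.0e31)**0.6 * (m_bh / 100.0)**0.78 * (d_kpc / 10.0)**(-2.0)

def l_r_5ghz(l_x, m_bh, d_kpc, nu=5.0e9):
    """LR = 4 pi nu F5 D^2 in erg/s for an assumed black hole mass."""
    d_cm = d_kpc * KPC_TO_CM
    return 4.0 * np.pi * nu * f5_microjy(l_x, m_bh, d_kpc) * MUJY_TO_CGS * d_cm**2

# illustrative mass grid for an IC 342 X-1 style scan (50-1000 M_sun)
masses = np.logspace(np.log10(50.0), 3.0, 25)
```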
Using\nshock properties, we compute the Quasi-periodic Oscillation (QPO) frequency\n($\\nu_{\\rm QPO}$) of the post-shock matter (equivalently post-shock corona,\nhereafter PSC) pragmatically, when the shock front exhibits Quasi-periodic\nvariations. We also calculate the luminosity of the entire disc for these shock\nsolutions. Employing our results, we find that the present formalism is\npotentially promising to account the observed $\\nu_{\\rm QPO}$ and bolometric\nluminosity ($L_{\\rm bol}$) of a well studied ULX source IC 342 X-1. Our\nfindings further imply that the central source of IC 342 X-1 seems to be\nrapidly rotating and accretes matter at super-Eddington accretion rate provided\nIC 342 X-1 harbors a massive stellar mass black hole ($M_{\\rm BH} < 100\nM_\\odot$) as indicated by the previous studies.", + "authors": "Santabrata Das, Anuj Nandi, Vivek K. Agrawal, Indu Kalpa Dihingia, Seshadri Majumder", + "published": "2021-08-06", + "updated": "2021-08-06", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Since discovery, ULXs draw signi\ufb01cant attention among the researchers due to its exceedingly high luminosity \u223c1039\u221240 erg s\u22121 (Fabbiano 1989). The true nature of the central accretor of ULXs and the exact physical mechanism responsible for such a high luminosity still remain elusive. Meanwhile, di\ufb00erent competing ideas gain popularity to elucidate this. First possibility assumes the ULXs to harbor stellar mass black holes that accrete at super-Eddington rate (Fabrika & Mescheryakov 2001; Poutanen et al. 2007). Second possibility considers stellar mass black hole X-ray binaries (XRBs) accreting at sub-Eddington rate with beamed emission (Reynolds et al. 1997; King 2002), although the observational evidence of beaming e\ufb00ect is not well understood (Feng & Soria 2011). The third alternative scenario presumes the central source to be the intermediate mass black \u22c6E-mail: sbdas@iitg.ac.in (SD) \u2020 E-mail: anuj@ursc.gov.in (AN) holes (IMBHs) of mass 103 \u2212105M\u2299(Colbert & Mushotzky 1999; Makishima et al. 2000) that accrete at sub-Eddington accretion rate while emitting high luminosity. Needless to mention that all these models are in contrast and therefore, remain inconclusive. So far, numerous e\ufb00orts were made to constrain the mass of the ULX sources through the spectral and timing studies (Watarai et al. 2001; Dewangan et al. 2006; Pasham et al. 2015; Agrawal & Nandi 2015; Kaaret et al. 2017; Mondal et al. 2020; Ghosh & Rana 2021). Furthermore, the presence of super-Eddington accretion rate for several ULXs is also reported (Gladstone et al. 2009) which implies a new accretion state named the ultraluminous state. In parallel, e\ufb00orts were also given in the theoretical front, where models are developed to examine the observational signature of ULXs (Middleton et al. 2015; Mondal & Mukhopadhyay 2019; Middleton et al. 2019, and references therein). Indeed, the investigation of the physical parameters (mass and spin) of the central sources remain unexplored in these works. Motivating with this, in this letter, we investigate Lbol \u00a9 0000 The Authors \f2 Das et al. and \u03bdQPO of ULXs adopting a relativistic, viscous, advection dominated accretion \ufb02ow model around the rotating black holes in presence of cooling. 
To validate our model formalism, we consider a ULX source IC 342 X-1 for the purpose of representation, and compute the possible ranges of MBH, ak and \u02d9 m that yields the observed \u03bdQPO and Lbol, simultaneously. The letter is organized as follows. In \u00a72, we present the underlying assumptions and model equations that describe the \ufb02ow motion. In \u00a73, we discuss the accretion solutions and compute the observables. In \u00a74, we present the observational features of IC 342 X-1 source and constrain the physical parameters of the source using our model formalism. Finally, we conclude with discussion in \u00a75. 2 ASSUMPTIONS AND MODEL EQUATIONS We consider a relativistic, steady, viscous, optically thin, advective accretion disc around a ULX source. To describe the spacetime geometry around the central object, we adopt a newly formulated e\ufb00ective potential (Dihingia et al. 2018). We express the \ufb02ow variables in dimensionless unit by considering an unit system G = MBH = c = 1, where G, MBH, and c are the gravitational constant, black hole mass, and speed of light, respectively. In this unit system, radial distance is expressed in unit of rg = GMBH/c2. We use cylindrical coordinate system keeping the central source at the origin. We develop a model of accretion \ufb02ow where the governing equations that describe the \ufb02ow structure are given by (Chakrabarti 1996), udu dr + 1 h\u03c1 dP dr + \u2202\u03a6e\ufb00 \u2202r = 0, (1) ud\u03bb dr + 1 \u03a3x d dr \u0000r2Wr\u03c6 \u0001 = 0, (2) \u02d9 M = 2\u03c0u\u03a3 \u221a \u2206, (3) \u03a3uT ds dr = Hu \u0393 \u22121 \u0012dP dr \u2212\u0393P \u03c1 d\u03c1 dr \u0013 = Q+ \u2212Q\u2212, (4) where r, u, h, P, and \u03c1 are the radial coordinate, radial velocity, speci\ufb01c enthalpy, isotropic pressure, and density, respectively. The e\ufb00ective potential is given by, \u03a6e\ufb00= 1 2 ln \u0014 r\u2206 a2 k(r+2)\u22124ak\u03bb+r3\u2212\u03bb2(r\u22122) \u0015 (Dihingia et al. 2018), where \u03bb is the speci\ufb01c angular momentum of the \ufb02ow, ak is the Kerr parameter, and \u2206= r2 \u22122r + a2 k. In Eq. (2), the viscous stress Wr\u03c6 = \u03b1(W + \u03a3u2) (Chakrabarti & Das 2004), where \u03b1 refers viscosity parameter, W is the vertically integrated pressure, \u03a3 (= \u03c1H) is the surface mass density, and the vertical disc height H = p PF/\u03c1, F = (1 \u2212\u2126\u03bb) r3 \u0002 (r2 + a2 k)2 \u22122\u2206a2 k \u0003 \u0002 (r2 + a2 k)2 + 2\u2206a2 k \u0003\u22121, with \u2126 being the angular velocity of the \ufb02ow (Ri\ufb00ert & Herold 1995; Peitz & Appl 1997). In Eq. (3), \u02d9 M denotes accretion rate which is expressed in dimensionless form as \u02d9 m = \u02d9 M/ \u02d9 MEdd, where \u02d9 MEdd = 1.44 \u00d7 1017 \u0010 MBH M\u2299 \u0011 g s\u22121. In Eq. (4), s is the speci\ufb01c entropy, T is the temperature, Q+ \u0002 = \u2212\u03b1r \u0000W + \u03a3u2\u0001 d\u2126 dr \u0003 (Chakrabarti & Das 2004) is the heating due to viscous dissipation, and Q\u2212 [= Qb + Qcs + Qmc] (Mandal & Chakrabarti 2005) is the energy loss through radiative coolings, where Qb, Qcs, and Qmc are for bremsstrahlung, cyclosynchrotron, and Comptonization processes. Following Chattopadhyay & Chakrabarti (2000), we compute electron temperature as Te = p me/mpT , where me and mp are the masses of electron and ion. 
In this work, we employ equipartition to calculate magnetic \ufb01elds (B) for simplicity and obtain as B = \u221a8\u03c0\u03b2P, where \u03b2 = 0.1 is assumed (Mandal & Chakrabarti 2005). Governing equations (1-4) are closed with an equation of state (EoS), which we choose for relativistic \ufb02ow as e = nemef = \u03c1f/\u03c4 (Chattopadhyay & Ryu 2009), where f = (2 \u2212\u03be) h 1 + \u0398 \u0010 9\u0398+3 3\u0398+2 \u0011i + \u03be h 1 \u03c7 + \u0398 \u0010 9\u0398+3/\u03c7 3\u0398+2/\u03c7 \u0011i , \u03c4 = [2\u2212\u03be(1\u22121/\u03c7)], \u03c7 = me/mp, \u03be = np/ne, \u0398 = kBT/mec2, kB is the Boltzmann constant, and ne (np) denotes the number density of the electron (ion). With this, we express the polytropic index N = 1 2 d f d\u0398, the ratio of speci\ufb01c heats \u0393 = 1 + 1 N and the sound speed a2 s = \u0393p/ (e + p) = 2\u0393\u0398/ (f + 2\u0398). Here, we assume \u03be = 1 unless stated otherwise. We study the global accretion solutions around a ULX source following the methodology described in Chakrabarti & Das (2004), where the basic equations (1-4) are simultaneously integrated for a speci\ufb01ed set of \ufb02ow parameters. To do this, we treat \u03b1, \u02d9 m, and MBH as global parameters. Because of the transonic nature of the equations, at the inner critical point rin, we choose the boundary values of angular momentum (\u03bbin) and energy (equivalently Bernoulli parameter Ein = \u0002 u2/2 + log h + \u03a6e\ufb00 \u0003 rin) of the \ufb02ow as local parameters (Dihingia et al. 2020). Using these parameters, equations (1-4) are integrated starting from rin once inward up to just outside the horizon and then outward up to a large distance (equivalently \u2018outer edge of the disc\u2019) to get the complete accretion solution. Depending on the input parameters, accretion \ufb02ow may possess multiple critical points (Das et al. 2001) and also experience shock transitions provided the shock conditions are satis\ufb01ed (Landau & Lifshitz 1959; Fukue 1987, 2019a,b). We compute the shock induced global accretion solutions as these solutions are potentially viable to explain the observational \ufb01ndings of black hole X-ray sources (Chakrabarti & Titarchuk 1995; Iyer et al. 2015; Sreehari et al. 2019, and references therein). 3 ACCRETION SOLUTIONS AND OBSERVABLES In Fig. 1, we present a typical accretion solution containing shock around a rapidly rotating black hole (ak = 0.99). Here, we \ufb01x the global \ufb02ow parameters, namely viscosity parameter \u03b1 = 0.01 and accretion rate \u02d9 m = 0.5, respectively, and choose the local \ufb02ow parameters at the inner critical point (rin = 1.43356) as Ein = 1.00741, angular momentum \u03bbin = 1.99. In the \ufb01gure, we show the variation of (a) Mach number (M = u/as), (b) velocity (u), (c) density (log \u03c1), and (d) temperature (log T , in Kelvin) of the \ufb02ow as function of radial coordinate (r), where outer critical point (rout = 144.66855) and shock location (rs = 30.80356) are marked. Because of the shock, in\ufb02owing matter undergoes discontinuous transition from supersonic to subsonic branch that yields the jump of density (\u03c1) and temperature (T ) in the post-shock region (i.e., PSC). Note that after crossing rout, \ufb02ow may eventually enter into the black hole following MNRAS 000, 1\u20136 (0000) \fModel for IC 342 X-1 source 3 Figure 1. 
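The EoS bookkeeping introduced above, f(Theta), the polytropic index N, the adiabatic index Gamma and the sound speed a_s, can be collected in a short helper. This is a sketch assuming xi = n_p/n_e = 1 by default and a numerical derivative for df/dTheta; it is not tied to any particular solver.

```python
import numpy as np

CHI = 9.10938e-28 / 1.67262e-24   # m_e / m_p

def f_of_theta(theta, xi=1.0, chi=CHI):
    """EoS function f(Theta) of Chattopadhyay & Ryu (2009) as quoted in the text;
    Theta = k_B T / (m_e c^2), xi = n_p / n_e."""
    f_e = 1.0 + theta * (9.0 * theta + 3.0) / (3.0 * theta + 2.0)
    f_p = 1.0 / chi + theta * (9.0 * theta + 3.0 / chi) / (3.0 * theta + 2.0 / chi)
    return (2.0 - xi) * f_e + xi * f_p

def eos_quantities(theta, xi=1.0, chi=CHI, h=1.0e-6):
    """Polytropic index N = (1/2) df/dTheta, Gamma = 1 + 1/N and
    a_s^2 = 2 Gamma Theta / (f + 2 Theta); df/dTheta is taken by central difference."""
    f = f_of_theta(theta, xi, chi)
    dfdth = (f_of_theta(theta + h, xi, chi) - f_of_theta(theta - h, xi, chi)) / (2.0 * h)
    N = 0.5 * dfdth
    gamma = 1.0 + 1.0 / N
    a_s = np.sqrt(2.0 * gamma * theta / (f + 2.0 * theta))
    return N, gamma, a_s
```

In the non-relativistic limit (Theta -> 0) this returns Gamma -> 5/3 and in the ultra-relativistic limit Gamma -> 4/3, as expected.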
Typical accretion solution where the variation of (a) Mach number (M = u/as), (b) velocity (u), (c) density (log \u03c1), and temperature (log T) are plotted as function of radial distance (r). See text for details. the dotted curve provided shock conditions are not favorable. In each panel, we indicate the overall direction of the \ufb02ow motion using arrows. In order to explain the observables of ULX sources, we calculate the disc luminosity for a given accretion solution considering gravitational red-shift (G) as L = 4\u03c0 R rf ri GQ\u2212rHdr, where ri refers to the location just outside the horizon (rh), rf stands for the outer edge of the disc (\u2273rout), and Q\u2212denotes the total cooling rates expressed in units of erg cm\u22123 s\u22121. Here, following Shapiro & Teukolsky (1986), we coarsely approximate G = 1 \u2212 2r (r2+a2 k) for simplicity. Further, we examine the QPO features that may originate due to the modulation of the shock front at infall time scales, where infall time is estimated as the time required to accrete the infalling material on to the gravitating object from the shock front. As Molteni et al. (1996) pointed out that the post-shock \ufb02ow can exhibit non-steady behavior because of the resonance oscillation that happens when the infall timescale is comparable to the cooling time scale of the post-shock \ufb02ow (i.e., PSC). Since the modulation of PSC in general exhibits Quasi-periodic variations, we estimate the frequency of such modulation as \u03bdQPO \u223c1/tinfall, where tinfall = R ri rs u\u22121 + dr, u+ is the postshock velocity (Aktar et al. 2015; Dihingia et al. 2019). Employing the above considerations, we calculate the disk luminosity and oscillation frequency for the accretion solution presented in Fig. 1 and obtain as L = 1.45 \u00d7 1033 \u0010 MBH M\u2299 \u00113 erg s\u22121, and \u03bdQP O = 499.23 \u0010 M\u2299 MBH \u0011 Hz, respectively for \u02d9 m = 0.5 and \u03b1 = 0.01. Next, we examine the role of \u02d9 m in determining the shock location (rs), QPO frequency (\u03bdQPO), and disk luminosity (L). The obtained results are depicted in Fig. 2, where we \ufb01x ak = 0.99, \u03b1 = 0.01, and Ein = 1.00741. In Fig. 2a, we present the variation of rs with \u02d9 m where solid (green), dotted (orange) and dashed (purple) curves are obtained for Figure 2. Variation of (a) shock location (rs), (b) QPO frequency (\u03bdQPO), and (c) disk luminosity (L) as function of \u02d9 m. See text for details. \u03bbin = 1.97, 1.99, and 2.01, respectively. We observe that shocks are formed for a wide range of \u02d9 m and generally they settle down at larger radii for \ufb02ows with higher \u03bbin. In Fig. 2b, we present \u03bdQPO which is computed using the results presented in the upper panel. As shocks are formed further out for higher \u03bbin, the corresponding \u03bdQPO are yielded at lower values. In Fig. 2c, we show the variation of L with \u02d9 m for the same solutions depicted in the upper panel. We \ufb01nd that for a given \u03bbin, L strongly depends on \u02d9 m whereas the response of \u03bbin on L is relatively weak for a \ufb01xed \u02d9 m. With this, we perceive that the present formalism is capable to cater \u03bdQPO and L for their wide range of values. Hence, we employ the present model formalism to examine \u03bdQPO and Lbol of a well studied ULX source IC 342 X-1, and attempt to constrain MBH, ak, and \u02d9 m of the source, respectively. 
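Both observables defined in this section reduce to simple quadratures once a global solution is in hand. The sketch below assumes arrays of r, u, Q^- and H sampled along a solved accretion branch (in code units, with Q^- in erg cm^-3 s^-1 if a dimensional luminosity is wanted); it is illustrative rather than the actual integrator used here.

```python
import numpy as np

def qpo_frequency(r, u, r_in, r_shock):
    """nu_QPO ~ 1 / t_infall with t_infall = integral_{r_in}^{r_shock} dr / u,
    taken over the post-shock flow (PSC)."""
    m = (r >= r_in) & (r <= r_shock)
    t_infall = np.trapz(1.0 / u[m], r[m])
    return 1.0 / t_infall

def disc_luminosity(r, q_minus, height, a_k):
    """L = 4 pi * integral of G(r) Q^- r H dr, with the approximate redshift factor
    G = 1 - 2 r / (r^2 + a_k^2) adopted in the text."""
    g_red = 1.0 - 2.0 * r / (r**2 + a_k**2)
    return 4.0 * np.pi * np.trapz(g_red * q_minus * r * height, r)
```

Because time is measured in units of GM_BH/c^3, the infall time scales linearly with the black hole mass, which is why the quoted nu_QPO carries the (M_sun/M_BH) factor.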
4 ASTROPHYSICAL IMPLICATION: IC 342 X-1 4.1 Observational features We analyze the quasi-simultaneous observations of IC 342 X-1 carried out on 11 August 2012 with XMM-Newton and NuSTAR observatories. We follow the procedures described in Agrawal & Nandi (2015) to generate the lightcurve, spectrum and auxiliary (background, response) \ufb01les. We use 0.3 \u221210 keV XMM-Newton/EPIC-pn lightcurve with bin size of 0.22 s to construct the power density spectrum (PDS). We compute the PDS for intervals of 256 bins and average them over a single frame. We rebinned the \ufb01nal PDS by a geometric factor of 1.04 in the frequency space. The PDS exhibits a Lorentzian feature at \u223c645 mHz. We \ufb01t the PDS using a power-law (\u221d\u03bd\u2212\u03b1, \u03b1 being the index), a constant (to represent the Poisson noise) and a Lorentzian (to represent the QPO). Fig. 3 (left) shows the PDS of IC 342 X-1 along with the \ufb01tted model. The centroid frequency of QPO MNRAS 000, 1\u20136 (0000) \f4 Das et al. 0.1 1 2 Leahy Power Frequency (Hz) IC 342 X\u22121 10\u22127 10\u22126 10\u22125 10\u22124 Photons cm\u22122 s\u22121 keV\u22121 IC 342 X\u22121 1 10 \u22122 0 2 \u03c7 Energy (keV) Figure 3. The PDS (left) of EPIC-pn observation taken during 11 August 2012. The PDS is \ufb01tted with a constant, a power-law and a Lorentzian centered at \u223c645 mHz. The unfolded energy spectrum (right) of combined \ufb01t to the quasi-simultaneous data of NuSTARFPMA and XMM-Netwon/EPIC-pn. The combined spectrum is \ufb01tted with TBabs \u00d7 (compTT + diskbb) model. See text for details. Table 1. Model \ufb01tted temporal and spectral parameters for IC 342 X-1. Fbol and Lbol are computed in 0.1 \u2212100 keV energy range. Features Parameters Values Timing \u03bdQPO (mHz) 645 \u00b1 20 Q (\u03bdQPO/FWHM) 11 \u03c3\u2020 3.8 Spectral nH (1022 atoms/cm2) 0.65 \u00b1 0.05 kTe (keV) 3.3 \u00b1 0.18 \u03c4 13.45 \u00b1 0.65 K (\u00d710\u22124) 3.51 \u00b1 0.3 kTin (keV) 0.23 \u00b1 0.02 Ndisk 23+26 \u221211 \u03c72/dof 638/637 Estimated Fbol (\u00d710\u221212 ergs/s/cm2) 5.36 \u00b1 0.32 Lbol (\u00d71039 ergs/s) 7.59 \u00b1 0.57 \u2020 The QPO signi\ufb01cance (\u03c3) is computed as the ratio of Lorentzian normalization to its negative error (see Sreehari et al. 2019, and references therein). (\u03bdQPO) is obtained as 645 \u00b1 20 mHz with Q factor \u223c11 and signi\ufb01cance \u223c3.8\u03c3 (see Table 1). We carry out the spectral analysis using the quasisimultaneous XMM-Newton (0.3 \u221210 keV) and NuSTAR (3 \u221230 keV) data. The combined spectrum is \ufb01tted with various model combinations available in XSPEC. We proceed with physically motivated Comptonized model i.e., TBabs \u00d7 (compTT + diskbb) to extract the spectral parameters. Details of spectral modeling were presented in Agrawal & Nandi (2015). In Fig. 3 (right), the unfolded energy spectrum is shown along with the residuals (bottom panel). Considering the recent measurement of the source distance D \u223c3.45 Mpc (Wu et al. 2014) and \ufb02ux estimation (see Table 1), we calculate the bolometric luminosity (Lbol = 4\u03c0D2Fbol) as (7.59\u00b10.57)\u00d71039 ergs s\u22121, where Fbol being the bolometric \ufb02ux. The model \ufb01tted (both temporal and spectral) and computed parameters are summarized in Table 1. 
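The timing and luminosity numbers in Table 1 follow from two small pieces of arithmetic: the PDS is fitted with a power law plus a constant (Poisson noise) plus a Lorentzian, and the bolometric luminosity is Lbol = 4 pi D^2 Fbol. A hedged sketch of both is given below; the Lorentzian parametrization and parameter names are generic choices, not necessarily the exact fitting conventions used in the analysis.

```python
import numpy as np

def pds_model(nu, pl_norm, pl_index, poisson, lor_norm, nu0, fwhm):
    """Power law + constant + Lorentzian; Q = nu0 / fwhm gives the quoted quality factor (~11)."""
    lorentzian = lor_norm * (fwhm / (2.0 * np.pi)) / ((nu - nu0)**2 + (fwhm / 2.0)**2)
    return pl_norm * nu**(-pl_index) + poisson + lorentzian

def l_bol(f_bol, d_mpc=3.45):
    """Bolometric luminosity in erg/s for flux f_bol (erg/s/cm^2) and source distance in Mpc."""
    d_cm = d_mpc * 3.0857e24
    return 4.0 * np.pi * d_cm**2 * f_bol

print(l_bol(5.36e-12))   # ~7.6e39 erg/s, consistent with Table 1
```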
1.92 1.94 1.96 1.98 2.00 2.02 2.04 \u03bbin 0.99 1.01 1.03 1.05 1.07 \ue231in MBH (M \u2299) (a) ak = 0.99 \u0307 m = 0.2 \u03b1 = 0.06 \u03b1 = 0.04 \u03b1 = 0.02 \u03b1 = 0.0 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 \u0307 m ( \u0307 Medd) 102 103 MBH (M \u2299) (b) --MBH = (315.91 \u221248.83ak \u2212171.33a2 k) \u0307 m\u22123\u03074 Lbol = 7.59\u2299x\u22991039\u2299erg\u2299s\u22121 \u03bdQPO = 645\u2299mHz \u03b1 = 0.01 ak = 0.0 ak = 0.99 318 363 408 453 497 Figure 4. (a) Variation of Ein with \u03bbin that results observed Lbol and \u03bdQPO for IC 342 X-1 source for di\ufb00erent mass range as indicated using colorabar. Here, we choose ak = 0.99 and \u02d9 m = 0.2, and \u03b1 values are marked. (b) Correlation between \u02d9 m and MBH for di\ufb00erent ak. Regions shaded using orange and cyan color are for ak = 0.0, and 0.99, respectively. Dashed curves denote the \ufb01tted function as marked in the \ufb01gure. See text for details. 4.2 Constraining mass, spin and accretion rate To infer Lbol and \u03bdQPO of IC 342 X-1, we employ the present model formalism. While doing this, we freely vary the \ufb02ow parameters, namely \u03bbin and Ein, to compute the shocked accretion solutions for a given set of parameters (ak, \u02d9 m, \u03b1, MBH), and obtain the solution that yields the observed Lbol and \u03bdQP O for IC 342 X-1 source. The obtained MNRAS 000, 1\u20136 (0000) \fModel for IC 342 X-1 source 5 results are depicted in Fig. 4 (a), where for a given viscosity parameter \u03b1, we show the interplay among \u03bbin, Ein and MBH that provides the observed Lbol = 7.59 \u00d7 1039 ergs s\u22121 and \u03bdQPO = 645 mHz of IC 342 X-1 source. Here, we choose ak = 0.99, and \u02d9 m = 0.2. In the \ufb01gure, we mark the \u03b1 values and indicate the mass (MBH) range using the colorbar. It is clear that as \u03b1 is increased, \u03bbin is shifted to the lower values whereas Ein moved to the higher energy domain. In Fig. 4 (b), we show the correlation between the source mass (MBH) and the accretion rate ( \u02d9 m) for \u03b1 = 0.01 that delineate Lbol and \u03bdQPO for IC 342 X-1 source. Here, we compare the results for non-rotating (ak = 0.0) and rapidly rotating (ak = 0.99) black hole. For a given \u02d9 m and ak, we \ufb01nd a range of \u03bbin, Ein, and MBH that provides the observed Lbol and \u03bdQPO for IC 342 X-1 source. Using these results, we empirically obtain a functional form of MBH = (315.91\u221248.83ak\u2212171.33a2 k) \u02d9 m\u22123/4, which is characterized as seemingly exponential with the accretion rate ( \u02d9 m) shown by the dashed curves. We observe that for ak = 0.0, IC 342 X-1 seems to accrete matter both at suband super-Eddington limits depending on its mass (174 \u2272 MBH M\u2299 \u22721783). Similarly, when ak = 0.99, we obtain the corresponding mass range of the source as 55 \u2272MBH M\u2299\u22721198 for \u02d9 m \u22722. 5 DISCUSSION AND" + }, + { + "url": "http://arxiv.org/abs/1807.11417v1", + "title": "Standing shocks in magnetized advection accretion flows onto a rotating black hole", + "abstract": "We present the global structure of magnetized advective accretion flow around\nthe rotating black holes in presence of dissipation. 
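The empirical fit quoted above, M_BH = (315.91 - 48.83 a_k - 171.33 a_k^2) mdot^(-3/4), is easy to tabulate; a minimal sketch that evaluates it at a few accretion rates (the chosen mdot values are illustrative):

```python
def mbh_from_mdot(mdot, a_k):
    """Black hole mass (in M_sun) reproducing the observed L_bol and nu_QPO of IC 342 X-1
    for accretion rate mdot (Eddington units) and spin a_k, using the empirical fit of Fig. 4b."""
    return (315.91 - 48.83 * a_k - 171.33 * a_k**2) * mdot**(-0.75)

for a_k in (0.0, 0.99):
    print(a_k, [round(mbh_from_mdot(m, a_k)) for m in (0.1, 0.5, 2.0)])
```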
By considering accretion\nflow to be threaded by toroidal magnetic fields and by assuming synchrotron\nradiative mechanism to be the dominant cooling process, we obtain global\ntransonic accretion solutions in terms of dissipation parameters, such as\nviscosity ($\\alpha_B$), accretion rate (${\\dot m}$) and plasma-$\\beta$,\nrespectively. In the rotating magnetized accretion flow, centrifugal barrier is\ndeveloped in the nearby region of the black hole that triggers the\ndiscontinuous shock transition in the flow variables. Evidently, the shock\nproperties and the dynamics of the post-shock flow (hereafter post-shock corona\n(PSC)) are being governed by the flow parameters. We study the role of\ndissipation parameters in the formation of standing shock wave and find that\nglobal shocked accretion solutions exist both in gas pressure dominated flows\nand in magnetic pressure dominated flows. In addition, we observe that standing\nshock continues to form around the rapidly rotating black holes as well. We\nidentify the range of dissipation parameters that permits shocked accretion\nsolutions and find that standing shocks continue to form even in presence of\nhigh dissipation limit, although the likelihood of shock formation diminishes\nwith the increase of dissipation. Further, we compute the critical accretion\nrate (${\\dot m}^{\\rm cri}$) that admits shock and observe that standing shock\nexists in a magnetically dominated accretion flow when the accretion rate lies\nin general in the sub-Eddington domain. At the end, we calculate the maximum\ndissipated energy that may be escaped from the PSC and indicate its possible\nimplication in the astrophysical context.", + "authors": "Santabrata Das, Biplob Sarkar", + "published": "2018-07-30", + "updated": "2018-07-30", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Magnetic \ufb01elds are in general considered to be indispensable in the astrophysical environment and therefore, their presence in the accretion disc is by all means inevitable (Balbus & Hawley 1998). In a magnetized accretion disc, magnetic \ufb01elds play an important role in guiding the infalling matter around black holes. Meanwhile, Blandford & Payne (1982) revealed that when a Keplerian disc is threaded by large scale magnetic \ufb01elds, angular momentum can be removed through the torque exerted by the magnetic \ufb01elds. Similarly, the large scale poloidal magnetic \ufb01elds anchored in the surrounding accretion disc are indeed capable of transferring energy and angular momentum and also instigate the generation of powerful magnetic jets (Blandford & Znajek 1977; Komissarov & McKinney 2007). \u22c6E-mail: sbdas@iitg.ernet.in; biplob@iitg.ernet.in Further, Balbus & Hawley (1991, 1998) showed that the accretion disc becomes unstable in presence of di\ufb00erential rotation when the accreting plasma is threaded by weak vertical magnetic \ufb01elds. This instability causes the turbulence leading to the e\ufb03cient angular momentum transport as well as energy dissipation that enables the accretion possible. In the modeling of the standard advection-dominated accretion \ufb02ows around black holes, Narayan & Yi (1995) considered the magnetic \ufb01elds which are stochastic in nature. 
However, since the \ufb02ow experiences di\ufb00erential rotation while accreting onto a black hole, the magnetic \ufb01elds present in the disc are expected to be structured in reality and the large scale \ufb01elds seem to be dominated by its toroidal component. This consideration in general holds irrespective to the initial con\ufb01guration of the \ufb01elds (i.e., toroidal or poloidal). Furthermore, the existence of toroidal magnetic \ufb01eld has been observationally con\ufb01rmed in the exterior regions of the discs of young stellar objects (Aitken et al. c \u20ddRAS \f2 Santabrata Das, Biplob Sarkar 1993; Wright, Aitken, & Smith 1993) as well as in the Galactic center (Chuss et al. 2003; Novak et al. 2003). Meanwhile, signi\ufb01cant e\ufb00orts were given to examine the accretion disc properties around black holes including toroidal magnetic \ufb01elds (Akizuki & Fukue 2006; Khesali & Faghei 2008, 2009; Mosallanezhad, Abbassi, & Beiranvand 2014; Mosallanezhad, Bu, & Yuan 2016; Oda et al. 2007, 2010, 2012; Samadi, Abbassi, & Khajavi 2014; Sarkar & Das 2015, 2016; Sarkar, Das, & Mandal 2018; Sarkar & Das 2018). Following the above cognizance, in the present work, we consider the accretion \ufb02ow to be threaded by toroidal magnetic \ufb01eld lines as well. Further, while developing the present formalism, we consider rotating matter that experiences centrifugal repulsion as it accretes towards the black hole and due to this, infalling matter is being piled up in the vicinity of the black hole. In reality, such accumulation of matter can not be continued inde\ufb01nitely and ultimately, at its limit, the centrifugal barrier triggers the discontinuous transition of the \ufb02ow variables which is commonly called as shock transition. It may be noted that the global accretion solutions including shock waves are potentially favored as it owns large amount of entropy (Becker & Kazanas 2001). In the theoretical front, the shock induced global accretion solution around black hole and its implications are extensively studied by the numerous groups of workers (Fukue 1987; Chakrabarti 1989, 1996b; Lu, Gu, & Yuan 1999; Gu & Lu 2001; Das et al. 2001b; Gu & Lu 2004; Fukumura & Tsuruta 2004; Chakrabarti & Das 2004; Mondal & Chakrabarti 2006; Das 2007; Becker, Das, & Le 2008; Das, Becker, & Le 2009; Das, Chakrabarti, & Mondal 2010; Sarkar & Das 2015; Aktar, Das, & Nandi 2015; Sarkar & Das 2016; Aktar et al. 2017; Sarkar & Das 2018; Sarkar, Das, & Mandal 2018). In addition, the existence of shock in accretion \ufb02ow is also examined numerically considering hydrodynamics (Chakrabarti & Molteni 1993; Molteni, Lanzafame, & Chakrabarti 1994; Ryu, Chakrabarti, & Molteni 1997; Okuda 2014; Okuda & Das 2015; Sukov\u00b4 a & Janiuk 2015; Sukov\u00b4 a, Charzy\u00b4 nski, & Janiuk 2017) as well as magnetohydrodynamic (MHD) environment (Nishikawa et al. 2005; Takahashi et al. 2006; Fukumura, Takahashi, & Tsuruta 2007; Fukumura et al. 2016). Motivated with the above studies, in this work, we examine the magnetically supported accretion \ufb02ow around rotating black hole that possesses standing shock. While doing this, we assume that the characteristics of the magnetic pressure is synoptic to the gas pressure and their combined e\ufb00ects therefore supports the vertical structure of the infalling matter against the gravitational pull. 
Moreover, recalling the success of the seminal \u03b1-viscosity prescription (Shakura & Sunyaev 1973), we consider the Maxwell stress to be proportional to the total pressure (Machida, Nakamura, & Matsumoto 2006) that evidently demonstrates that the outward transport of angular momentum would certainly be enhanced as the magnetic activity inside the disc is increased. Furthermore, we consider the heating of the \ufb02ow to be regulated by the magnetic energy dissipation mechanism while the in\ufb02owing matter is being cooled via synchrotron emission process (Chattopadhyay & Chakrabarti 2000; Das 2007; Sarkar, Das, & Mandal 2018). In addition, for simplicity, we adopt a pseudo potential introduced by Chakrabarti & Mondal (2006) that successfully mimics the space-time geometry around the rotating black hole having spin ak \u22720.8. Considering all these, we self-consistently solve all the governing equations that describe the magnetized accretion \ufb02ow around rotating black hole and obtain the global accretion solutions including shock waves. We study the properties of standing shock waves in terms of \ufb02ow parameters and observe that shock formation takes place for an ample range of parameters both around weakly rotating (ak \u21920) as well as rapidly rotating black holes (ak \u223c0.8). We also calculate the critical accretion rate ( \u02d9 mcri) for standing shocks in magnetized accretion \ufb02ow. It may be noted that \u02d9 mcri does not bear any universal value, rather it is largely dependent on the in\ufb02ow parameters. We continue our study considering the fact that standing accretion shocks are dissipative by nature and calculate the maximum energy that can be extracted from the PSC. In reality, this available energy could be utilized in powering the jets (Sarkar & Das 2016, reference therein) as they seem to originate from PSC regions (Aktar et al. 2017, reference therein). We organize the paper as follows. In \u00a72, we write the model equations and carry out the analysis of transonic conditions. In \u00a73, we display our results where shocked accretion solutions for magnetized \ufb02ow and its properties are discussed. Moreover, we determine the critical in\ufb02ow parameters for standing shock as well. We further study the characteristics of dissipative standing shock. Finally, in \u00a74, concluding remarks are presented. 2 ACCRETION FLOW MODEL To take into consideration of the magnetic \ufb01elds structure in an accretion disc, we rely on the numerical simulation results of global and local MHD accretion \ufb02ow around black hole. These simulations have revealed that magnetic \ufb01elds inside the accretion disc are turbulent and primarily dominated by the azimuthal component (Hirose, Krolik, & Stone 2006; Machida, Nakamura, & Matsumoto 2006; Johansen & Levin 2008). Following the \ufb01ndings of these simulations, we separate the magnetic \ufb01elds into mean \ufb01elds, denoted by B = (0, < B\u03c6 >, 0), and the \ufb02uctuating \ufb01elds, indicated as \u03b4B = (\u03b4Br, \u03b4B\u03c6, \u03b4Bz). Here, we express the azimuthal average by \u2018<>\u2019 and upon azimuthal averaging, the \ufb02uctuating components of the magnetic \ufb01elds eventually disappear (< \u03b4B >= 0). Moreover, the radial and vertical components of the magnetic \ufb01eld are assumed to be negligible when compared with the azimuthal component, |< B\u03c6 > +\u03b4B\u03c6 |\u226b| \u03b4Br | and | \u03b4Bz |. 
This ultimately renders the azimuthally averaged magnetic \ufb01elds which is given by < B >= \u02c6 \u03c6 < B\u03c6 > (Oda et al. 2007). 2.1 Model Equations In this work, a thin, axisymmetric, magnetized accretion \ufb02ow onto a rotating black hole is considered and the accretion disc is assumed to lie on the black hole equatorial plane. Moreover, we employ the cylindrical polar coordinate (x, \u03c6, z) to study the properties of accretion \ufb02ow, where black hole is placed at its origin. In order to express the c \u20ddRAS, MNRAS 000, 1\u2013?? \fShocks in magnetized accretion \ufb02ows 3 \ufb02ow variables, we choose an unit system as MBH = c = G = 1, where MBH is the mass of the black hole, c represents the speed of light and G denotes the gravitational constant, respectively. Accordingly, length, angular momentum and time are measured in units of GMBH/c2, GMBH/c and GMBH/c3, respectively. In the subsequent sections, we choose MBH = 10M\u2299as a reference value. Considering steady state scenario, the governing equations of motion that describe the magnetized accreting matter are obtained as follows: (i) Equation for radial momentum: v dv dx + 1 \u03c1 dP dx + d\u03a8e\ufb00 dx + B2 \u03c6 \u000b 4\u03c0x\u03c1 = 0, (1) where v and \u03c1 stand for the radial velocity and density of the \ufb02ow and P represents total pressure which we take into account as P = pgas + pmag where, pgas and pmag denote the gas pressure and the magnetic pressure of the \ufb02ow. We obtain the gas pressure inside the disc as pgas = R\u03c1T/\u00b5, where R, T and \u00b5, respectively, represent the gas constant, the temperature and the mean molecular weight. Here, we use \u00b5 = 0.5 for fully ionized hydrogen. Further, the magnetic pressure is obtained as pmag =< B2 \u03c6 > /8\u03c0. We de\ufb01ne \u03b2 = pgas/pmag and using this, we attain total pressure as P = pgas(1 + \u03b2)/\u03b2. Moreover, in equation (1), \u03a8e\ufb00denotes the e\ufb00ective pseudo potential around a rotating black hole (Chakrabarti & Mondal 2006) and is given by, \u03a8e\ufb00= \u2212Q + \u221a Q2 \u22124PR 2P , where P = \u01eb2\u03bb2 2x2 , Q = \u22121 + \u01eb2\u03c9\u03bbr2 x2 + 2ak\u03bb r2x , R = 1 \u2212 1 r \u2212x0 + 2ak\u03c9 x + \u01eb2\u03c92r4 2x2 . Here, x represents the cylindrical radial distance and r speci\ufb01es spherical radial distance, respectively. Also, \u03bb stands for the speci\ufb01c angular momentum of the \ufb02ow. In addition, we write x0 = 0.04+0.97ak +0.085a2 k, \u03c9 = 2ak/(x3+a2 kx+2a2 k) and \u01eb2 = (x2 \u22122x+a2 k)/(x2 +a2 k +2a2 k/x), where \u01eb refers the redshift factor and ak denotes the spin of the black hole. It is to be noted that the adopted pseudo potential satisfactorily mimics the space-time geometry around rotating black hole for ak \u22720.8 (Chakrabarti & Mondal 2006). (ii) Mass \ufb02ux conservation equation: \u02d9 M = 2\u03c0xv\u03a3, (2) where \u02d9 M speci\ufb01es the accretion rate which we treat as global constant all throught and \u03a3 represents the vertically integrated density (Matsumoto et al. 1984). It may be noted that in this work, the direction of the inward radial velocity is considered as positive always. (iii) Azimuthal momentum conservation equation: v d\u03bb(x) dx + 1 \u03a3x d dx(x2Tx\u03c6) = 0. (3) Here, we assume the vertically integrated total stress to be dominated by the x\u03c6 component of the Maxwell stress Tx\u03c6. 
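The pseudo-Kerr effective potential entering Eq. (1) can be transcribed directly from the P, Q, R expressions above. The sketch below reads the root as -(Q + sqrt(Q^2 - 4PR))/(2P), the branch that remains finite at large radius; since the flattened equation is ambiguous about where the leading minus binds, the branch and overall normalisation should be checked against Chakrabarti & Mondal (2006). Only the gradient of Psi_eff enters the momentum equation, so an additive offset is harmless.

```python
import numpy as np

def psi_eff(x, lam, a_k, z=0.0):
    """Pseudo-Kerr effective potential (Chakrabarti & Mondal 2006), valid for a_k <~ 0.8.
    x: cylindrical radius, r = sqrt(x^2 + z^2): spherical radius (units of G M_BH / c^2),
    lam: specific angular momentum of the flow, a_k: Kerr parameter."""
    r = np.sqrt(x**2 + z**2)
    x0 = 0.04 + 0.97 * a_k + 0.085 * a_k**2
    omega = 2.0 * a_k / (x**3 + a_k**2 * x + 2.0 * a_k**2)
    eps2 = (x**2 - 2.0 * x + a_k**2) / (x**2 + a_k**2 + 2.0 * a_k**2 / x)
    P = eps2 * lam**2 / (2.0 * x**2)
    Q = -1.0 + eps2 * omega * lam * r**2 / x**2 + 2.0 * a_k * lam / (r**2 * x)
    R = 1.0 - 1.0 / (r - x0) + 2.0 * a_k * omega / x + eps2 * omega**2 * r**4 / (2.0 * x**2)
    # finite branch of the quadratic P*psi^2 + Q*psi + R = 0 (see lead-in caveat on the root choice)
    return -(Q + np.sqrt(Q**2 - 4.0 * P * R)) / (2.0 * P)
```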
For the accretion \ufb02ow with large radial velocity, Tx\u03c6 comes out to be (Chakrabarti & Das 2004; Machida, Nakamura, & Matsumoto 2006) Tx\u03c6 = < BxB\u03c6 > 4\u03c0 h = \u2212\u03b1B(W + \u03a3v2), (4) where h, \u03b1B and W , respectively, represent the local disc height, the proportionality constant and the vertically integrated pressure of the \ufb02ow (Matsumoto et al. 1984). Following the work of Shakura & Sunyaev (1973), we regard \u03b1B as a global constant all throughout of the \ufb02ow. Note that when v is signi\ufb01cantly small, as in the case of Keplerian disc, equation (4) reduces to \u2018\u03b1-model\u2019 (Shakura & Sunyaev 1973). We consider thin disc approximation where infalling matter maintains hydrostatic equilibrium in the vertical direction and calculate the disc height (h) as, h = a p x/(\u03b3\u03a8 \u2032 r) where \u03a8 \u2032 r = \u0010 \u2202\u03a8eff \u2202r \u0011 z< 4\u03c0 xhd\u2126 dx = \u2212\u03b1B(W + \u03a3v2)xd\u2126 dx , (6) where \u2126stands for the angular velocity of the \ufb02ow. Usually, the accretion \ufb02ow experiences heat loss as the consequences of the variety of cooling mechanisms, such as bremsstrahlung, synchrotron and Comptonization of bremsstrahlung as well as synchrotron photons. However, in the present study, as the infalling matter is magnetized in nature, we therefore consider only the synchrotron radiative mechanism as dominant cooling process and the corresponding cooling rate is obtained as (Shapiro & Teukolsky 1983), Q\u2212= Sa5\u03c1h v r \u03a8 \u2032 r x3 \u03b22 (1 + \u03b2)3 , (7) with, S = 1.4827 \u00d7 1018 \u02d9 m\u00b52e4 Inm3 e\u03b35/2 1 GM\u2299c3 , where e and me represent the charge and mass of the electron and \u02d9 m denotes the accretion rate expressed in units of Eddington rate ( \u02d9 MEdd = 1.39 \u00d7 1017 \u00d7 MBH/M\u2299 gm s\u22121). Also, In = (2nn!)2/(2n + 1)! and n represents the polytropic index of the \ufb02ow which is related to the adiabatic index as n = 1/(\u03b3 \u22121). We estimate the electron c \u20ddRAS, MNRAS 000, 1\u2013?? \f4 Santabrata Das, Biplob Sarkar temperature employing the relation Te = ( p me/mp)Tp, where the coupling between ion and electron is neglected (Chattopadhyay & Chakrabarti 2002). Here, mp and Tp refer the mass and temperature of the ion. Note that in this work, we ignore the bremsstrahlung emission process as it is an ine\ufb03cient cooling process for stellar mass black hole system (Chattopadhyay & Chakrabarti 2002). Moreover, we also disregard the inverse Comptonization process as well although its contribution may not be negligible especially at the inner part of the disc. Nevertheless, we make this assumption simply because the framework of single temperature accretion \ufb02ow does not allow one to study the Componization process as it requires the consideration of twotemperature \ufb02ow. However, we infer that when both synchrotron and Compton processes are present, the accretion \ufb02ow will experience more dissipation and therefore, the results we present in the subsequent sections are expected to modify quantitatively although the overall conclusions perhaps be remain qualitatively unaltered. 
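Two of the closure relations above are simple enough to write down directly: the hydrostatic half-thickness h = a sqrt(x/(gamma Psi'_r)) and the alpha_B prescription for the Maxwell stress in Eq. (4). In the sketch below, dpsi_dr stands for (d Psi_eff/dr) near the equatorial plane and would in practice be obtained, e.g., by finite-differencing the potential defined earlier; the function names are ours.

```python
import numpy as np

def disc_half_height(x, a_sound, gamma_ad, dpsi_dr):
    """h = a * sqrt(x / (gamma * Psi'_r)) from vertical hydrostatic equilibrium."""
    return a_sound * np.sqrt(x / (gamma_ad * dpsi_dr))

def maxwell_stress(alpha_b, w_int_pressure, sigma, v):
    """Vertically integrated x-phi Maxwell stress, Eq. (4): T_xphi = -alpha_B (W + Sigma v^2);
    reduces to the Shakura-Sunyaev alpha prescription when v is negligible."""
    return -alpha_b * (w_int_pressure + sigma * v**2)
```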
(v) The advection equation of toroidal magnetic \ufb02ux: Following induction equation, the advection rate of toroidal magnetic \ufb02ux is obtained as, \u2202< B\u03c6 > \u02c6 \u03c6 \u2202t = \u2207\u00d7 \u0012 \u20d7 v\u00d7 < B\u03c6 > \u02c6 \u03c6 \u22124\u03c0 c \u03b7\u20d7 j \u0013 , (8) where \u20d7 v, \u20d7 j and \u03b7, respectively, represent the velocity vector, the current density and the resistivity of the \ufb02ow. It may be noted that equation (8) is azimuthally averaged. For an accretion disc, since the Reynold number is generally very large, we ignore the magnetic-di\ufb00usion terms because of large length scale. Furthermore, here we ignore dynamo term as well. Considering steady state, the obtained equation is further vertically integrated employing the assumption that the azimuthally averaged toroidal magnetic \ufb01elds disappear at disc surface. Based on these considerations, the toroidal magnetic \ufb02ux advection rate is calculated as, \u02d9 \u03a6 = \u2212 \u221a 4\u03c0vhB0(x), (9) where B0(x) = \u27e8B\u03c6\u27e9(x; z = 0) = 25/4\u03c01/4(RT/\u00b5)1/2\u03a31/2h\u22121/2\u03b2\u22121/2 denotes azimuthally averaged toroidal magnetic \ufb01eld resided at the equatorial plane of the accretion disc (Oda et al. 2007). Inside the accretion disc, if the magnetic \ufb02ux is dissipated by the magnetic reconnection or escapes from the disc due to buoyancy, \u02d9 \u03a6 will not be conserved. Besides, when MRI driven dynamo augments the toroidal magnetic \ufb02ux, \u02d9 \u03a6 may vary with radial coordinate. Keeping these \ufb01ndings in mind, we thus consider \u02d9 \u03a6 \u221dx\u2212\u03b6 (Oda et al. 2007), where \u03b6 stands for a parameter describing the magnetic \ufb02ux advection rate. Therefore, we have the following parametric relation as \u02d9 \u03a6 \u0010 x; \u03b6, \u02d9 M \u0011 \u2261\u02d9 \u03a6edge \u0012 x xedge \u0013\u2212\u03b6 , (10) where \u02d9 \u03a6edge indicates the advection rate of the toroidal magnetic \ufb01eld at a large distance, usually the disc outer edge (xedge). For \u03b6 = 0, radial magnetic \ufb02ux remains conserved whereas, for \u03b6 > 0, the magnetic \ufb02ux is increased with the decrease of x. However, for representation, in this study, we choose \u03b6 = 1 all throughout unless stated otherwise. 2.2 Analysis of transonic conditions During the course of accretion, matter from the outer edge of the disc (xedge) proceeds towards the black hole under the in\ufb02uence of gravity. In reality, in\ufb02owing matter possesses negligible radial velocity at xedge in contrast with the local sound speed and enters into the black hole with velocity equivalent to c. This \ufb01ndings evidently demand the transonic nature of the accreting matter. The radial coordinate where the accretion \ufb02ow smoothly changes its sonic character from subsonic to supersonic state is commonly called as critical point. 
In order to analyze the transonic conditions, we simultaneously solve equations (1), (2), (3), (5), (9) and (10) and obtain the wind equation (Das 2007, and references therein) which is given by, dv dx = N D , (11) where the numerator (N) is calculated as, N = Sa5 v r \u03a8 \u2032 r x3 \u03b22 (1 + \u03b2)3 + 2\u03b12 BIn(a2g + \u03b3v2)2 \u03b32xv + \u0014[3 + \u03b2(\u03b3 + 1)]v (\u03b3 \u22121)(1 + \u03b2) \u22124\u03b12 BgIn(a2g + \u03b3v2) \u03b3v \u0015 \u0012d\u03a8e\ufb00 dx \u0013 + \u0014 va2(2\u03b2\u03b3 + 4) 2\u03b3(\u03b3 \u22121)(1 + \u03b2) \u22122\u03b12 BIna2g(a2g + \u03b3v2) \u03b32v \u0015 dln\u03a8 \u2032 r dx ! +2{3 + \u03b2(\u03b3 + 1)}a2v \u03b3x(\u03b3 \u22121)(1 + \u03b2)2 \u2212 3a2v(2\u03b3\u03b2 + 3) 2\u03b3x(1 + \u03b2)(\u03b3 \u22121) +6\u03b12 BIna2g(a2g + \u03b3v2) \u03b32vx \u22128\u03b12 BIna2g(a2g + \u03b3v2) \u03b32v(1 + \u03b2)x \u2212 a2v(4\u03b6 \u22121) 2\u03b3(1 + \u03b2)(\u03b3 \u22121)x \u22124\u03bb\u03b1BIn(a2g + \u03b3v2) \u03b3x2 (11a) and the denominator (D) is calculated as, D = 2a2(2 + \u03b3\u03b2) \u03b3(\u03b3 \u22121)(1 + \u03b2) \u2212{3 + \u03b2(\u03b3 + 1)}v2 (1 + \u03b2)(\u03b3 \u22121) +2\u03b12 BIn(a2g + \u03b3v2) \u03b3 \u0014 (2g \u22121) \u2212a2g \u03b3v2 \u0015 . (11b) In the above analysis, we de\ufb01ne g = In+1/In. Next, we calculate the derivative of a, \u03bb and \u03b2 with respect to x as, da dx = \u2212 \u0010\u03b3v a \u2212a v \u0011 dv dx + 3a 2x \u2212a 2 dln\u03a8 \u2032 r dx ! \u2212\u03b3 a \u0012d\u03a8e\ufb00 dx \u0013 \u2212 2a (1 + \u03b2)x (12) d\u03bb dx = \u2212\u03b1Bx(a2g \u2212\u03b3v2) \u03b3v2 dv dx + 2\u03b1Baxg \u03b3v da dx c \u20ddRAS, MNRAS 000, 1\u2013?? \fShocks in magnetized accretion \ufb02ows 5 +\u03b1B(a2g + \u03b3v2) \u03b3v (13) d\u03b2 dx = \u00144(1 + \u03b2) v \u22123\u03b3v(1 + \u03b2) a2 \u0015 dv dx + 9(1 + \u03b2) 2x \u22122(1 + \u03b2) dln\u03a8 \u2032 r dx ! \u22123\u03b3(1 + \u03b2) a2 d\u03a8e\ufb00 dx \u22126 x + (1 + \u03b2)(4\u03b6 \u22121) 2x (14) Since the accretion solutions must be smooth along the streamline, the radial velocity gradient (dv/dx) will be inevitably real and \ufb01nite at every radial coordinate. Nevertheless, equation (11b) is revealed the fact that between xedge and the black hole horizon, there is a possibility where the denominator (D) may vanish at some point. In order for maintaining the \ufb02ow to become smooth always, it is therefore necessary that the location where D goes to zero, N also must vanish there. The location where N and D simultaneously disappears has a special signi\ufb01cance and such location is termed as critical point (xc). It is to be noted that accretion \ufb02ow becomes transonic at xc and accordingly, we have two conditions at xc which are obtained by setting N = 0 and D = 0, respectively. Using D = 0, we calculate the Mach number (de\ufb01ned as the ratio of radial velocity to the sound speed, M = v/a) at xc as, Mc = s \u2212m2 \u2212 p m2 2 \u22124m1m3 2m1 , (15) where m1 = 2\u03b12 BIn\u03b32(\u03b3 \u22121)(2g \u22121)(1 + \u03b2c) \u2212\u03b32{3 + (\u03b3 + 1)\u03b2c}, m2 = 2\u03b3(2 + \u03b3\u03b2c) + 4\u03b12 BIn\u03b3g(g \u22121)(\u03b3 \u22121)(1 + \u03b2c), m3 = \u22122\u03b12 BIng2(\u03b3 \u22121)(1 + \u03b2c). 
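The critical-point condition D = 0 gives Eq. (15) in closed form, so Mc depends only on alpha_B, gamma, beta_c and the polytropic index through In and g = I_{n+1}/I_n. A sketch is given below (written with Gamma functions so that non-integer n is allowed); in the inviscid, gas-pressure-dominated limit it recovers the familiar Mc^2 = 2/(gamma+1) of a vertically averaged disc.

```python
import numpy as np
from math import gamma as gam   # Euler Gamma function, used for n! with non-integer n

def i_n(n):
    """I_n = (2^n n!)^2 / (2n+1)! as defined in the text."""
    return (2.0**n * gam(n + 1.0))**2 / gam(2.0 * n + 2.0)

def critical_mach(alpha_b, gamma_ad, beta_c, n_poly):
    """Mach number at the critical point, Eq. (15), obtained from the D = 0 condition."""
    In = i_n(n_poly)
    g = i_n(n_poly + 1.0) / In
    m1 = (2.0 * alpha_b**2 * In * gamma_ad**2 * (gamma_ad - 1.0) * (2.0 * g - 1.0) * (1.0 + beta_c)
          - gamma_ad**2 * (3.0 + (gamma_ad + 1.0) * beta_c))
    m2 = (2.0 * gamma_ad * (2.0 + gamma_ad * beta_c)
          + 4.0 * alpha_b**2 * In * gamma_ad * g * (g - 1.0) * (gamma_ad - 1.0) * (1.0 + beta_c))
    m3 = -2.0 * alpha_b**2 * In * g**2 * (gamma_ad - 1.0) * (1.0 + beta_c)
    return np.sqrt((-m2 - np.sqrt(m2**2 - 4.0 * m1 * m3)) / (2.0 * m1))
```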
Setting N = 0, we obtain a cubic equation of sound speed (ac) at xc as, Aa3 c + Ba2 c + Cac + D = 0, (16) where A = S s \u03a8 \u2032 r x3 c \u03b22 c (1 + \u03b2c)3 , B = 2\u03b12 BIn(g + \u03b3M 2 c )2 \u03b32xc + M 2 c (2\u03b3\u03b2c + 4) 2\u03b3(\u03b3 \u22121)(1 + \u03b2c) dln\u03a8 \u2032 r dx ! \u22122\u03b12 BIng(g + \u03b3M 2 c ) \u03b32 dln\u03a8 \u2032 r dx ! +2{3 + \u03b2c(\u03b3 + 1)}M 2 c \u03b3xc(\u03b3 \u22121)(1 + \u03b2c)2 \u2212 3M 2 c (2\u03b3\u03b2c + 3) 2\u03b3(\u03b3 \u22121)(1 + \u03b2c)xc + 6\u03b12 BIng(g + \u03b3M 2 c ) \u03b32xc \u22128\u03b12 BIng(g + \u03b3M 2 c ) \u03b32(1 + \u03b2c)xc \u2212 (4\u03b6 \u22121)M 2 c 2\u03b3(\u03b3 \u22121)(1 + \u03b2c)xc , C = \u22124\u03bbc\u03b1BInMc(g + \u03b3M 2 c ) \u03b3x2 c , D = \u0014[3 + \u03b2c(\u03b3 + 1)]M 2 c (1 + \u03b2c)(\u03b3 \u22121) \u22124\u03b12 BgIn(g + \u03b3M 2 c ) \u03b3 \u0015 \u00d7 \u0012d\u03a8e\ufb00 dx \u0013 . Here, the \ufb02ow variables speci\ufb01ed using subscript \u2018c\u2019 denote their values evaluated at xc. Now, using the accretion \ufb02ow parameters, we solve equation (16) to obtain the sound speed (ac) at xc and subsequently calculate vc using equation (15). By employing the values of vc and ac in Eq. (11), we examine the characteristics of the critical points. At the critical point, we get (dv/dx) = 0/0 and thus, we use l \u2032Hospital rule for obtaining the value of (dv/dx) at xc (hereafter, (dv/dx)c). Usually, (dv/dx)c owns two values; one for accretion and the other for wind. When the values of (dv/dx)c are real and of opposite sign, the critical point is known as saddle type (Chakrabarti & Das 2004) and this type of critical point is particularly important due to the fact that transonic solution can cross it smoothly. In the present study, since our motivation is to investigate the structure of the magnetized accretion \ufb02ow, we therefore focus into the accretion solutions only in the subsequent analysis. 3 RESULTS AND DISCUSSIONS 3.1 Transonic Global Solutions In this work, we intend to obtain the global magnetized transonic accretion solution that delineates a smooth connection between horizon and the disc edge. With this aim, we simultaneously solve the equations (11-14) for a speci\ufb01ed set of \ufb02ow parameters. While doing this, we treat \u02d9 m, \u03b1B, and \u03b3 as global parameters of the \ufb02ow. Moreover, one requires ak value and the boundary values of \u03bb and \u03b2 at a given x as local parameters to solve these equations. Note that we express angular momentum (\u03bb) in terms of Keplerian angular momentum \u03bbK (\u2261 p x3/(x \u22122)2) all throughout the paper. Since the black hole accretion solutions are necessarily transonic in nature, \ufb02ow must pass through at least one critical point and therefore, it is reasonable to choose the boundary values of the \ufb02ow at the critical point. With this, we hereby integrate equations (11-14) starting from the critical point once inwards up to just outside the black hole horizon and then outward up to a large distance (equivalently \u2018outer edge of the disc\u2019). Ultimately, these two parts of are joined to obtain a complete global transonic accretion solution. Depending on the input parameters, accretion \ufb02ow may possess single or multiple critical points (Das, Chattopadhyay, & Chakrabarti 2001a; Sarkar & Das 2013). These critical points are classi\ufb01ed as inner (xin) or outer (xout) critical points depending on whether they form close to or far away from the black hole horizon. 
c \u20ddRAS, MNRAS 000, 1\u2013?? \f6 Santabrata Das, Biplob Sarkar 3.2 Global Accretion Solutions with Shock When the accretion \ufb02ow containing multiple critical points accretes on to a black hole, it \ufb01rst passes through the outer critical point (xout) to become supersonic and keeps on accreting further inwards. Meanwhile, \ufb02ow starts experiencing centrifugal repulsion resulting the accumulation of matter in the nearby region of the black hole that ultimately induces the shock transition when the density threshold is reached. With this, an e\ufb00ective virtual barrier is formed around the black hole. At shock, supersonic \ufb02ow jumps in to the subsonic branch that makes the post-shock \ufb02ow hot as the kinetic energy of the \ufb02ow is converted to the thermal energy. Moreover, across the shock, \ufb02ow undergoes shock compression that ultimately causes the post-shock \ufb02ow to become dense. Interestingly, 2nd law of thermodynamics suggests that shocked accretion solutions are favorable as the entropy of the post-shock matter is comparatively higher than the pre-shock matter (Becker & Kazanas 2001). We calculate the entropy of the \ufb02ow which is expressed as (Chakrabarti 1996a), \u02d9 M(x) = vxa2n+1 \u0010 \u03b2 1+\u03b2 \u0011n q x \u03b3\u03a8\u2032 r . In the dissipation free limit, \u02d9 M remains constant all throughout expect at the shock transition. What is more is that at the discontinuous transition, the conservation of mass \ufb02ux, momentum \ufb02ux, energy \ufb02ux and magnetic \ufb02ux are held in order to satisfy the standing shock conditions (Sarkar & Das 2016, and reference therein) and hence, these conservation laws across the shock front can be written as the continuity of (a) mass \ufb02ux ( \u02d9 M\u2212= \u02d9 M+) (b) the momentum \ufb02ux (W\u2212+ \u03a3\u2212\u03c52 \u2212= W+ + \u03a3+\u03c52 +) (c) the energy \ufb02ux (E\u2212= E+) and (d) the magnetic \ufb02ux ( \u02d9 \u03a6\u2212= \u02d9 \u03a6+), respectively. In this work, we consider the shock to be thin and non-dissipative and the \ufb02ow variables with subscripts \u2018\u2212\u2019 and \u2018+\u2019 represent their values just before and after the shock. Following Fukue (1990); Samadi, Abbassi, & Khajavi (2014), we calculate the local energy of the magnetized dissipative accretion \ufb02ow as E(x) = \u03c52/2+a2/(\u03b3\u22121)+\u03a8e\ufb00+ < B2 \u03c6 > /(4\u03c0\u03c1), where all the above quantities bear their usual meaning. In the subsequent analysis, upon employing the above set of shock conditions, we compute the shock position and its diverse properties knowing the input parameters of the accretion \ufb02ow. In Fig. 1, we show the result obtained from one representative case where the variation of Mach number (M = \u03c5/a) with the logarithmic radial distance is depicted. We choose the injection parameters of the \ufb02ow at the outer edge (xedge = 1000) as Eedge = 1.0793 \u00d7 10\u22123, \u03b2edge = 1.6 \u00d7 105, \u03b1B = 0.02 and \u02d9 m = 0.05, respectively. In Fig. 1(a), we consider the black hole to be slowly rotating having ak = 0.32 and the \ufb02ow is injected with angular momentum, \u03bbedge = 0.124\u03bbK. Here, \ufb02ow is subsonic at the outer edge and becomes supersonic after crossing the outer critical point located at xout = 530.90. The supersonic \ufb02ow proceeds further inwards and encounters shock transition at xs = 16.20 while jumping in to the subsonic branch. In the \ufb01gure, shock position is shown using vertical arrow. 
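Numerically, a standing shock is located by marching the supersonic branch inward from xout and testing, at each radius, whether a subsonic post-shock state exists that satisfies the four conditions listed above. A sketch of that test is shown below; the dictionary keys and the helper for E(x) are illustrative, not the paper's actual implementation.

```python
import numpy as np

def specific_energy(v, a_sound, gamma_ad, psi, b_phi, rho):
    """Local energy of the magnetized flow: E = v^2/2 + a^2/(gamma-1) + Psi_eff + <B_phi^2>/(4 pi rho)."""
    return 0.5 * v**2 + a_sound**2 / (gamma_ad - 1.0) + psi + b_phi**2 / (4.0 * np.pi * rho)

def shock_residuals(pre, post):
    """Residuals of the standing-shock conditions: continuity of mass flux, momentum flux
    (W + Sigma v^2), energy flux E and toroidal magnetic flux across the front.
    A shock location is admissible where all four residuals vanish simultaneously."""
    return np.array([
        pre["mdot"] - post["mdot"],
        (pre["W"] + pre["Sigma"] * pre["v"]**2) - (post["W"] + post["Sigma"] * post["v"]**2),
        pre["E"] - post["E"],
        pre["phi_dot"] - post["phi_dot"],
    ])
```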
Gradually \ufb02ow velocity is increased as it moves inward and then it passes xin smoothly at 4.1777 before crossing the horizon. Here, we show the direction of the \ufb02ow motion using arrows and mark the inner and outer critical points with \ufb01lled circles. Next, we intend to examine the role of black hole spin in deciding the shock Figure 1. (a-b) Plot of Mach number with logarithmic radial distance. Flow is injected with xedge = 1000, \u03bbedge = 0.124\u03bbK, Eedge = 1.0793 \u00d7 10\u22123, \u03b2edge = 1.6 \u00d7 105, \u03b1B = 0.02 and \u02d9 m = 0.05, respectively. We choose ak = 0.32 in panel (a) and ak = 0.52 in panel (b). (c-d) Logarithmic variation of plasma-\u03b2 corresponding to solutions (a) and (b). In each panel, xin and xout are indicated using \ufb01lled circles and shock transition is shown by vertical arrow. See text for details. transition and hence we inject matter on to a moderately rotating black hole (ak = 0.52) keeping all the \ufb02ow parameters same as in Fig. 1a. It may be noted that for this chosen set of \ufb02ow parameters, standing shock solution ceases to exist when ak > 0.52. The result is depicted in Fig. 1b, where the outer critical point, shock location and inner critical point are obtained as xout = 531.43, xs = 100.62 and xin = 3.2502, respectively. Since all the \ufb02ow parameters at the outer edge of the disc are kept \ufb01xed including angular momentum, we observe that shock forms at larger radial distance for ak = 0.52. In reality, a spinning black hole distorts the space-time fabric in its vicinity, allowing matter to orbit at a closer distance as compared to a non-rotating one. Due to the e\ufb00ect of frame dragging, the \ufb02uid angular momentum is a\ufb00ected by the rotation of the black hole. It is known that the shock formation in accretion \ufb02ow happens as a result of the competition between the gravitational pull and the centrifugal repulsion. When \ufb02ow is injected from the outer edge of the disc with \ufb01xed boundary conditions, because of the spin-orbit coupling term in the Kerr geometry, the increase of spin parameter (ak) modi\ufb01es the angular momentum pro\ufb01le of the \ufb02ow and the shock front is pushed away from the horizon as is observed in Fig. 1. This \ufb01nding is consistent with the results of Aktar, Das, & Nandi (2015). Overall, we see that the standing shock in magnetized \ufb02ow is continue to exist around the rotating black hole and when ak is increased, shock transition occurs for relatively low angular momentum \ufb02ow and vice versa. Further, in panel (c) and (d), we show the variation of plasma-\u03b2 with log x corresponding to solutions presented in panel (a) and (b), respectively. In both the cases, we \ufb01nd that plasma-\u03b2 steadily c \u20ddRAS, MNRAS 000, 1\u2013?? \fShocks in magnetized accretion \ufb02ows 7 Figure 2. Shock location (xs) variation with \u03b2edge. Here, the in\ufb02ow parameters are chosen as xedge = 1000, Eedge = 1.0793 \u00d7 10\u22123, \u03b1B = 0.02 and \u02d9 m = 0.05, respectively. In every panel, spin of the black hole (ak) is marked. In panel (a), results plotted with solid, dotted and dashed curves are obtained for \u03bbedge = 0.12845\u03bbK, 0.12791\u03bbK and 0.12737\u03bbK. And in panel (b), results depicted with solid, dotted and dashed curves are for \u03bbedge = 0.11443\u03bbK, 0.11390\u03bbK and 0.11336\u03bbK, respectively. See text for details. decreases with the decrease of radial coordinate. 
This clearly indicates that the magnetic activity inside the disc increases as the \ufb02ow accretes towards the horizon. 3.3 Properties of Standing Shocks One of the pertinent aspect in understanding the magnetically supported accreting \ufb02ow around the rotating black holes is to study the dependence of the shock position (xs) on the \u03b2 values. Accordingly, we calculate xs in terms of \u03b2edge for \ufb02ows with \ufb01xed outer boundary values accreting on to a given black hole. For that, we choose the outer boundary parameters as xedge = 1000, Eedge = 1.0793 \u00d7 10\u22123, \u03b1B = 0.02 and \u02d9 m = 0.05. In Fig. 2(a), we display the results obtained for ak = 0.4, where solid, dotted and dashed curves are for \u03bbedge = 0.12845\u03bbK, 0.12791\u03bbK and 0.12737\u03bbK, respectively. We notice that the shock front proceeds towards the horizon with the decrease of \u03b2edge irrespective to the values of \u03bbedge. This happens because when \u03b2edge is decreased, the e\ufb03ciency of synchrotron cooling is enhanced due to the increase of magnetic activity inside the disc. The e\ufb00ect becomes more prominent at the inner part of the disc (i.e., PSC) as, due to shock transition, both density and temperature are relatively higher there compared to the pre-shock \ufb02ow. This renders the thermal pressure to drop down in the PSC region and ultimately shock front moves inward to maintain the pressure balance across it. Incidentally, keeping the all the boundary \ufb02ow parameters \ufb01xed, one can not reduce \u03b2edge inde\ufb01nitely as shock ceases to exist when \u03b2edge < \u03b2cri edge (shock conditions fail to satisfy there). Figure 3. Variation of the shock location (xs) as function of \u03b2out (lower axis) and \u03b2in (upper axis). In each panel, ak is marked. See text for details. It may be noted that \u03b2cri edge does not have a universal value, instead it depends on the \ufb02ow parameters \ufb01xed at the outer edge of the disc. Further, we depict the results for ak = 0.8 in Fig. 2b, where solid, dotted and dashed curves represent results corresponding to \u03bbedge = 0.11443\u03bbK, 0.11390\u03bbK and 0.11336\u03bbK, respectively. Here, we keep all the other \ufb02ow parameters same as in Fig. 2a. We \ufb01nd that the shock location proceeds towards the horizon with the decrease of \u03b2edge in all cases as observed in Fig. 2a. Next, we examine the correlation of \u03b2 values between the inner and outer critical points for shock induced global accretion solutions. While doing this, we choose two cases where in\ufb02owing matters are accreted on to rotating black holes having di\ufb00erent spin parameters as ak = 0.4 and 0.8, respectively. For ak = 0.4, we consider the result depicted in Fig. 3a corresponding to \u03bbedge = 0.12845\u03bbK and show the variation of shock location as function of both \u03b2out (lower horizontal axis) and \u03b2in (upper horizontal axis). The other \ufb02ow parameters are considered same as in Fig. 2. Here, \u03b2in and \u03b2out refer \u03b2 values measured at xin and xout, respectively. We see that xs decreases when the magnetic activity is increased (\u03b2 is decreased) inside the disc. We continue our study choosing the result presented in Fig. 3b for \u03bbedge = 0.11551\u03bbK and show the variation of xs in terms of \u03b2out as well as \u03b2in in Fig. 3b. We observe that in all cases, \u03b2in < \u03b2out all throughout. 
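Since the angular momentum quoted throughout is expressed as a fraction of the Keplerian value λK ≡ √{x³/(x − 2)²} (see Section 3.1), a small helper like the one below (a sketch, not the authors' code) converts the quoted fractions, for example λedge = 0.12845λK at xedge = 1000, into absolute values in the units of the paper.

```python
import math

def lambda_K(x):
    """Keplerian angular momentum used in the text: lambda_K = sqrt(x^3 / (x - 2)^2)."""
    return math.sqrt(x**3 / (x - 2.0)**2)

x_edge = 1000.0
for frac in (0.12845, 0.12791, 0.12737):   # lambda_edge fractions quoted for a_k = 0.4
    print(f"lambda_edge = {frac} * lambda_K({x_edge:g}) = {frac * lambda_K(x_edge):.3f}")
```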
This \ufb01nding is not surprising because, in our model, the advection of magnetic \ufb02ux increases as the in\ufb02owing matter approaches towards the horizon and eventually, \u03b2 is reduced towards the inner part of the disc. Moreover, we \ufb01nd that shock solutions exist even for \u03b2in < 1 irrespective to the choice of ak value. This evidently indicates that global transonic accretion solutions harbor standing shock waves both in gas pressure dominated as well as in magnetic pressure dominated \ufb02ows for a wide range of ak values. It is worthy to explore the e\ufb00ect of cooling on the c \u20ddRAS, MNRAS 000, 1\u2013?? \f8 Santabrata Das, Biplob Sarkar Figure 4. Variation of the shock location (xs) as function of \u02d9 m. Flow parameters at the outer edge of the disc is chosen as xedge = 1000, Eedge = 1.0793 \u00d7 10\u22123, \u03b1B = 0.02 and \u03b2edge = 105, respectively. Results depicted in top and bottom panels are for ak = 0.4 and 0.8. In (a), solid, dotted and dashed curves are obtained for \u03bbedge = 0.12845\u03bbK, 0.12737\u03bbK and 0.12630\u03bbK whereas in (b), solid, dotted and dashed curves represents results for \u03bbedge = 0.11443\u03bbK, 0.11336\u03bbK and 0.11229\u03bbK. See text for details. formation of shock wave in an accretion \ufb02ow and therefore, in Fig. 4, we study the variation of shock location (xs) with accretion rate ( \u02d9 m). Towards this, we consider the \ufb02ow injection parameters as xedge = 1000, \u03b2edge = 105, Eedge = 1.0793 \u00d7 10\u22123 and \u03b1B = 0.02, respectively. As before, in Fig. 4a, we chose ak = 0.4 and the pro\ufb01le of shock location (xs) is presented for various values of \u03bbedge. Here, solid, dotted and dashed curves represent \ufb02ows injected with \u03bbedge = 0.12845\u03bbK, 0.12737\u03bbK and 0.12630\u03bbK, respectively. From the \ufb01gure, it is clear that large range of \u02d9 m admits standing shock in magnetized accretion \ufb02ow. Moreover, we \ufb01nd that for a given \u03bbedge, xs moves inwards as \u02d9 m is increased. In reality, enhanced accretion rate boosts the e\ufb03ciency of the radiative cooling that causes the \ufb02ow to lose energy during accretion. Since PSC is hot and dense, the effect of cooling at PSC becomes profound that evidently decreases the post-shock thermal pressure. Consequently, this compels the shock front to settle down at some smaller distance to ful\ufb01ll the shock conditions. Unfortunately, \u02d9 m can not be increased inde\ufb01nitely due to the fact that when \u02d9 m exceeds its critical value ( \u02d9 mcri), standing shocks are no longer feasible as the shock conditions fail to satisfy there. Clearly, \u02d9 mcri does not retain a global value, rather it depends on the \ufb02ow parameters. It is also apparent that the possibility of standing shock formation reduces with the increase of \u02d9 m. Furthermore, it is intriguing to understand what happens to the \ufb02ow when standing shock conditions fail to satisfy. Interestingly, in that case, inner part of the accretion \ufb02ow may start to modulate exhibiting the feature of oscillatory shock (Das & Aktar 2015, and references therein). UnforFigure 5. Variation of the shock location (xs) as function of \u03b1B. Accreting matter is supplied with in\ufb02ow parameters as xedge = 1000, with Eedge = 1.0793 \u00d7 10\u22123, \u02d9 m = 0.05 and \u03b2edge = 105, respectively. In each panel, ak is marked. 
In top panel (a), the results corresponding to \u03bbedge = 0.12845\u03bbK, 0.12791\u03bbK and 0.12737\u03bbK are represented using solid, dotted and dashed line style. The same line style is used to denote the results for \u03bbedge = 0.11443\u03bbK, 0.11390\u03bbK and 0.11336\u03bbK in lower panel (b). See text for details. tunately, the investigation of non-steady shock properties is beyond the scope of the present paper. In addition, we \ufb01nd that for a given \u02d9 m, shock front recedes away from the black hole when \u03bbedge is increased. In reality, the discontinuous shock transition is essentially the manifestation of the competition between centrifugal repulsion and gravity. When \u03bbedge is higher, accretion \ufb02ow possesses higher angular momentum that causes the enhanced centrifugal repulsion against gravity. Because of this, shock front is pushed further out when \u03bbedge is increased. This \ufb01ndings establishes the fact that shocks are centrifugally driven. In Fig. 4(b), we present the result corresponding to ak = 0.8, where solid, dotted and dashed curves represent results obtained for \u03bbedge = 0.11443\u03bbK, 0.11336\u03bbK and 0.11229\u03bbK, respectively. Here also, we observe that the formation of shock and its dependence on \u02d9 m and \u03bbedge are in general similar to the results shown in Fig. 4(a). For completeness, we investigate the variation of shock location in terms of viscosity (\u03b1B) for \ufb02ows having \ufb01xed outer edge boundary parameters. Here, we choose the \ufb02ow injection parameters as xedge = 1000, Eedge = 1.0793\u00d710\u22123, \u03b2edge = 105 and \u02d9 m = 0.05, respectively. In Fig. 5a, we show the obtained results for ak = 0.4, where solid, dotted and dashed curves are for \u03bbedge = 0.12845\u03bbK, 0.12791\u03bbK and 0.12737\u03bbK, respectively. Notice that shocked accretion solutions exist for a wide range of \u03b1B and shock location shifts towards the horizon with the increase of \u03b1B for all cases having di\ufb00erent \u03bbedge values. In reality, as \u03b1B is increased, angular momentum transport in the outward direction becomes more e\ufb03cient that causes the weakening of centrifugal repulc \u20ddRAS, MNRAS 000, 1\u2013?? \fShocks in magnetized accretion \ufb02ows 9 Figure 6. Variation of critical accretion rate ( \u02d9 mcri) as a function of viscosity parameter (\u03b1B) for various ak. Here, we choose \u03b2in = 10. Long-dashed, dashed and dot-dashed curves are obtained for ak = 0, 0.4 and 0.8 and the region bounded by them in \u03b1B \u2212\u02d9 mcri plane provides closed accretion solutions passing through the inner sonic point. In addition, solid, dotted and shortlong-dashed curves represent the e\ufb00ective region corresponding to ak = 0, 0.4 and 0.8 that admits standing shock solutions. In the inset, examples of closed (marked with C) and shocked solutions (marked with S) are presented. See text for details. sion and hence, shock front is driven inward. When \u03b1B exceeds its critical limit (\u03b1cri B ), shock conditions do not remain favorable and as a result, standing shock disappears. Again, it may be noted that \u03b1cri B largely depends on the accretion \ufb02ow parameters. Further, in Fig. 5b, we display the result for ak = 0.8, where solid, dotted and dashed curves denote results for \u03bbedge = 0.11443\u03bbK, 0.11390\u03bbK and 0.11336\u03bbK, respectively. Here, we \ufb01nd that shock location decreases with the increase of \u03b1B around rotating black holes as well. 
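The trends shown in Figs. 2-5 all come from repeating the same numerical experiment: fix the outer-edge parameters, vary one dissipation parameter, and recompute the radius at which the standing-shock conditions are met. A generic sketch of that outer loop is given below; here `residual` is a placeholder for a routine that solves the full flow equations and returns a signed measure of how far conditions (a)-(d) of Section 3.2 are from being satisfied at radius x, and the toy example fed in is not the flow solution.

```python
import numpy as np

def find_shock_radius(residual, x_min=3.0, x_max=500.0, n_scan=500, n_bisect=60):
    """Locate the radius where residual(x) changes sign (shock conditions met),
    by a coarse scan followed by bisection. Returns None if no sign change is
    found, i.e. no standing shock exists for this parameter set."""
    xs = np.linspace(x_min, x_max, n_scan)
    vals = np.array([residual(x) for x in xs])
    idx = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    if len(idx) == 0:
        return None
    lo, hi = xs[idx[0]], xs[idx[0] + 1]
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy residual with a root near x = 16.2 (illustrative only).
print(find_shock_radius(lambda x: np.tanh((x - 16.2) / 5.0)))
```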
3.4 Parameter Space for Shock We have already mentioned that during the course of accretion, in\ufb02owing matter may contain shock wave provided it possesses multiple critical points. Interestingly, one can obtain standing shock solution, if the standing shock conditions are satis\ufb01ed (see \u00a73.2). But, when shock conditions are not favorable and the entropy content at the inner critical point is higher than the outer critical point, the shock formation never remains steady as the shock location becomes imaginary (Das, Chattopadhyay, & Chakrabarti 2001a) and therefore, shock starts to execute continuous back and forth movements that seems to exhibit the quasi-periodic oscillation phenomenon (Das, Chattopadhyay, & Chakrabarti 2001a). In this case, accretion solution passing through the inner critical point fails to connect the black hole horizon to the outer edge of the disc as it becomes closed in the range xin < x < xout with M(x) = Mc (Chakrabarti & Das 2004). Needless to mention that it is not possible to examine the characteristics of the non-steady shock solution in the framework of the present paper, however, we estimate the critical accretion rate ( \u02d9 mcri) that provides accretion solutions containing standing shocks and/or closed topologies. While doing this, we \ufb01x \u03b2in = 10, and for a given ak, we calculate \u02d9 mcri as function of \u03b1B, where xin and \u03bbin are varied freely. Accordingly, in Fig. 6, we classify the parameter space spanned by \u03b1B and \u02d9 mcri that provides closed topologies and standing shocks, respectively. Examples of closed topology (marked as C) and standing shock solution (marked as S) are displayed in the small boxes, where the variation of Mach number with radial coordinate is plotted. In the \ufb01gure, longdashed, short-dashed and dot-dashed curves are obtained for ak = 0, 0.4 and 0.8 that separate the \u03b1B \u2212\u02d9 mcri plane where left-bottom region allows closed topologies. Similarly, solid, dotted and short-long-dashed curves separate the standing shock parameter space for ak = 0, 0.4 and 0.8, respectively. We observe that the shock parameter space appears to be the subset of parameter space for closed topology all throughout. This is expected as the region of closed topologies includes the region of standing as well as oscillating shocks. Meanwhile, Das & Chakrabarti (2008) showed that for \ufb01xed ak, the e\ufb00ective region of standing shock parameter space shrinks with the increase of accretion rate for an inviscid \ufb02ow. Actually, when the accretion rate is enhanced, cooling becomes more e\ufb00ective and hence, in\ufb02owing matter loses energy during accretion. On the other hand, viscosity enhances the \ufb02ow energy as it accretes due to viscous heating. Interestingly, when both dissipation processes, namely, viscosity and synchrotron cooling, are present in the \ufb02ow, viscous dissipation e\ufb00ectively compensates a part of the energy loss happens due to cooling. Here, in a way, viscosity and cooling act oppositely in deciding the shock parameter space. However, as synchrotron cooling and viscous heating depend di\ufb00erently on the \ufb02ow variables, one does not cancel the other e\ufb00ect completely (Das 2007). Overall, for a given ak, standing shock continues to form until an optimum combination of (\u03b1B, \u02d9 mcri) is reached which evidently exhibits as a peak in the \u03b1B\u2212\u02d9 mcri plane. 
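The bookkeeping that separates the regions of Fig. 6 can be phrased as a small decision rule. The sketch below is a hypothetical illustration of that logic only; the shock-condition test and the entropies at the inner and outer critical points would come from the full flow solution. It follows the criterion stated above: standing shocks require the shock conditions to hold, while closed topologies arise when they fail but the entropy at the inner critical point exceeds that at the outer one.

```python
def classify_solution(shock_conditions_hold, entropy_inner, entropy_outer):
    """Classify a transonic solution in the (alpha_B, mdot) plane, following the
    criteria described in the text (illustrative only)."""
    if shock_conditions_hold:
        return "standing shock"           # steady shock between x_out and x_in
    if entropy_inner > entropy_outer:
        return "closed topology"          # no steady shock; oscillatory shock expected
    return "global solution without shock"

# Hypothetical samples: (conditions hold?, entropy at x_in, entropy at x_out).
for sample in [(True, 1.8, 1.2), (False, 1.8, 1.2), (False, 1.0, 1.2)]:
    print(sample, "->", classify_solution(*sample))
```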
In general, the \ufb02ow is dominated by cooling in the left side of the peak whereas viscous heating dominates on the other side. As expected, shock disappears when viscosity exceeds its critical limit (Chakrabarti & Das 2004). In addition, in case of a rapidly rotating black hole, shock forms in a relatively low angular momentum accretion \ufb02ow (Aktar, Das, & Nandi 2015) that e\ufb00ectively causes the weak centrifugal repulsion and therefore, standing shock settles down at a smaller length scale. Moreover, when the level of dissipation is increased (namely, the increase of \u03b1B and \u02d9 m), shock front is compelled to move towards the horizon (see Fig. 4-5). This clearly indicates that rapidly rotating black holes can sustain shocks for lower dissipation rates and we observe the similar \ufb01ndings in Fig. 6. Now, we intend to study the e\ufb00ect of magnetic \ufb01elds in deciding the e\ufb00ective region of parameter space in (\u03b1B, \u02d9 mcri) plane for standing shock. In Fig. 7, we present the obtained results, where shock parameter space is computed for rapidly rotating black hole (ak = 0.8) considering different \u03b2in values. In the \ufb01gure, the regions bounded with solid, dotted, short-dashed and long-dashed curves are obtained for \u03b2in = 5, 10, 50 and 100, respectively. We observe that the e\ufb00ective region of the parameter space for shock gradually reduces with the decrease of \u03b2in. This happens c \u20ddRAS, MNRAS 000, 1\u2013?? \f10 Santabrata Das, Biplob Sarkar Figure 7. Variation of critical accretion rate ( \u02d9 mcri) for standing accretion shock with viscosity parameter (\u03b1B) for di\ufb00erent \u03b2in. Here, we \ufb01x black hole spin as ak = 0.8. Solid, dotted, dashed and long-dashed curves denote results for \u03b2in = 5, 10, 50, and 100, respectively. See text for details. due to the fact that when \u03b2in is low, synchrotron cooling becomes very much e\ufb00ective and therefore, the level of dissipation experienced by the in\ufb02owing matter turns out to be signi\ufb01cant even with moderate accretion rates. Thus, the possibility of shock formation is eventually reduced as the magnetic activity is increased inside the disc. We carry out the analysis further to calculate the critical accretion rate ( \u02d9 mcri) of the \ufb02ow as function of \u03b2in that provides global accretion solutions containing standing shock. In Fig. 8, we compare the critical accretion rate ( \u02d9 mcri) where solid and dashed curves represent the results obtained for non-rotating (ak = 0) and rapidly rotating (ak = 0.8) black holes, respectively. Here, we choose the viscosity parameter as \u03b1B = 0.01. We \ufb01nd that standing shocks exist for a wide range of \u03b2in that e\ufb00ectively includes both gas pressure dominated \ufb02ows (\u03b2 > 1) as well as magnetic pressure dominated \ufb02ows (\u03b2 < 1). Since synchrotron process directly depends on the density and magnetic \ufb01elds of the \ufb02ow, one can achieve the desired cooling e\ufb03ciency by suitably adjusting the accretion rate and plasma \u03b2. In the \ufb01gure, we observe this \ufb01ndings for both the cases (for ak = 0 and 0.8) where the critical accretion rate ( \u02d9 mcri) for shock is found to be increased with \u03b2in. In reality, when \u03b2in < 1, the inner part of the disc is magnetically dominated and a tiny amount of accretion rate is su\ufb03cient to cool the \ufb02ow. 
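The accretion rates quoted here are dimensionless. A sketch of how one might convert ṁ into physical units is given below; the normalization ṀEdd = LEdd/(ηr c²) with a radiative efficiency ηr = 0.1 and electron-scattering opacity is my assumption for illustration and is not stated in this section.

```python
import math

G     = 6.674e-8   # cm^3 g^-1 s^-2
c     = 2.998e10   # cm s^-1
M_sun = 1.989e33   # g
kappa = 0.34       # cm^2 g^-1, electron-scattering opacity (assumed)

def mdot_edd(M_bh_solar, eta_r=0.1):
    """Eddington accretion rate in g/s for a black hole of M_bh_solar solar masses,
    assuming M_Edd = L_Edd / (eta_r * c^2)."""
    M = M_bh_solar * M_sun
    L_edd = 4.0 * math.pi * G * M * c / kappa
    return L_edd / (eta_r * c**2)

# Example: mdot = 0.05 (as used in the text) around a 10 solar-mass black hole.
print("Mdot =", 0.05 * mdot_edd(10.0), "g/s")
```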
On the other hand, as \u03b2in is gradually increased, the strength of magnetic \ufb01elds becomes weak and therefore, enhanced accretion rate is needed for the cooling of the \ufb02ow. Interestingly, when \u03b2in \u226b1, magnetic \ufb01elds becomes insigni\ufb01cant and \ufb02ow is capable of sustaining standing shocks even for super-Eddington accretion rates ( \u02d9 mcri > 1). Moreover, we \ufb01nd that for a given \u03b2in, \u02d9 mcri is smaller for higher ak. This clearly indicates that in\ufb02owing matter around rapidly rotatFigure 8. Comparison of critical accretion rate \u02d9 mcri for shock with \u03b2in. In the plot, \ufb01lled circles joined with solid line denote results for ak = 0 and \ufb01lled circles connected with dashed line represent results corresponding to ak = 0.8, respectively. Here \u03b1B = 0.01 is used. See text for details. Figure 9. Variation of critical accretion rate \u02d9 mcri with ak for shock. Here, we \ufb01x viscosity parameter as \u03b1B = 0.01. Results depicted with solid, dotted, dashed and big-dashed line style correspond to \u03b2in = 10, 50, 100 and 150. See text for details. ing black holes contain shocks for relatively lower accretion rates which is consistent with the \ufb01ndings of Fig. 6. In the context of the formation of standing shock in an magnetized accretion \ufb02ow, we now illustrate the dependence of the critical accretion rate ( \u02d9 mcri) on the spin of the black hole (ak) in Fig. 9. In order for that we \ufb01x the viscosity as \u03b1B = 0.01. Here, solid, dotted, dashed and long-dashed c \u20ddRAS, MNRAS 000, 1\u2013?? \fShocks in magnetized accretion \ufb02ows 11 Figure 10. Plot of maximum energy dissipation (\u2206Emax) at the shock with ak for three distinct values of \u03b2in. Here, we choose accretion rate as \u02d9 m = 0.05 and \ufb01x viscosity parameter as \u03b1B = 0.01. Solid, dotted and dashed curves are obtained for \u03b2in = 6, 10 and 1000, respectively. See text for details. curves are obtained for \u03b2in = 10, 50, 100 and 150, respectively. We observe that for a given \u03b2in, \u02d9 mcri decreases with the increase of ak in all cases. Moreover, here again we \ufb01nd that when \u03b2in is large, accretion \ufb02ow continues to sustain standing shock for higher accretion rate and vice versa. 3.5 Energy Extraction from PSC So far, we have carried out the investigation of standing shock properties for \ufb02ows accreting on to rotating black holes. While doing this, we consider the shock to be thin and non-dissipative and therefore, the speci\ufb01c energy remains essentially conserved across the shock front (Chakrabarti 1989). However, in reality, the nature of the shock can be dissipative as well and in that case, the available energy dissipated at shock escaped through the disc surfaces along the vertical direction. A part of this energy is then converted to hard radiations and the rest may be used in jet generation as jets seem to be originated from the PSC around rotating black holes (Aktar et al. 2017, and references therein). In e\ufb00ect, this cause the depletion of energy at PSC (Singh & Chakrabarti 2011). Moreover, Chakrabarti & Titarchuk (1995) pointed out that the dissipative energy at shock is likely to be regulated via thermal Comptonization process that ultimately reduces the thermal energy of the PSC. Based on the above insight, we model the dissipated energy to be proportional to the temperature di\ufb00erence between the immediate pre-shock and post-shock \ufb02ow. 
Following this, the energy loss (\u2206E) at the shock is estimated as (Das, Chakrabarti, & Mondal 2010), \u2206E = fn(a2 + \u2212a2 \u2212), (17) where a\u2212and a+ specify the sound speed just before and after the shock transition. Here, f refers the fractional value of thermal energy di\ufb00erence dissipated at shock and we treat it as free parameter (Das, Chakrabarti, & Mondal 2010; Sarkar & Das 2013; Kumar & Chattopadhyay 2013; Sarkar, Das, & Mandal 2018). For the purpose of representation, in this work, we choose f = 0.1 all throughout. In Fig. 10, we show how the maximum energy dissipated at shock (\u2206E max) is varied with ak. While doing this, we choose \u02d9 m = 0.05 and \u03b1B = 0.01, respectively and freely vary the other \ufb02ow parameters. In the plot, solid, dotted and dashed curves illustrate the results for \u03b2in = 6, 10 and 1000, respectively. We \ufb01nd that for given \u03b2in, \u2206E max increases with the increase of ak. In general, standing shock forms at a smaller radial coordinate when ak is increased (Aktar, Das, & Nandi 2015) and hence, the thermal energy content across the shock is also increased. Eventually, the accessible thermal energy likely to be dissipated at shock is also enhanced. Therefore, for a given \u03b2in, we \ufb01nd a positive correlation between \u2206E max and ak. On the other hand, as \u03b2in is reduced, synchrotron cooling turns out to be more compelling in the \ufb02ow due to the increase of magnetic \ufb01eld strength that ultimately reduces the thermal energy content in the PSC. Thus, \u2206E max diminishes with the decrease of \u03b2in for \ufb01xed ak. Finally, if the mass, spin and accretion rate of a given black hole candidate is known, the above formalism can be employed to estimate the maximum accessible energy in the PSC region and then this unbound energy could be compared with the observed radio jet kinetic power. Such a task is under progress and would be reported elsewhere. 4 SUMMARY In this paper, we study the magnetized advection accretion \ufb02ow around rotating black hole where viscosity and synchrotron cooling is considered as the dominant dissipation processes. We calculate the shock induced global accretion solutions and investigate the e\ufb00ect of dissipation parameters, such as \u02d9 m, \u03b1B and \u03b2, in deciding the formation of shock waves. The results are summarized below. We \ufb01nd that accreting matter continues to harbor standing shock waves for ak \u2a7d0.8 (see Fig. 1-5). It may be noted that we restrict the upper limit of ak below its maximum allowed value (i.e., ak \u21921), because the adopted potential satisfactorily mimics the space-time geometry around the rotating black hole for spin parameter ak \u22720.8 (Chakrabarti & Mondal 2006). Furthermore, we have realized that standing shocks in magnetized accretion \ufb02ow are quite common and they exist for a wide range of \ufb02ow parameters (see Fig. 2-5). Next, we quantify the range of dissipation parameters that admit the formation of standing shocks in magnetized accretion \ufb02ow around rotating black holes. We \ufb01nd that \ufb02ow can sustain shock waves even when the level of dissipation is very high. More importantly, we observe that radiative cooling acts oppositely in contrast with viscous dissipation in deciding the shock parameter space (see Fig. 6). However, the e\ufb00ect of cooling can not be mitigated completely by viscous heating as their dependencies on the \ufb02ow variables are di\ufb00erent. 
Further, we \ufb01nd that the possibility of shock formation always decreases with the increase of dissipation strength. Subsequently, we calculate the critical accretion rate ( \u02d9 mcri) for standing shock. When accretion rate c \u20ddRAS, MNRAS 000, 1\u2013?? \f12 Santabrata Das, Biplob Sarkar exceeds the critical limit, standing shock conditions are not satis\ufb01ed and consequently, standing shock disappears. We \ufb01nd that \u02d9 mcri strongly depends on viscosity (\u03b1B), magnetic \ufb01elds (\u03b2) and spin of the black hole (ak), respectively (see Fig. 6-9). What is more is that standing shock exists in a magnetically dominated accretion \ufb02ow when the accretion rate lies in general in the sub-Eddington domain ( \u02d9 m < 1) whereas for gas pressure dominated \ufb02ow, shock forms even for super-Eddington accretion rate ( \u02d9 m > 1) (see Fig. 7-9). Further, we obtain the standing shock solution for magnetized accretion \ufb02ow, where the shock is considered to be dissipative by nature. The available energy dissipated at shock (\u2206E) is usually escaped through the disc surface that is being utilized to power the jets/out\ufb02ows (Le & Becker 2004, 2005; Das, Becker, & Le 2009). Towards this, we compute the maximum energy dissipated at shock (\u2206E max) and \ufb01nd that \u2206E max increases with ak although its dependence on \u03b2in is very much conspicuous. Finally, we would like to mention that the present formalism is developed by adopting a simpli\ufb01ed pseudo potential to delineate the gravitational e\ufb00ect around a rotating black hole. Incidentally, while studying the non-linear shock solutions, this approach allow us to avoid the mathematical complexity of general theory of relativity and at the same time it retains the salient features of space-time geometry around rotating black holes (Chakrabarti & Mondal 2006). In this regard, although the present formalism introduces a bit of imperfections, however, we believe that the basic \ufb01ndings of this work will qualitatively remain unaltered due to this approximation. ACKNOWLEDGMENTS Authors thank the anonymous referee for useful comments and constructive suggestions." + }, + { + "url": "http://arxiv.org/abs/1405.4415v1", + "title": "Periodic massloss from viscous accretion flows around black holes", + "abstract": "We investigate the behaviour of low angular momentum viscous accretion flows\naround black holes using Smooth Particle Hydrodynamics (SPH) method. Earlier,\nit has been observed that in a significant part of the energy and angular\nmomentum parameter space, rotating transonic accretion flow undergoes shock\ntransition before entering in to the black hole and a part of the post-shock\nmatter is ejected as bipolar outflows, which are supposed to be the precursor\nof relativistic jets. In this work, we simulate accretion flows having\ninjection parameters from the inviscid shock parameter space, and study the\nresponse of viscosity on them. With the increase of viscosity, shock becomes\ntime dependent and starts to oscillate when the viscosity parameter crosses its\ncritical value. As a result, the in falling matter inside the post-shock region\nexhibits quasi-periodic variations and causes periodic ejection of matter from\nthe inner disc as outflows. In addition, the same hot and dense post-shock\nmatter emits high energy radiation and the emanating photon flux also modulates\nquasi-periodically. 
Assuming a ten solar mass black hole, the corresponding\npower density spectrum peaks at the fundamental frequency of few Hz followed by\nmultiple harmonics. This feature is very common in several outbursting black\nhole candidates. We discuss the implications of such periodic variations.", + "authors": "Santabrata Das, Indranil Chattopadhyay, Anuj Nandi, Diego Molteni", + "published": "2014-05-17", + "updated": "2014-05-17", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Luminosities and spectra emanating from the microquasars and AGNs are best explained by the gravitational energy released due to accretion onto compact objects such as black holes. However, it has been established in recent years, that black hole candidates, be it stellar mass or super massive, emits powerful collimated out\ufb02ows or jets (Livio 1999). Since black holes do not have intrinsic atmosphere or hard surface, these jets have to be generated from the accretion disc itself. In a very interesting paper, Junor et al. (1999) had shown that jets originate from around a region less than 100rg (rg\u2261Schwarzschild radius) across the central object of the nearby active galaxy M87. Since the typical timescale of an AGN or a microquasar scales with mass, temporal behaviour of black hole candidates is studied with micro\u22c6E-mail: sbdas@iitg.ernet.in (SD); indra@aries.res.in (IC); anuj@isac.gov.in (AN); diego.molteni@unipa.it (DM) quasars (McHardy et al. 2006). After investigating the connection between accretion and ejection in ten microquasars, Gallo et al. (2003) concluded that mildly relativistic, quasi steady jets are generally ejected in the low hard spectral states (i.e., when electromagnetic spectra peak in the high energy power-law frequency range) of the accretion disc. It was also shown that jets tend to get stronger as the microquasar moves to the intermediate hard states, and truly relativistic ejections are observed during hard-intermediate to soft-intermediate transition, after which the microquasar enters canonical high soft state (i.e., when spectra peak in the thermal low energy range), which shows no jet activity (Rushton et al. 2010; Miller-Jones et al. 2012). All these evidences suggest that the generation or quenching of jets do depend on various states of the accretion disc, and that, the jet formation mechanism is linked with the processes that dominate at distances relatively closer to the black hole. It is well known that, spectra from microquasar change states between, the low hard state (LH) and high soft state \f2 Santabrata Das, Indranil Chattopadhyay, Anuj Nandi, Diego Molteni (HS), through a series of intermediate states. Interestingly, the hard power-law photons exhibit a quasi-periodic oscillations (QPO). The QPOs evolve along with the spectral states of the accretion disc, starting with low frequencies in LH, increasing as the luminosity increases, and reaches highest value before disappearing in the HS state (Chakrabarti et al. 2008; Shaposhnikov & Titarchuk 2009; Nandi et al. 2012, 2013). Interestingly, although QPO frequency increases as the accretion disc moves from LH to intermediate states, but no QPO was detected during ejection of relativistic jets (Nandi et al. 2013; Radhika & Nandi 2013) which suggests that, probably the part of the disc responsible for QPO is entirely ejected as relativistic jets. This conversely also suggests that, the inner part of the disc is responsible for QPOs and is also the base of the jet. 
Accretion disc models which are invoked to explain the accretionejection phenomena around black hole candidates, should at least address the connection between spectral states, QPO evolution and the evolution of jets, apart from matching the luminosities radiated by AGNs and microquasars. There are various accretion disc models in the literature. We know that, matter crosses the horizon with the speed of light (c) and circular orbits cannot exist within the marginally stable orbit (3rg). So the inner boundary condition for accretion onto black hole is necessarily transonic, as well as, sub-Keplerian, which implies that advection should be signi\ufb01cant at least close to the horizon. The very \ufb01rst model of accretion onto black holes was of course radial in\ufb02ow of matter, which was basically the general relativistic version of the Bondi solutions (Bondi 1952; Michel 1972). However, the infall time scale of radial accretion onto black holes is short, and therefore has very little time to produce the huge luminosities observed from AGNs and microquasars (Shapiro 1973). On the other hand, Shakura & Sunyaev (1973) considered a geometrically thin but optically thick accretion disc characterized by negligible advection, but by virtue of possessing Keplerian angular momentum distribution, the disc was rotation dominated. This disc model was radiatively e\ufb03cient and produced the multi-coloured blackbody part of the spectra or the \u2018blue bump\u2019 radiated by the AGNs. However, the presence of hard power-law tail in the spectra of the black hole candidates indicated the necessity of a hot Comptonizing cloud, which was neither present in Keplerian disc nor its origin could be identi\ufb01ed in any self-consistent manner from such a disc. Therefore, models with advection gained importance. Theoretically, it was shown that in a signi\ufb01cant range of energy and angular momentum, multiple sonic points may exist for rotating, transonic accretion \ufb02ows onto black holes, where the existence of the inner sonic point is purely due to the presence of gravity stronger than that due to the Newtonian variety (Liang & Thompson 1980). It has been shown numerically as well as analytically, that such transonic matter in the multiple sonic point regime, may undergo steady or non-steady shock transition. Shock in accretion may be pressure and centrifugally supported, if the \ufb02ow is rotating (Fukue 1987; Chakrabarti 1990; Molteni et al. 1994, 1996a,b; Chakrabarti & Das 2004; Molteni et al. 2006; Chattopadhyay & Das 2007; Das 2007; Das & Chattopadhyay 2008) or only be pressure supported if the \ufb02ow is spherical (Chang & Ostriker 1985; Kazanas & Ellison 1986; Babul et al. 1989). The most popular amongst accretion disc models with advection is the so-called advection dominated accretion \ufb02ow (ADAF), and it is characterized by a single sonic point close to the horizon (Narayan et al. 1997). It has been shown later, that ADAF type solution is a subset of a general viscous advective accretion solutions (Lu et al. 1999; Becker et al. 2008; Kumar & Chattopadhyay 2013). The shock in accretion disc around black holes has been shown to exist for multispecies \ufb02ows with variable adiabatic index (\u03b3) as well (Chattopadhyay 2008; Chattopadhyay & Chakrabarti 2011; Kumar et al. 2013; Chattopadhyay & Kumar 2013). Shock transition for accretion \ufb02ow are favourable mechanism to explain many of the observational features of black hole candidates. 
Hot electrons in the post-shock region, abbreviated as CENBOL (CENtrifugal pressure supported Boundary Layer), may explain the power-law tail of the spectrum from black hole candidates in hard states, while a weak or no shock solution produces a dearth of hot electrons which may cause the weaker power-law tail in the soft states (Chakrabarti & Titarchuk 1995; Mandal & Chakrabarti 2010). Moreover, a large number of authors have shown the formation of bipolar out\ufb02ows from the post-shock accretion \ufb02ow, both numerically (Molteni et al. 1994, 1996b) as well as analytically (Le & Becker 2005; Chattopadhyay & Das 2007; Fukumura & Kazanas 2007; Das & Chattopadhyay 2008; Das et al. 2009; Kumar & Chattopadhyay 2013; Kumar et al. 2013, 2014). It is also interesting to note that, by considering a simpli\ufb01ed inviscid accretion, and which has the right parameters to form a standing shock, Das et al. (2001) qualitatively showed that there would be no jets in no-shock or weak shock condition of the disc, or in other words, when the disc is in the soft spectral state. This indicates the conclusions of Gallo et al. (2003). Such a scheme of accretion-ejection solution is interesting because, the jet base is not the entire accretion disc but the inner part of the disc, as has been suggested by observations (Junor et al. 1999; Doeleman et al. 2012). Although, most of the e\ufb00orts have been undertaken theoretically to study steady shocks, perhaps transient shock formations may explain the transient events of the black hole candidates much better. Molteni et al. (1996b), considered bremsstrahlung cooling of an inviscid \ufb02ow, and reported there is signi\ufb01cant oscillation of the post-shock region. Since the post-shock \ufb02ow is of higher density and temperature compared to the pre-shock \ufb02ow, the cooling rates are higher. If the cooling timescale roughly matches with the infall timescale at the shock the resonance condition occurs and the post-shock \ufb02ow oscillates. Since the post-shock region is the source of hard X-rays (Chakrabarti & Titarchuk 1995), thus its oscillation would be re\ufb02ected in the oscillation of the emitted hard X-rays \u2014 a plausible explanation for QPOs (Chakrabarti & Manickam 2000). In this paper, we will focus on the oscillation of the shock front, but now due to viscosity instead of any cooling mechanism. Chakrabarti & Das (2004); Kumar & Chattopadhyay (2013) had shown that, with the increase of viscosity parameter, in the energy-angular momentum parameter space, the domain of shock decreases. We know viscosity transports angular momentum outwards, while the speci\ufb01c energy increases inwards. How does the general \ufb02ow properties of matter, which are being launched with same injection speed \fPeriodic massloss from viscous accretion \ufb02ows 3 and temperature, be a\ufb00ected with the increase in viscosity parameter? It has been shown from simulations and theoretical studies that post-shock matter is ejected as jets, however if the shock is weak then the jet should be of lower power! We would like to \ufb01nd the condition of the shocked disc that produces weak or strong jets. Disc instability due to viscous transport has been shown before and has been identi\ufb01ed with QPOs (Lanzafame et al. 1998, 2008; Lee et al. 2011), however, we would like to show how this instability might a\ufb00ect the shock induced bipolar out\ufb02ows. 
Moreover, it has been shown theoretically that the energy and angular momentum for which a steady shock exists in inviscid flow will become unstable for viscous flow (Chakrabarti & Das 2004; Kumar & Chattopadhyay 2013). We would like to see how the mass outflow rate depends on the unstable shock, or in other words, whether there is any connection between QPOs and the mass outflow rate. In this paper, we would like to address these issues. In the next section, we present the governing equations and model assumptions. In section 3, we present the results, and in the last section we draw concluding remarks. 2 GOVERNING EQUATIONS We consider a non-steady accretion flow around a non-rotating black hole. The space-time geometry around a Schwarzschild black hole is modeled using the pseudo-Newtonian potential introduced by Paczyński & Wiita (1980). In this work, we use geometric units 2G = MB = c = 1, where G, MB and c are the gravitational constant, the mass of the black hole and the speed of light, respectively. In this unit system, distance, velocity and time are measured in units of rg = 2GMB/c^2, c and tg = 2GMB/c^3 respectively, and the equations have been made dimensionless. The Lagrangian formulation of the two-dimensional fluid dynamics equations for SPH (Monaghan 1992) in cylindrical coordinates is given by (Lanzafame et al. 1998) as follows. The mass conservation equation, Dρ/Dt = −ρ∇·v, (1) where D/Dt denotes the co-moving time derivative and ρ is the density. The radial momentum equation is given by, Dvr/Dt = −(1/ρ)(∂P/∂r) + gr + vφ²/r. (4a) The vertical momentum equation is, Dvz/Dt = −(1/ρ)(∂P/∂z) + gz. (4b) The azimuthal momentum equation is, Dvφ/Dt = −vφvr/r + (1/ρ)[(1/r²)(∂/∂r)(r²τrφ)], (4c) where τrφ is the r−φ component of the viscous stress tensor and is given by, τrφ = ηr(∂Ω/∂r), (4d) and the angular velocity is given by, Ω = vφ/r. (4e) In Eqs. 4(a-b), gr and gz are the components of the gravitational force (Paczyński & Wiita 1980) and are given by, gr = −[1/{2(R − 1)²}](r/R), and gz = −[1/{2(R − 1)²}](z/R), (4f) where R = √(r² + z²). The form of the dynamic viscosity parameter is given by (Shakura & Sunyaev 1973), η = νρ = αρah, where ν is the kinematic viscosity, α is the viscosity parameter, a (= √(γP/ρ)) is the sound speed, ρ is the mass density, h [= √(2/γ) a r^(1/2)(r − 1)] is the disc half height estimated using hydrostatic equilibrium (Chakrabarti & Das 2004) and v = √(vr² + vz²). The conservation of energy is given by, D/Dt(ε + v²/2) = −(P/ρ)∇·v + v·(Dv/Dt) + (1/ρ)∇·(τ̃ : v), (4g) with (Dv/Dt) = −(1/ρ)∇P + g, where P = (γ − 1)ρε is the equation of state of an ideal gas, g is the gravitational acceleration and ε is the internal energy, respectively. τ̃ : v is the vector resulting from the contraction of the stress tensor with the velocity vector. We include only τrφ (namely, the r−φ component) since it is the dominant contributor to the viscous stress. A complete steady solution requires the equations of energy, angular momentum and mass conservation supplied by transonic conditions at the critical points and the Rankine-Hugoniot conditions at the shock.
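A minimal translation of some of these ingredients into code is sketched below, assuming a straight transcription of the gravity components (eq. 4f), the stress prescription (eq. 4d) and the η = αρah, h = √(2/γ) a r^(1/2)(r − 1) relations into Python; it is not the SPH solver itself, and the sample values at the end are placeholders.

```python
import numpy as np

# Geometric units of the paper: 2G = M_B = c = 1, so lengths are in r_g = 2GM_B/c^2.

def pw_gravity(r, z):
    """Paczynski-Wiita gravitational acceleration components (eq. 4f):
    g_r = -r / (2 (R-1)^2 R),  g_z = -z / (2 (R-1)^2 R),  with R = sqrt(r^2 + z^2)."""
    R = np.sqrt(r**2 + z**2)
    pref = -1.0 / (2.0 * (R - 1.0)**2 * R)
    return pref * r, pref * z

def sound_speed(P, rho, gamma=4.0/3.0):
    """a = sqrt(gamma * P / rho)."""
    return np.sqrt(gamma * P / rho)

def half_height(a, r, gamma=4.0/3.0):
    """Disc half height from vertical hydrostatic equilibrium: h = sqrt(2/gamma) a r^(1/2) (r-1)."""
    return np.sqrt(2.0 / gamma) * a * np.sqrt(r) * (r - 1.0)

def dynamic_viscosity(alpha, rho, a, h):
    """Shakura-Sunyaev-type prescription used in the text: eta = nu*rho = alpha*rho*a*h."""
    return alpha * rho * a * h

def stress_rphi(eta, r, dOmega_dr):
    """tau_rphi = eta * r * dOmega/dr (eq. 4d)."""
    return eta * r * dOmega_dr

# Quick check at a representative equatorial point (placeholder state values).
r, z = 20.0, 0.0
P, rho, alpha = 1.0e-3, 1.0e-2, 0.011
a = sound_speed(P, rho)
h = half_height(a, r)
print(pw_gravity(r, z), dynamic_viscosity(alpha, rho, a, h))
```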
3 RESULTS In order to obtain the time dependent axisymmetric, viscous accretion solution, we adopt Smooth Particle Hydrodynamics scheme. Here, we inject SPH particles with radial velocity vinj, angular momentum \u03bbinj and sound speed ainj at the injection radius rinj. Initially, the accreting matter is treated as inviscid in nature. At the injection radius, the disc height is estimated considering the fact that the \ufb02ow remain in hydrostatic equilibrium in the vertical direction and obtained as Hinj \u223cainjr1/2 inj (rinj \u22121). With the suitable choice of \ufb02ow parameters at the injection radius, accretion \ufb02ow may undergo shock transition. For a given set of \ufb02ow parameters like the Bernoulli parameter E = 0.00449 and the speci\ufb01c angular momentum \u03bb = 1.63, we plot the equatorial Mach number M = vr/a of the \ufb02ow with r at t = 317.39tg, and the transient shock is at rs = 7.5rg (Fig. 1a). The steady state is reached at t = 104tg, and the stationary shock settles at rs = 15rg (Fig. 1b). The steady state angular momentum distribution on the equatorial plane is shown in Fig. 1c. The position of the SPH particles in steady state is shown in Fig. 1d in the r \u2212z plane. Once the steady state is achieved in the inviscid limit, \f4 Santabrata Das, Indranil Chattopadhyay, Anuj Nandi, Diego Molteni Figure 1. (a) Mach number M = vr/a on the equatorial plane with r at t = 317.39 tg, (b) M on the equatorial plane with r, after the solution has reached steady state. (c) Speci\ufb01c angular momentum \u03bb on the equatorial plane with r, and (d) the distribution of SPH particles in steady state in r \u2212z plane. The injection parameters are injection velocity vinj = \u22120.06436, sound speed ainj = 0.06328 and angular momentum \u03bbinj = 1.63 at rinj = 50.4. See text for details. we turn on the viscosity. It must be pointed out that turning on the viscosity after obtaining the inviscid steady state shock solution, doesn\u2019t a\ufb00ect our conclusions since exactly the same result would be obtained if the viscosity is turned on initially. However, since turning on \u03b1 makes the numerical code slow, it would have taken much longer time to search the parameters for which the disc admits shock solution. The role of viscosity is to remove angular momentum outwards, and consequently it perturbs the shock front, and may render the stationary shock unstable. In Fig. 2a, we show the time evolution of the shock location for inviscid accretion \ufb02ow. The shock location is measured at the disc equatorial plane. Here, we use input parameters as rinj = 50.4, vinj = \u22120.06436, ainj = 0.06328, and \u03bbinj = 1.63 respectively. For \u03b1 = 0.005, we \ufb01nd stable shock at around 17rg depicted in Fig. 2b. When viscosity is increased further and reached its critical limit, namely \u03b1 = 0.011, shock front starts oscillating and the oscillation sustains forever, provided the injected \ufb02ow variables remain unaltered. This feature is shown in Fig. 2c. For further increase of viscosity, the oscillation becomes irregular and for even higher \u03b1, the shock oscillation is irrevocably unstable. We change the injection parameters to vinj = \u22120.06532, ainj = 0.06221, and \u03bbinj = 1.7 at rinj = 50.8, and plot the time evolution of the shock rs for inviscid \ufb02ow (Fig. 2d), \u03b1 = 0.003 (Fig. 2e) and \u03b1 = 0.007 (Fig. 2f). The mechanism of shock oscillation due to the presence of viscosity, may be understood in the following manner. 
We know viscosity transports angular momentum (\u03bb) outwards. Since the post-shock disc is hot, Figure 2. Time evolution of shock location. Viscosity parameter chosen are (a) \u03b1 = 0.0 (b) 0.005 and (c) 0.011, for injection parameters rinj = 50.4 rg with injection velocity vinj = \u22120.06436, sound speed ainj = 0.06328 and angular momentum \u03bbinj = 1.63. And the panels on the right are plotted for viscosity (d) \u03b1 = 0, (e) 0.003 and (f) 0.007, for injection parameters rinj = 50.8 rg with injection velocity vinj = \u22120.06532, sound speed ainj = 0.06221 and angular momentum \u03bbinj = 1.7. Here, the adiabatic index \u03b3 = 4/3. the angular momentum transport in the post-shock disc is more e\ufb03cient than the pre-shock disc. Accordingly, angular momentum piles up in the post shock region. On the other hand, a steady shock forms if the momentum \ufb02ux, energy \ufb02ux and mass \ufb02ux are conserved across the shock. Therefore, as the angular momentum piles up in the immediate post-shock region, extra centrifugal force will try to push the shock front outward. If this piling up of \u03bb is moderate then the expanded shock front will \ufb01nd equilibrium at some position to form steady shock (e.g., Figs. 2b, 2e). If this outward push is strong enough then the expanding shock front will overshoot a possible equilibrium position and in that case the shock front will oscillate (e.g., Figs. 2c, 2f). If the angular momentum piling results in too strong centrifugal barrier, it could drive the shock out of the computation domain. It may be noted that we are simulating the inner part of the disc, i.e., the inner few \u00d7 10rg of the disc. And, therefore the injection parameters are similar to the inner boundary conditions of an accretion disc. Since matter entering a black hole has angular momentum, typically 1 < \u223c\u03bb < \u223c2, our chosen angular momentum are also of the same order. However, at the outer edge angular momentum may reach very high value depending on \u03b1. For example, with injection parameters rinj = 50.4, vinj = \u22120.06436, ainj = 0.06328, and \u03bbinj = 1.63, one can integrate the e\ufb00ective one dimensional equations of motion to estimate the angular momentum at \fPeriodic massloss from viscous accretion \ufb02ows 5 Figure 3. Velocity vectors are in the r \u2212z plane. SPH particles having angular momentum \u03bbinj = 1.63 are injected supersonically from rinj = 50.4 with injection velocity vinj = \u22120.06436 and sound speed ainj = 0.06328, respectively. Viscosity parameter is chosen as \u03b1 = 0.011 and adiabatic index \u03b3 = 4/3. Shock location oscillates with time. Figure 3(a-h) represent the various snap shots of velocity distribution taken at equal interval within a complete period of shock oscillation. In Figure 3f, rT s and rs are indicated, and clearly there is a phase lag between the two. Here, Figure 3c denotes the case when shock location is closest from the black hole and Figure 3g represents the case when shock location is at its maximum value. Density contours (solid curves, red online) are plotted over the velocity vector \ufb01eld. \f6 Santabrata Das, Indranil Chattopadhyay, Anuj Nandi, Diego Molteni the outer boundary and it is typically around 45 rgc at rout \u223c4000 rg. From Figs. 2c-2e, it is clear that the shock can experience persistent oscillation for some critical viscosity and injection parameters. But the shock front is a surface and that too not a rigid one. 
Therefore, every part of the shock front will not oscillate in the same phase, resulting in a phase lag between the shock front on and around the equatorial plane and the top of the shock front (rT s ). For simplicity, we record the variation of shock location with time at the disc equatorial plane which is shown in top-left panel of Fig. 3. Note that there exists quasi-periodicity in the variation of shock location with time. We identify eight shock locations within a given oscillation period that are marked in open circles. The respective velocity \ufb01eld and density contours (solid, online red) of the \ufb02ow in the r \u2212z plane is shown in the rest of the panels of Figs. 3(a-h). Higher density and extra thermal gradient force in the CENBOL region causes a fraction of in falling matter to bounce-o\ufb00as out\ufb02ow. When shock front oscillates, post-shock volume also oscillates which induces a periodic variation of driving force responsible to vertically remove a part of the in falling matter. As the shock reaches its minimum (Fig. 3c), the thermal driving of out\ufb02ow is weak, so the spewed up matter falls back. The post-shock out\ufb02ow continues to fall as the shock expands to its maxima (Figs. 3d, e, f). However, as the shock reaches its maximum value the thermal driving also recovers (Fig. 3g). The extra thermal driving plus the squeeze of the shock front as it shrinks, spews strong out\ufb02ow (Fig. 3g). In Fig. 3f, we have indicated the shock location on the equatorial plane rs, and the top of the shock front rT s . The position of the shock front can easily be identi\ufb01ed from the clustering of the density contours connecting rs and rT s . From Figs. 3(a-h), it is clear that the mass out\ufb02ow rate is signi\ufb01cant when rT s > \u223crs. Due to shock transition, the post shock matter becomes hot and dense which would essentially be responsible to emit high energy radiation. At the critical viscosity, since the shock front exhibits regular oscillation, the inner part of the disc, i.e., CENBOL, also oscillates indicating the variation of photon \ufb02ux emanating from the disc. Thus, a correlation between the variation of shock front and emitted radiation seem to be viable. Usually, the bremsstrahlung emission is estimated as, EBrem = Z r2 r1 \u03c12T 1/2r2dr, where, r1 and r2 are the radii of interest within which radiation is being computed and T is the local temperature. In this work, we calculate the total bremsstrahlung emission for the matter from the CENBOL region. Also, we quantify the mass out\ufb02ow rate calculated assuming an annular cylinder at the injection radius which is concentric with the vertical axis. The thickness of the cylinder is considered to be twice the size of a SPH particle. This ensures at least one SPH particle lies within the cylindrical annulus. We identify particles leaving the computational domain as out\ufb02ow provided they have positive resultant velocity, i.e., vr > 0 and vz > 0 and they lie above the disc height at the injection radius. With this, we estimate the mass out\ufb02ow rate which is de\ufb01ned as R \u02d9 m = out\ufb02ow rate ( \u02d9 Mout)/accretion rate (| \u02d9 Min|) and observe its time evolution. Here, \u02d9 Min(out) = 2\u03c0\u03c1inj(out)vinj(out)xinj(out)Hinj(out). In Figure 4, we present the variation of shock location, corresponding bremsstrahlung \ufb02ux from the post-shock region and mass out\ufb02ow rate with time. Here, the radiative \ufb02ux is plotted in arbitrary unit. 
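The two diagnostics defined above, the bremsstrahlung estimate EBrem = ∫ ρ²T^(1/2) r² dr over the CENBOL and the outflow rate Rṁ = Ṁout/|Ṁin| with Ṁ = 2πρ v x H, are straightforward to post-process from particle data. The sketch below assumes hypothetical SPH particle arrays (r, rho, T) and placeholder values in the Ṁ call; it is not the authors' analysis code.

```python
import numpy as np

def bremsstrahlung_cenbol(r, rho, T, r_in, r_shock):
    """Approximate E_Brem = integral of rho^2 * sqrt(T) * r^2 dr over the
    post-shock region [r_in, r_shock], from particle values sorted in radius."""
    mask = (r >= r_in) & (r <= r_shock)
    order = np.argsort(r[mask])
    rr = r[mask][order]
    integrand = (rho[mask]**2 * np.sqrt(T[mask]) * r[mask]**2)[order]
    return np.trapz(integrand, rr)

def mdot(rho, v, x, H):
    """Mdot = 2*pi*rho*v*x*H evaluated at a given radius (the injection radius here)."""
    return 2.0 * np.pi * rho * v * x * H

# Hypothetical particle sample, for illustration only.
rng = np.random.default_rng(0)
r   = rng.uniform(2.0, 50.0, 5000)
rho = 1.0e-2 / r
T   = 1.0e-1 / r
print("E_Brem(CENBOL) ~", bremsstrahlung_cenbol(r, rho, T, r_in=3.0, r_shock=15.0))
print("R_mdot ~", mdot(2e-3, 0.02, 50.4, 3.0) / abs(mdot(1e-2, 0.064, 50.4, 3.0)))
```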
Assuming a 10M\u2299 black hole, the overall time evolution of \ufb01ve seconds (\u226150600 code time) is shown for representation. The input parameters are rinj = 50.4, vinj = \u22120.06436, ainj = 0.06328, \u03bbinj = 1.63 and \u03b1 = 0.011 respectively. Note that persistent shock oscillation takes place over a large time interval, with the oscillation amplitude \u223c3rg. This phenomenon exhibits the emission of non-steady radiative \ufb02ux which is nicely accounted as quasi-periodic variation commonly seen in many black hole candidates (Chakrabarti & Manickam 2000; Remillard & McClintock 2006). Subsequently, periodic mass ejection also results from the vicinity of the gravitating objects as a consequence of the modulation of the inner part of the disc due to shock oscillation. To understand the correlation between the shock oscillation and the emitted photon \ufb02ux from the inner part of the disc, we calculate the Fourier spectra of the quasiperiodic variation of the shock front and the power spectra of bremsstrahlung \ufb02ux for matter resides within the boundary of post-shock region as well as out\ufb02ow with resultant velocity v > 0. The obtained results are shown in Figure 5, where the top panel is for shock oscillation, middle panel is for photon \ufb02ux variation from post-shock disc and bottom panel is for photon \ufb02ux variation of out\ufb02owing matter, respectively. Here, the input parameters are same as Figure 4. We \ufb01nd that the quasi-periodic variation of the shock location and the photon \ufb02uxes from post-shock disc and out\ufb02ow are characterized by the fundamental frequency \u03bdfund = 3.7 Hz which is followed by multiple harmonics. The \ufb01rst few prominent harmonic frequencies are 7.4 Hz (\u223c2 \u00d7 \u03bdfund), 11.2 Hz (\u223c3 \u00d7 \u03bdfund) and 14.2 Hz (\u223c4 \u00d7 \u03bdfund). This suggests that the dynamics of the inner part of the disc i.e., the post-shock disc and emitted \ufb02uxes are tightly coupled. In order to understand the generic nature of the above \ufb01ndings, we carried out another simulation with di\ufb00erent input parameters. The results are shown in Figure 6, where we use rinj = 50.4 rg, vinj = \u22120.06436, ainj = 0.06328, \u03bbinj = 1.61 and \u03b1 = 0.013, respectively. The solutions are obtained similar to the previous case, i.e., \ufb01rst a steady state inviscid solution is obtained and then the viscosity is turned on. The corresponding Fourier spectra of shock oscillation and power spectra of radiative \ufb02uxes are presented in the top, middle and bottom panel, respectively. The obtained frequencies for quasi-periodic variations are 2.9 Hz (\u03bdfund), 5.6 Hz (\u223c2 \u00d7 \u03bdfund), 9.3 Hz (\u223c3 \u00d7 \u03bdfund) and 15 Hz (\u223c5 \u00d7 \u03bdfund). In both the cases, the obtained power density spectra (PDS) of emitted radiation has signi\ufb01cant similarity with number of observational results (Remillard & McClintock 2006; Nandi et al. 2012). The quasi-periodicity that we observed in the power spectra of simulated results seems to be generic in nature. Several Galactic black hole sources exhibit QPO in the Xray power spectra along with the harmonics. In Figure 7, we plotted one such observed X-ray power spectra of black hole source GX 339-4 of the 2010-11 outburst, which clearly shows the presence of fundamental QPO (\u223c2.42 Hz) and harmonics at \u223c4.88 Hz and \u223c7.20 Hz (Nandi et al. 2012). 
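The power spectra quoted here (νfund ≈ 3.7 Hz followed by harmonics) follow from a standard periodogram of the shock-location or flux time series, with the code time converted to seconds through tg = 2GMB/c³ for the assumed 10 M⊙ black hole (the units defined in Section 2). The sketch below is a generic version of that post-processing applied to a synthetic series, not to the simulation data.

```python
import numpy as np

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # cgs

def tg_seconds(M_bh_solar):
    """Code time unit t_g = 2 G M_B / c^3 in seconds."""
    return 2.0 * G * M_bh_solar * M_sun / c**3

def power_spectrum(series, dt):
    """One-sided power density spectrum of an evenly sampled time series."""
    series = series - np.mean(series)
    power = np.abs(np.fft.rfft(series))**2
    freq = np.fft.rfftfreq(len(series), d=dt)
    return freq, power

# Synthetic shock-location record: a 3.7 Hz oscillation plus its first harmonic.
M_bh = 10.0
dt = 10.0 * tg_seconds(M_bh)          # sample every 10 code-time units
t = np.arange(0.0, 20.0, dt)          # ~20 s of data, as in the text
x_shock = 16.0 + 3.0*np.sin(2*np.pi*3.7*t) + 1.0*np.sin(2*np.pi*7.4*t)
freq, power = power_spectrum(x_shock, dt)
print("peak at ~%.1f Hz" % freq[np.argmax(power)])
```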
This observational finding directly supports our simulation results and perhaps establishes that the origin of such photon flux variation lies in the hydrodynamic modulation of the inner part of the disc through shock oscillation.

[Figure 4. Top panel: variation of shock location with time. Middle panel: variation of the bremsstrahlung emission (arbitrary units) with time. Bottom panel: variation of mass outflow rate with time. Here, $\alpha = 0.011$ and $M_B = 10 M_\odot$; other parameters are the same as in Figure 3.]

[Figure 5. Top panel: Fourier spectrum of the shock-location variation at the disc equatorial plane. Middle and bottom panels: power spectra of the bremsstrahlung flux variation calculated for SPH particles residing within the CENBOL and within the outflow region, respectively. Here, $\lambda_{\rm inj} = 1.63$ and $\alpha = 0.011$, and about 20 s of simulated data are used; other parameters are the same as in Figure 1. The fundamental QPO frequency is $\sim 3.7$ Hz in both cases, although the bremsstrahlung fluxes of the CENBOL and the outflow differ significantly.]

Recently, Nandi et al. (2013) reported the possible association of QPOs in X-rays with jets, in the form of radio flares, in outbursting black hole sources through the accretion flow dynamics. Here also we find that the dynamics of the post-shock disc plays a major role in both jet generation and the emitted radiation. In other words, according to the present study the post-shock disc appears to be the precursor of jets as well as of QPOs.

4 DISCUSSION AND CONCLUDING REMARKS

We have studied the dynamics of viscous accretion flow around black holes using time-dependent numerical simulation. Accreting matter slows down against gravity as it experiences a centrifugal barrier and eventually enters the black hole after undergoing a shock transition. The post-shock flow is hot and compressed, producing a thermal pressure gradient across the shock. As a result, part of the accreting matter is deflected as a bipolar jet in the direction perpendicular to the disc equatorial plane. When the viscosity is increased, the shock becomes non-steady and ultimately starts oscillating once the viscosity reaches its critical limit. Consequently, the outflowing matter also begins to show quasi-periodic variation. Since the inner disc is hot and dense, high-energy radiation must be emitted from the vicinity of the black hole, and when the inner disc vibrates in the radial direction, the emitted photon flux is modulated as well. We compute the power density spectra of this behaviour and obtain a fundamental peak at a few Hz. We find that some of the harmonics are very prominent, as seen in the observational results of several black hole candidates. The highlight of this paper is to show that the oscillation of the shocked accretion flow produces QPOs with a fundamental as well as harmonics (Figs. 5, 6), as is seen in observations (Fig. 7).
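The frequencies above are quoted in Hz for an assumed 10 M$_\odot$ black hole. The sketch below shows the implied conversion between code time and seconds; the choice of $2GM_{\rm BH}/c^3$ as the code time unit is our assumption, made because it reproduces the "5 s $\approx$ 50,600 code time units" figure quoted earlier.

```python
# Unit-conversion sketch.  Assumption (ours): the simulation time unit is
# 2*G*M_BH/c**3, which reproduces "5 s ~ 50,600 code time units" for 10 M_sun.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg

def code_time_unit(m_bh_in_msun):
    """Seconds per code time unit for a black hole of the given mass."""
    return 2.0 * G * m_bh_in_msun * M_sun / c**3

t_unit = code_time_unit(10.0)   # ~9.9e-5 s
print(5.0 / t_unit)             # ~5.1e4 code units, i.e. the quoted ~50,600
print(1.0 / (3.7 * t_unit))     # the 3.7 Hz fundamental <-> ~2.7e3 code-unit period
```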
Interestingly, the bipolar outflow shows at least the fundamental frequency in its PDS for the case depicted in Figs. 5; however, the fundamental and the harmonics are fairly weak in Figs. 6. This result therefore suggests that photons from the outflows and jets would at least show the fundamental frequency, but probably no harmonics. Moreover, does this mean that if we happen to 'see' down the length of a jet, we would observe quasi-periodic oscillations of photons in some jets (e.g., Figs. 5), while in other jets the QPO signature would be washed out (e.g., Figs. 6)? Indeed, QPOs have not been detected in most blazars, although in a few cases a QPO has been found (Lachowicz et al. 2009). This issue needs further investigation.

[Figure 6. Top panel: Fourier spectrum of the shock-location variation at the disc equatorial plane for $\lambda_{\rm inj} = 1.61$ and $\alpha = 0.013$; the other parameters are the same as in Figure 1. Middle and bottom panels: power spectra of the bremsstrahlung flux variation calculated for SPH particles residing within the CENBOL and within the outflow region, respectively. In this case, about 15 s of simulated data are used. The fundamental QPO frequency is $\sim 2.9$ Hz in both cases.]

[Figure 7. Signature of the multiple QPO frequencies ($\sim 2.42$ Hz, 4.88 Hz and 7.20 Hz) observed in the power spectrum of the Galactic black hole source GX 339-4.]

Furthermore, while hot, dissipative flows show a single shock, low-energy dissipative flows have shown multiple shocks (Lanzafame et al. 2008; Lee et al. 2011); however, the effect of high $\alpha$ has not been investigated. The outflow also shows quasi-periodicity: blobs of matter are ejected persistently with the oscillation of the inner part of the disc, and such persistent activity will eventually give rise to a stream of matter and therefore a quasi-steady, mildly relativistic jet. These ejections are not the ballistic relativistic ejections observed during the transition from the hard-intermediate to the soft-intermediate spectral state. It has recently been shown that the momentum deposited by the disc photons onto the jets makes the jets stronger as the disc moves from LS to the hard-intermediate spectral state (Kumar et al. 2014); simulations of this will be communicated elsewhere. In this work, the mechanism studied for QPO generation is the perturbation induced by viscous dissipation and angular momentum transport, while it has also been reported that QPOs can be generated by cooling (Molteni et al. 1996b). In a realistic disc, both processes are active, and both should produce shock oscillation. Interestingly, viscosity can produce multiple shocks (for one-dimensional results see Lee et al. 2011), while no such behaviour has been reported with cooling processes, although investigations with cooling have not been extensive. We would like to investigate the combined effect of cooling and viscous dissipation in the future, to ascertain the viability of a 'shock cascade' in much greater detail.
It must be pointed out that this model of QPO and mass ejection (Nandi et al. 2001) can also be applied to the weakly magnetized accreting neutron stars. However, one has to change the inner boundary condition, i.e., put a hard surface as the inner boundary condition. The same methodology should also give rise to QPOs, and we are working on such a scenario, and would be reported elsewhere. \fPeriodic massloss from viscous accretion \ufb02ows 9 ACKNOWLEDGMENTS AN acknowledges Dr. Anil Agarwal, GD, SAG, Mr. Vasantha E. DD, CDA and Dr. S. K. Shivakumar, Director, ISAC for continuous support to carry out this research at ISAC, Bangalore. The authors also acknowledge the anonymous referee for fruitful suggestions to improve the quality of the paper." + }, + { + "url": "http://arxiv.org/abs/0909.5513v1", + "title": "Studies of dissipative standing shock waves around black holes", + "abstract": "We investigate the dynamical structure of advective accretion flow around\nstationary as well as rotating black holes. For a suitable choice of input\nparameters, such as, accretion rate ($\\dot {\\cal M}$) and angular momentum\n($\\lambda$), global accretion solution may include a shock wave. The post shock\nflow is located at few tens of Schwarzchild radius and it is generally very hot\nand dense. This successfully mimics the so called Compton cloud which is\nbelieved to be responsible for emitting hard radiations. Due to the radiative\nloss, a significant energy from the accreting matter is removed and the shock\nmoves forward towards the black hole in order to maintain the pressure balance\nacross it. We identify the effective area of the parameter space ($\\dot {\\cal\nM} - \\lambda$) which allows accretion flows to have some energy dissipation at\nthe shock $(\\Delta {\\cal E})$. As the dissipation is increased, the parameter\nspace is reduced and finally disappears when the dissipation is reached its\ncritical value. The dissipation has a profound effect on the dynamics of\npost-shock flow. By moving forward, an unstable shock whose oscillation causes\nQuasi-Periodic Oscillations (QPOs) in the emitted radiation, will produce\noscillations of high frequency. Such an evolution of QPOs has been observed in\nseveral black hole candidates during their outbursts.", + "authors": "Santabrata Das, Sandip K. Chakrabarti, Soumen Mondal", + "published": "2009-09-30", + "updated": "2009-09-30", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION In a signi\ufb01cant work on the prospect of shock formation in an accretion disk around a black hole, Chakrabarti & Das (2004) showed that in order to have a stable shock, the viscous dissipation inside a \ufb02ow must have an upper limit, beyond which the Rankine-Hugoniot conditions cannot be satis\ufb01ed. In Das & Chakrabarti (2004) and Das (2007), the bremsstrahlung and synchrotron cooling were also added to dissipate away the heat generated from viscous dissipation. However, it is well known that the post-shock region emits the hard X-rays in a black hole candidate (Chakrabarti & Titarchuk 1995) and some amount of energy is lost through radiation from the post-shock region. This radiative loss primarily comes from the thermal energy of the \ufb02ow and takes place via thermal Comptonization. In a self-consistent shock condition, this radiative loss must also be incorporated. 
In the present paper, we quantitatively show how the energy loss at the shock a\ufb00ects the location of the shock itself around stationary as well as rotating black holes. As energy dissipation is increased, the post-shock \ufb02ow pressure gets reduced causing the shock front to come closer to the black hole in order to maintain the pressure balance across it. Accordingly, the dynamical properties of standing shock waves would directly be related to the amount of energy discharge from the post-shock \ufb02ow. In addition, the mass out\ufb02ow rate which is believed to be generated from the post-shock region (Chakrabarti 1999; Das et al. 2001; Das & Chattopadhyay 2008), would also be a\ufb00ected by the energy discharge at the shock location. Therefore, it is pertinent to understand the response of energy dissipation on the formation of standing shock wave. In this paper, we precisely do this. The plan of our paper is the following: in the next Section, we present the equations governing the \ufb02ow and the procedure we adopted to solve these equations. In \u00a73, we show how the Rankine-Hugoniot conditions at the shocks must be modi\ufb01ed when energy dissipation is present. In \u00a74, we show the results of our computations. Finally in \u00a75, we present concluding remarks. \u22c6sbdas@canopus.cnu.ac.kr,sbdas@iitg.ernet.in \u2020 chakraba@bose.res.in \u2021 soumen@bose.res.in \fStudies of dissipative standing shock waves around black holes 3 2 GOVERNING EQUATIONS AND SONIC POINT ANALYSIS We start with a steady, thin, rotating, axisymmetric, accretion \ufb02ow around black holes. We assume smaller accretion rates, so that the \ufb02ow radiatively ine\ufb03cient and behaves essentially as a constant energy \ufb02ow as in Chakrabarti (1989). We assume a polytropic equation of state for the accreting matter, P = K\u03c1\u03b3, where, P and \u03c1 are the isotropic pressure and the matter density, respectively, \u03b3 is the adiabatic exponent considered to be constant throughout the \ufb02ow and K is a constant which measures the entropy of the \ufb02ow and can change only at the shock. Since we ignore viscous dissipation the angular momentum of the \ufb02ow \u03bb \u2261x\u03d1\u03b8 is also constant everywhere. However, we assume that the main dissipation is concentrated in the immediate vicinity of the post-shock \ufb02ow, which would be the case if the thermal Comptonization is the dominant process. The \ufb02ow height is determined from the condition of being in equilibrium in a direction perpendicular to the equatorial plane. Flow equations are made dimensionless considering unit of length, time and the mass as GMBH/c2, GMBH/c3 and MBH respectively, where G is the gravitational constant, MBH is the mass of the black hole and c is the velocity of light. In the steady state, the dimensionless energy equation at the disk equatorial plane is given by (Chakrabarti 1989), E = 1 2\u03d12 + a2 \u03b3 \u22121 + \u03a6, (1) where, E is the speci\ufb01c energy, \u03d1 is the radial velocity and a is the adiabatic sound speed de\ufb01ned as a = q \u03b3P/\u03c1. Here, e\ufb00ective potential due to black hole is denoted by \u03a6. In the present study, the e\ufb00ect of gravity is taken care of by two di\ufb00erent potentials. To represent Schwarzschild black hole, we use Paczy\u00b4 nski-Wiita (Paczynski & Wiita 1980) potential (\u03a6P W) and for the Kerr black hole, we consider pseudo-Kerr potential (\u03a6P K) introduced by Chakrabarti & Mondal (2006). 
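The energy equation (1) and the sound-speed definition translate directly into code. A minimal sketch of ours (not the authors' implementation); the effective potential is passed in as a callable so that either of the two potentials named just above, whose explicit forms are quoted next, can be plugged in.

```python
import numpy as np

def sound_speed(P, rho, gamma):
    """Adiabatic sound speed, a = sqrt(gamma * P / rho)."""
    return np.sqrt(gamma * P / rho)

def specific_energy(v, a, x, lam, Phi, gamma):
    """Eq. (1): E = v**2/2 + a**2/(gamma - 1) + Phi(x, lam), with v the radial
    velocity, a the adiabatic sound speed, and Phi the effective potential
    (Paczynski-Wiita or pseudo-Kerr, supplied as a callable)."""
    return 0.5 * v**2 + a**2 / (gamma - 1.0) + Phi(x, lam)
```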
It has been adequately shown that these potentials accurately mimic not only the geometry of the space-time, but also the dynamics of the \ufb02ow. In fact the error for not using full general relativistic treatment has been shown to be at the most a few percent (Chakrabarti & Mondal 2006). The expressions for Paczy\u00b4 nski-Wiita and pseudo-Kerr e\ufb00ective potential are respectively given by, \u03a6P W = \u03bb2 2x2 \u2212 1 2(R \u22121) and \f4 Santabrata Das, Sandip K. Chakrabarti and Soumen Mondal \u03a6P K = \u2212B + \u221a B2 \u22124AC 2A where, A = \u03b12\u03bb2 2x2 , B = \u22121 + \u03b12\u03c9\u03bbR2 x2 + 2ak\u03bb R2x C = 1 \u2212 1 R \u2212x0 + 2ak\u03c9 x + \u03b12\u03c92R4 2x2 . Here, x and R represent the cylindrical and spherical radial distance from the black hole when the black hole itself is considered to be located at the origin of the coordinate system. Here, x0 = (0.04+0.97ak+0.085a2 k)/2, \u03c9 = 2ak/(x3+a2 kx+2a2 k) and \u03b12 = (x2\u22122x+a2 k)/(x2+ a2 k +2a2 k/x), \u03b1 is the red shift factor. ak represents the black hole rotation parameter de\ufb01ned as the speci\ufb01c spin angular momentum of the black hole. The mass \ufb02ux conservation equation in the steady state apart from the geometric factor is given by, \u02d9 M = \u03d1\u03c1xh(x), (2) where, \u02d9 M is the mass accretion rate considered to be constant, and h(x) represents the half-thickness of the \ufb02ow (Chakrabarti 1989) which is expressed as, h(x) = a s x \u03b3\u03a6 \u2032 R . (3) Here, \u03a6 \u2032 R = \u0010 \u2202\u03a6 \u2202R \u0011 z< 1 in general, ensures that magnetic \ufb01elds remain con\ufb01ned with the accreting plasma (Mandal & Chakrabarti 2005). The synchrotron emissivity for the stochastic magnetic \ufb01eld is given by (Shapiro & Teukolsky 1983; Das 2007), \u039b = Sa5 u r \u03a6 \u2032 R x3 , (8) with S = 15.36 \u00d7 1017 \u02d9 m\u00b52e4 \u03b2m3 e\u03b35/2 1 GM\u2299c3 , (9) where, e and me represent charge and mass of an electron respectively. Here, \u02d9 m is the accretion rate in units of Eddington rate that regulates the e\ufb03ciency of cooling. Following Mondal & Chakrabarti (2006), we use modi\ufb01ed polytropic index [n = (\u03b3 \u22121)\u22121] relation as n \u2192n + (0.3 \u22120.2ak) and choose \u03b2 = 10 throughout the paper, until otherwise stated. 3 SONIC POINT ANALYSIS We solve Eqs.(1-3, 4a) following the standard method of sonic point analysis (Chakrabarti 1989). We calculate the radial velocity gradient as: du dx = N D , (10) where, the numerator N is given by, N = Sa5 r \u03a6 \u2032 R x3 \u22123u2a2 x(\u03b3 \u22121)+u2 (\u03b3 + 1) (\u03b3 \u22121) d\u03a6e dx + u2a2 (\u03b3 \u22121)\u03a6 \u2032 R d\u03a6 \u2032 R dx (10a) and the denominator D is given by, D = 2a2u (\u03b3 \u22121) \u2212(\u03b3 + 1) (\u03b3 \u22121)u3. (10b) The gradient of sound speed is obtained as: da dx = \u0010 a u \u2212\u03b3u a \u0011 du dx + 3a 2x \u2212\u03b3 a d\u03a6e dx \u2212 a 2\u03a6 \u2032 R d\u03a6 \u2032 R dx . (11) Since the matter is accreting onto the BH smoothly except at the shock location, the radial velocity gradient must always be real and \ufb01nite. However, eq. (10b) shows that there may be some points where denominator (D) vanishes. This indicates that the numerator (N) must also vanish there to keep du/dr \ufb01nite. These special points where both the numerator (N) and denominator (D) vanish simultaneously are called critical points or sonic points. 
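The Schwarzschild (Paczyński-Wiita) pieces of these expressions translate into a few lines of code; the pseudo-Kerr potential follows the same pattern but is omitted here because its coefficients are hard to transcribe reliably from the flattened text. This is a minimal sketch of ours, evaluated on the equatorial plane; reading Eq. (8) as $\Lambda = (S a^5/u)\sqrt{\Phi_R'/x^3}$ is our assumption, chosen for consistency with the synchrotron cooling term used later in this collection.

```python
import numpy as np

def phi_pw(x, lam):
    """Paczynski-Wiita effective potential on the disc equatorial plane (R = x)."""
    return lam**2 / (2.0 * x**2) - 1.0 / (2.0 * (x - 1.0))

def dphi_dR_pw(x):
    """Phi'_R for the Paczynski-Wiita case: d/dR [-1/(2(R-1))], taken at z << x."""
    return 1.0 / (2.0 * (x - 1.0)**2)

def half_thickness(a, x, gamma):
    """Eq. (3): h(x) = a * sqrt(x / (gamma * Phi'_R))."""
    return a * np.sqrt(x / (gamma * dphi_dR_pw(x)))

def synchrotron_emissivity(a, u, x, S):
    """Eq. (8), read as Lambda = (S * a**5 / u) * sqrt(Phi'_R / x**3);
    S is the constant of Eq. (9), carrying the accretion rate and the
    magnetic-to-gas pressure ratio."""
    return S * a**5 / u * np.sqrt(dphi_dR_pw(x) / x**3)
```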
Setting $D = 0$, one can easily obtain the expression for the Mach number ($M = u/a$) at the sonic point as $M(x_c) = \sqrt{2/(\gamma + 1)}$ (12). We obtain an algebraic equation for the sound speed ($a_c$) by using the other sonic point condition, $N = 0$, which is given by $F(\mathcal{E}_c, \lambda_c, \dot m) = \mathcal{A} a_c^3 + \mathcal{B} a_c + \mathcal{C} = 0$ (13), where $\mathcal{A} = \left[ S(\gamma - 1)\sqrt{\Phi_R'/x^3} \right]_c$, $\mathcal{B} = \left[\left(\frac{1}{\Phi_R'}\frac{d\Phi_R'}{dx} - \frac{3}{x}\right) M^2\right]_c$, and $\mathcal{C} = \left[(\gamma + 1) M^2 \frac{d\Phi_e}{dx}\right]_c$. The subscript 'c' denotes the quantities computed at the sonic point. We calculate the sound speed at the sonic point by solving Eq. (13) analytically (Abramowitz & Stegun 1970). In general, a dissipative accretion flow may have multiple sonic points depending on the flow parameters. The nature of a sonic point is dictated by the sign and the numerical value of the velocity gradient at the sonic point. In practice, $du/dx$ may possess two real values at the sonic point: one for the accretion branch and the other for the wind branch. When both values of $du/dx$ are real and of opposite sign, the sonic point is referred to as saddle type. A transonic flow generally passes through saddle type sonic points only, and for a shock the flow crosses two saddle type sonic points, one before the shock and the other after it. The one closest to the BH horizon is called the inner sonic point and the farthest one is known as the outer sonic point. If the derivatives are real and of the same sign, the sonic point is of nodal type. When both derivatives are complex, the sonic point is of spiral type, which is unphysical as no physical solution can pass through it.

[Figure 1. Variation of Mach number ($M = u/a$) with logarithmic radial distance. Flow parameters are $x_{\rm in} = 3.3753$, $\lambda = 3.04$, $\dot m = 0.0025$ and $a_k = 0.5$. A standing shock forms at $x_s = 48.57$.]

4 ACCRETION SOLUTION

In order to obtain a complete accretion solution, we choose the inner sonic point location ($x_{\rm in}$) and the angular momentum ($\lambda$) of the flow as input parameters (Das 2007). From the inner sonic point, we integrate inward up to the BH horizon and outward to the outer edge, and combine the two branches to obtain a global transonic solution.

4.1 Shock Solution

In Fig. 1, we present a global solution with a standing shock around a rotating black hole. The variation of the Mach number with the logarithmic radial distance is plotted. The flow parameters are $x_{\rm in} = 3.3753$, $\lambda = 3.04$, $\dot m = 0.0025$ and $a_k = 0.5$, respectively. The figure consists of two solutions. The one passing through the outer sonic point (O) connects the black hole horizon with the outer edge of the disk. The other, passing through the inner sonic point (I), is closed and is connected with the BH horizon only. Arrows indicate the direction of the flow towards the BH. Matter starts accreting from the outer edge of the disk with a negligible velocity.
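Equations (12) and (13) above, together with the saddle/nodal/spiral classification of sonic points, translate directly into code. A minimal sketch of ours: Eq. (13) is solved numerically with numpy rather than with the closed-form cubic formula, and the coefficients are assumed to have been evaluated at the candidate sonic point by the caller.

```python
import numpy as np

def mach_at_sonic_point(gamma):
    """Eq. (12): M(x_c) = sqrt(2 / (gamma + 1))."""
    return np.sqrt(2.0 / (gamma + 1.0))

def sound_speed_at_sonic_point(A, B, C):
    """Solve Eq. (13), A*a_c**3 + B*a_c + C = 0, keeping real, positive roots.
    A, B, C are the coefficients evaluated at the sonic point."""
    roots = np.roots([A, 0.0, B, C])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[real > 0.0]

def classify_sonic_point(dudx_roots):
    """Saddle / nodal / spiral classification from the two values of du/dx
    at the critical point (accretion and wind branches)."""
    r1, r2 = dudx_roots
    if np.iscomplex(r1) or np.iscomplex(r2):
        return "spiral (unphysical)"
    return "saddle" if r1 * r2 < 0.0 else "nodal"
```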
The \ufb02ow becomes super-sonic after crossing the outer sonic point (O) and continues to accrete towards the BH. The RHCs (Landau & Lifshitz 1959) in turn allows the \ufb02ow variables to make a discontinuous jump in the sub-sonic branch. This is indicated by a dotted vertical line and commonly known as the shock transition. In the post-shock region, the \ufb02ow momentarily slows down and subsequently picks up the radial velocity and enters into the BH supersonically after crossing the inner sonic point (I). In this particular case, the shock conditions are satis\ufb01ed at xs = 48.57. The entropy generated at the shock is eventually advected towards the BH to allow the \ufb02ow to pass through the inner sonic points. In this work, the shocks are considered to be thin and non-dissipative. 4.2 Parameter space for multiple sonic points So far, we have seen that the shock wave connects two solution branches\u2014one passing through the outer sonic point and the other passing through the inner sonic point. In particular, the \ufb02ow with multiple saddle type sonic points may undergo shock transitions. More importantly, for shock formation in dissipative accretion \ufb02ows, the solution passing through the inner sonic point has to be spiraling in (Das 2007). Therefore, it would be useful to study the parameter space for accretion \ufb02ows having spiral in solution passing through the inner sonic point. In Fig. 2, we show the classi\ufb01cations of parameter space as a function of accretion rate ( \u02d9 m) in the Ein \u2212\u03bb plane. Here, Ein denotes the energy of the \fDissipative accretion \ufb02ows around a rotating black hole 5 0.5 1 1.5 2 0 1 2 3 Figure 3. Mach number variation with logarithmic radial distances. Flows are injected sub-sonically from the outer edge xinj = 300 with identical energy Einj = 1.003163 and angular momentum \u03bb = 3.0. Di\ufb00erent accretion rates ( \u02d9 m) are used. Solid curve represents a solution including the shock wave (xs = 26.03) for cooling free accretion \ufb02ow. Other solutions are for [ \u02d9 m, xs]= [0.0125, 19.38] (dotted), [0.025, 15.85] (dashed) and [0.0375, 13.45] (dot-dashed). As \u02d9 m is increased, shock front precedes towards BH. \ufb02ow at the inner sonic point (xin). The BH rotation parameter is chosen as ak = 0.5. The region bounded by the solid curve is obtained for non-dissipative accretion \ufb02ows. The regions under the dotted and dashed curve is obtained for higher accretion rates \u02d9 m = 0.0025 and 0.0125 respectively. As accretion rate ( \u02d9 m) is increased, the parameter space for multiple sonic points shrinks. This indicates that nature of sonic points changes (from saddle to spiral) for \ufb02ow with identical input parameters for increasing \u02d9 m. 5 SHOCK PROPERTIES 5.1 Shock Dynamics In Fig. 3, we present the variation of shock locations with the accretion rate ( \u02d9 m). Logarithmic radial distance is varied along the horizontal axis and Mach number is plotted along vertical axis. The vertical lines represent the shock locations. Matter with identical outer boundary conditions is injected sub-sonically from the outer edge of the disk xinj = 300 on to a rotating BH with rotation parameter ak = 0.5. The local energy of the \ufb02ow at xinj is Einj \u2261E(xinj) = 1.003163 (including the rest mass) and the angular momentum is \u03bb = 3.0. The sub-sonic \ufb02ow crosses the outer sonic point to become super-sonic and makes a shock transition to the sub-sonic branch. 
The solid vertical lines represent the shock location (xs = 26.03) for non-dissipative \ufb02ow. As cooling is incorporated, shock front moves forward. In the postFigure 4. (a) Variation of the shock location with accretion rate ( \u02d9 m). Flows are injected with the same energy and angular momentum from the outer edge. Small dashed, big dashed, small-big dashed, dot-dashed, dotted and solid curves are drawn for angular momentum \u03bb = 2.94, 2.96, 2.98, 3.0, 3.02 and 3.04 respectively. (b) Variation of compression ratio (R = \u03a3+/\u03a3\u2212) with accretion rate for the same set of parameters as in (a). (c) Variation of shock strength (\u0398 = M\u2212/M+) with the accretion rate for the same set of parameters as in (a). Subscripts \u201c+\u201d and \u201c-\u201d denote quantities before and after the shock. shock region, cooling is more e\ufb00ective compared to the preshock \ufb02ow as the density as well as the temperature are very high in this region due to compression. Cooling reduces the post-shock pressure causing the shock front to move inward to maintain pressure balance across it. For higher \u02d9 m, shock front moves further inward. The dotted, dashed and dot-dashed vertical lines represent the shock locations xs = 19.38, 15.85 and 13.45 for \u02d9 m = 0.0125, 0.025 and 0.0375 respectively. In Fig. 4a, we show the variation of the shock location as a function of accretion rate ( \u02d9 m) for a set of angular momentum (\u03bb). In this particular \ufb01gure, the \ufb02ow is injected from the outer edge of the disk (xinj = 300). The BH rotation parameter is chosen as ak = 0.5. The angular momentum of the \ufb02ow is varied from \u03bb = 2.94 (small dashed) to 3.04 (solid) with an interval \u2206\u03bb = 0.02. At the injection point, the corresponding local energies of the \ufb02ow (from bottom to top) are Einj = 1.003174 (small dashed), 1.003170 (big dashed), 1.003167 (small-big dashed), 1.003163 (dotdashed), 1.003160 (dotted), and 1.003156 (solid) respectively. For a given accretion rate ( \u02d9 m), the shock forms further out for \ufb02ows with higher angular momentum. Here, the larger angular momentum increases the centrifugal pressure which pushes the shock front outside. Conversely, for a given angular momentum, the shock location decreases with the increase of the accretion rate as cooling reduces the postshock thermal pressure. Figure 4a shows that the standing \f6 Santabrata Das and Sandip K. Chakrabarti shocks are formed for a wide range of accretion rate ( \u02d9 m). In each angular momentum, the standing shocks disappear beyond a critical value of accretion rate ( \u02d9 mc) as the RHCs are not satis\ufb01ed here. Non-steady shocks may still exit, but an investigation of such phenomena is beyond the scope of the present work. As \u03bb increases, the critical accretion rate \ufb01rst increases, becomes maximum at some \u03bb (= 2.96, in this particular case) and then decreases. This clearly indicates that the parameter space for the standing shock shrinks in both the lower and higher angular momentum sides with the increase of the accretion rate. One of the important components in accretion disk physics is to study the density pro\ufb01le of matter since the cooling e\ufb03ciency as well as the emitted radiation directly depends on it. We compute the compression ratio R de\ufb01ned as the ratio of vertically averaged post-shock to pre-shock density and plot it in Fig. 
4b as a function of accretion rate ( \u02d9 m) for the same set of \ufb02ow parameters as in Fig. 4a. For a given angular momentum (\u03bb), the compression ratio increases monotonically with higher \u02d9 m. As \u02d9 m increases, postshock \ufb02ow becomes more compressed to provide required pressure for holding the shock. In addition, for a given \u02d9 m, higher angular momentum \ufb02ow feels less compression in the post-shock region as centrifugal pressure resists the \ufb02ow to accrete. Note that, for each angular momentum, there is a cut-o\ufb00at a critical accretion rate limit as standing shock conditions are not satis\ufb01ed there. It is useful to study the another shock property called shock strength \u0398 (de\ufb01ned as the ratio of pre-shock to postshock Mach number of the \ufb02ow) as it is directly related to the temperature jump at the shock. In Fig. 4c, we show the variation of shock strength as a function of accretion rate ( \u02d9 m) for \ufb02ows with identical input parameters as in Fig. 4a. For a given angular momentum (\u03bb), the strength of the shock is the weakest in the dissipation-free limit and it becomes stronger as accretion rate ( \u02d9 m) is increased. Thus, a higher cooling causes the post-shock \ufb02ow to be hotter and radiations emitted from this region are expected to be harder. A similar result is reported by Mandal & Chakrabarti (2005). This clearly indicates that the observed spectra of the BH would strongly depend on the cooling. An important part of understanding a cooling dominated accreting \ufb02ow around a rotating BH is to study the shock properties as a function of BH rotation parameter ak. In Fig. 5a, we plot the variation of shock location as a function of ak. In this particular Figure, we inject matter from the outer edge xinj = 200 and the accretion rate is considered to be \u02d9 m = 0.0025. Solid, dotted and dashed curves are obtained for \ufb02ows with angular momentum \u03bb = 2.96, 3.05 and 3.14 respectively. The corresponding energies at the injection point are Einj = 1.003456, 1.003442 and 1.003422 respectively. Notice that, for a given \u03bb, shocks form for a particular range of ak. As ak increased, shock recedes from the BH horizon. Moreover, shocks exist around a weakly rotating BH when the \ufb02ow angular momentum is relatively higher and vice versa. This phenomenon is directly related to the spin-orbit coupling term in the Kerr geometry. In fact, since both the marginally bound and the marginally stable angular momenta (as well as their di\ufb00erence) go down when the Kerr parameter is increased, the relevant parameter region when the shocks form also goes down as ak is increased. In general, however, the shock location is generally small for Figure 5. Variation of (a) shock location, (b) compression ratio and (c) shock strength with BH rotation parameter ak. See text for more details. higher ak, as statistically the \ufb02ow with a smaller angular momentum (and therefore, the lesser centrifugal force) is accreted in a rapidly spinning black hole. Thus, for instance if we compared the shock locations having \u03bb = \u03bbms for all cases, the shock location for a rotating black hole would be closer for higher ak. In Fig. 5b, we show the variation of the compression ratio as a function of BH rotation parameter ak for the \ufb02ows with input parameters same as Fig 5a. The solid, dotted and dashed curves are obtained for \u03bb = 2.96, 3.05 and 3.14 respectively. 
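The two shock diagnostics tracked in Figs. 4 and 5, the compression ratio $R = \Sigma_+/\Sigma_-$ and the shock strength $\Theta = M_-/M_+$, are simple ratios of the post- and pre-shock states. Below is a minimal helper of ours, with an illustrative one-dimensional adiabatic Rankine-Hugoniot density jump added as a proxy; the paper itself works with vertically integrated quantities, so the 1D relation is only indicative.

```python
def compression_ratio(sigma_post, sigma_pre):
    """R = Sigma_+ / Sigma_-: vertically averaged post- to pre-shock density."""
    return sigma_post / sigma_pre

def shock_strength(mach_pre, mach_post):
    """Theta = M_- / M_+: pre- to post-shock Mach number."""
    return mach_pre / mach_post

def rh_density_jump_1d(mach_pre, gamma):
    """Illustrative 1D adiabatic Rankine-Hugoniot density jump,
    rho_+ / rho_- = (gamma + 1) M^2 / ((gamma - 1) M^2 + 2); a proxy only
    for the vertically integrated relations actually used in the paper."""
    return (gamma + 1.0) * mach_pre**2 / ((gamma - 1.0) * mach_pre**2 + 2.0)
```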
The compression ratio R decreases with the increase of ak for \ufb02ow with identical \u03bb. In Fig. 5c, we plot the variation of shock strength \u0398 with ak for \ufb02ow with input parameters as in Fig. 5a. We obtain a similar variation of \u0398 with ak as in Fig. 5b. 5.2 Parameter Space for Shock Formation In Fig. 6, we identify the region of the parameter space that allows the formation of the standing shocks. The BH rotation parameter is considered to be ak = 0.5. The region bounded by the solid curve is obtained for non-dissipative accretion \ufb02ow. As accretion rate is enhanced, the e\ufb00ective region of parameter space for standing shocks shrinks in both the lower and higher angular momentum sides. Due to the cooling e\ufb00ect, the \ufb02ow loses its energy as it accretes and therefore, the parameter space is shifted to the lower energy domain for higher cooling. The regions under dotted, dashed and dot-dashed curves are obtained for accretion rates \u02d9 m = 0.0025, 0.005 and 0.01 respectively. It is clear that the standing shocks do not exist beyond a critical accretion rate when the synchrotron cooling is present. In Fig. 7, we classi\ufb01ed the entire parameter space \fDissipative accretion \ufb02ows around a rotating black hole 7 Figure 6. Variation of the e\ufb00ective region of parameter space which forms standing shocks as a function of the accretion rate ( \u02d9 m). See the text for more details. Figure 7. Classi\ufb01cation of parameter space according to the various solution topologies of BH accretion solution. See text for details. spanned by (Ein, \u03bb) according to the nature of solution topologies. As an example, we consider ak = 0.5 and \u02d9 m = 0.0025. We separate the parameter space into six regions marked by S, OS, O, I, CI and N. The dot-dashed line represents the rest mass energy of the \ufb02ow. At the bottom left of the parameter space, we plot solution topologies in the small boxes. In each box, Mach number of the \ufb02ow is plotted against the logarithmic radial distance. Each of these solutions are marked and drawn using the parameters from the corresponding region of the parameter space. The direction of the accreting \ufb02ow is indicated by the arrow. (The solutions without arrows are relevant for winds, discussions on which are beyond the scope of this paper.) The solutions from the regions marked \u2018S\u2019 and \u2019OS\u2019 have two X type sonic points and the entropy at the inner sonic point is higher than that at the outer sonic point. Flows from \u2018S\u2019 su\ufb00er a standing shock transition as RHCs are satis\ufb01ed. However, a solution from the region \u2018OS\u2019 does not pass through the standing shock as RHCs are not satis\ufb01ed here. A \ufb02ow with parameters from this region is unstable and causes periodic variation of emergent radiation from the inner part of the disk as it tries to make a shock transition but fails to do so. This is known from the numerical simulations of nondissipative \ufb02ows Ryu et al. (1997) and we anticipate that a similar behaviour would be seen in this case as well. The solutions from the region \u2018I\u2019 possess only the inner sonic point and the accreting solutions straight away pass through it before entering into the BH. A solution from the region \u2018O\u2019 has only one outer sonic point. The solution from region \u2018CI\u2019 has two sonic points\u2014 one \u2019X\u2019 type and other \u2019spiral\u2019 type. 
Solutions of this kind does not extend to the outer edge of the disk to produce a complete global solution and therefore, becomes unstable. It has been pointed out by Chakrabarti (1996a) that inclusion of viscosity should open up the topology to allow the \ufb02ow to reach a larger distance to join with a sub-sonic Keplerian \ufb02ow. The region marked \u2018N\u2019 is the forbidden region for a transonic \ufb02ow solution. 6 CONCLUDING REMARKS In this paper, we have studied the properties of cooling dominated accretion \ufb02ow around a rotating black hole by solving a set of equations that regulate the dynamical structure of the \ufb02ow. A special consideration is given to synchrotron cooling that strongly a\ufb00ects the disk properties as well as the emitted spectrum and luminosity that are observed. We obtain the global accretion solutions with and without the shocks in terms of a few \ufb02ow parameters, namely, energy, angular momentum, black hole rotation parameter and the accretion rate, which e\ufb00ectively acts as the cooling parameter. We \ufb01nd that the accreting matter experiences a centrifugal force which acts as a barrier, inducing a shock formation. We show that the global shocked accretion solution can be obtained for a signi\ufb01cant region of the parameter space even when the cooling is signi\ufb01cant. Using a conventional accretion disk model we expect the accretion to take place when the angular momentum is close to the marginally stable value. Our calculation shows that the region is actually broader, in terms of both the angular momentum and energy. The discussion regarding the nature of the sonic point has been reported in many occasions (Chakrabarti & Das 2004; Das & Chakrabarti 2004). However, a detailed analysis was not presented before for a cooling dominated \ufb02ow around a rotating black hole. Our present work suggests that a large region of the parameter space provides a stable saddle type sonic point. In Fig. 2, we demonstrated that the parameter space for the stable saddle type sonic point is gradually reduced with the increase of cooling e\ufb03ciency. \f8 Santabrata Das and Sandip K. Chakrabarti We show that the standing shocks form closer to a spinning black hole as the accretion rate is enhanced. At the post shock region, the density and the temperature is relatively high compared to the pre-shock \ufb02ow and thus cooling is more e\ufb03cient there. For a higher cooling, the post-shock matter cools faster reducing the thermal pressure drastically. This forces the shocks to move inward to maintain pressure balance across them. One of the aims of the present work was to study the e\ufb00ect of black hole rotation parameter on the dynamical structure of cooling dominated global solutions. We \ufb01nd that for \ufb02ows with identical outer boundary condition (e. g., same energy and angular momentum at the outer edge) shock recedes from the black hole horizon with the increase of black hole rotation parameter (ak). However if we choose the relevant angular momentum for each case, such as the marginally stable angular momentum the shock location moves in with the increase of ak. The range of ak for which the stationary shocks are formed is restricted for a \ufb02ow of given angular momentum. Shocks are possible around a rapidly rotating black hole when the \ufb02ow angular momentum is relatively low. Since that produces a very low centrifugal pressure, the shock can form very close to the black hole for a rapidly spinning black hole. 
We identify the region of parameter space for the formation of a standing shock. We \ufb01nd that the e\ufb00ective region of the parameter space for the stationary shock shrinks when the accretion rate is enhanced. This suggests that the possibility of shock formation decreases for higher accretion rate. In addition, we also separate a region where the RankineHugoniot relation is not satis\ufb01ed. In the context of invidcid \ufb02ows, it has been observed that the \ufb02ow parameters from such a region give rise to oscillating shocks (Ryu et al. 1997). The reason is that the higher entropy at the inner sonic point forces the \ufb02ow to pass through it by generating extra entropy at the shock. But since RHCs are not satis\ufb01ed the shock can not settle itself at a given location. Thus the cause of oscillation is su\ufb03ciently generic and we suspect that exactly the same thing will happen in the present case. Most importantly, since rotating black holes may have shocks very close to the horizon, the frequencies of such oscillations are expected to be higher. Our present \ufb01ndings suggest that shocks, standing or oscillating, do form around the spinning black holes and it may be an essential ingredient since shocks could successfully explain the observed stationary (Chakrabarti & Mandal 2006) as well as time dependent behaviour of the radiations from the black hole candidates (Chakrabarti et al. 2004; Okuda et al. 2007). We demonstrated that the shocks form closer to the black hole as cooling is increased. This will enhance the QPO frequency as it is proportional to the infall time scale (Chakrabarti & Manickam 2000; Molteni et al. 1996) and thus the QPO frequency may vary in a wide range starting from mHz to KHz depending on the accretion rate. This understanding also generally agrees with the observational results. Recent reporting of the outbursts of GRO 1655-40 showed a clear evidence of the QPO frequencies increasing monotonically from about 90mHz to 17Hz (Chakrabarti et al. 2005) in a matter of 15 days and this could be easily \ufb01tted using the shock propagating at a constant velocity (Chakrabarti et al. 2005). The formalism presented here does not include out\ufb02ows/jets which may be generated from the inner part of the disk as a result of de\ufb02ection of in\ufb02owing matter due to excess thermal pressure at the shock front (Das et al. 2001; Chattopadhyay & Das 2007; Das & Chattopadhyay 2008). Since the out\ufb02ows/jets are ejected evacuating the inner part of the disk, it will necessarily reduce the post-shock pressure and therefore, shock front has to move in to retain the pressure balance. This suggests that the result should be a\ufb00ected if the accretion-ejection mechanism is considered together. We plan to consider this study in a future work and it will be reported elsewhere. We have approximated the e\ufb00ect of general relativity using the pseudo-Kerr gravitational potential. This pseudoKerr potential has been successfully tested to retain most, if not all, of the salient features of the \ufb02ows in a Kerr metric. The use of this approach allows us to \ufb01nd out the non-linear shock solutions in a curved space-time geometry in a simpler way. We believe that our basic results would be qualitatively the same with fully general relativistic calculations, especially for ak < 0.8 for which the pseudo-Kerr potential was found to be satisfactory. 
ACKNOWLEDGMENTS SD was supported by KOSEF through Astrophysical Research Center for the Structure and Evolution of the Cosmos(ARCSEC). SKC thanks a visit to Abdus Salam International Centre for Theoretical Physics where part of this work was completed." + }, + { + "url": "http://arxiv.org/abs/0802.4136v1", + "title": "Computation of mass loss from viscous accretion disc in presence of cooling", + "abstract": "Rotating accretion flow may undergo centrifugal pressure mediated shock\ntransition even in presence of various dissipative processes, such as viscosity\nand cooling mechanism. The extra thermal gradient force along the vertical\ndirection in the post shock flow drives a part of the accreting matter as\nbipolar outflows which are believed to be the precursor of relativistic jets.\nWe compute mass loss rates from a viscous accretion disc in presence of\nsynchrotron cooling in terms of the inflow parameters. We show cooling\nsignificantly affects the mass outflow rate, to the extent that, jets may be\ngenerated from flows with higher viscosity. We discuss that our formalism may\nbe employed to explain observed jet power for a couple of black hole\ncandidates. We also indicate that using our formalism, it is possible to\nconnect the spectral properties of the disc with the rate of mass loss.", + "authors": "Santabrata Das, Indranil Chattopadhyay", + "published": "2008-02-28", + "updated": "2008-02-28", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "Introduction In recent years, it has been established that AGNs and Microquasars su\ufb00er mass loss in the form of jets and out\ufb02ows (Ferrari, 1998; Mirabel & Rodriguez, \u2217Corresponding author. Email addresses: sbdas@canopus.cnu.ac.kr (Santabrata Das), indra@canopus.cnu.ac.kr (Indranil Chattopadhyay ). Preprint submitted to New Astronomy 23 November 2018 \f1999). Generation of jets or out\ufb02ows around gravitating centres with hard boundaries (e.g., neutron stars, YSOs etc.) are quite natural, however, it is altogether a di\ufb00erent proposition to consider the same around a black hole. As black holes do not have either hard boundaries or intrinsic atmospheres, jets/out\ufb02ows have to originate from the accreting matter onto black holes, though there is no consensus about the exact mechanism of jet formation. One of the motivation of studying black hole accretion is therefore to understand the primary mechanism in the accretion process which may be responsible for the generation of jets. In addition, recent observations have established that, whatever be the exact mechanism behind the formation of jets/out\ufb02ows around black holes, the formation of jets is intrinsically linked with spectral states of the associated black hole candidates. In particular, Gallo et al. (2003) showed that quasi steady jets are generally ejected in the hard state, which suggests that the generation or quenching of jets do depend on various states of the accretion disc. Several theoretical attempts were made to explain the possible mechanisms of jet generation from accretion disc. Xu & Chen (1997) reported the formation of out\ufb02ows by considering self-similar solutions. Chakrabarti (1999); Das & Chakrabarti (1999) estimated mass out\ufb02ow rates in terms of in\ufb02ow parameters from an inviscid advective disc. In particular, these authors showed that the centrifugal barrier may produce shock, and the post-shock disc can generate bipolar out\ufb02ows. 
They also showed mass out\ufb02ow rates depend on the strength of the centrifugal barrier, as well as, its thermal driving. Das et al. (2001b) extended this work to show that such out\ufb02ows generated by accretion shock is compatible with the spectral state of the accretion disc. The shock induced relativistic out\ufb02ows could be obtained if various acceleration mechanism, namely, \ufb01rst order Fermi acceleration at the shock (Le & Becker, 2005), or radiation pressure (Chattopadhyay, 2005), are considered. Recently, Chattopadhyay & Das (2007) computed mass out\ufb02ow rates from a viscous advective disc and showed that the mass out\ufb02ow rate decreases with the increase of viscosity parameter. In realistic accretion disc, a variety of dissipative processes are expected to be present, and viscosity is just one of them. In absence of mass loss, Gu & Lu (2004) conjectured that cooling processes will not a\ufb00ect the nature of advective accretion solutions. However, Das (2007) explicitly showed that cooling processes play a crucial role in determining the \ufb02ow variables as well as the shock properties. Therefore, it will be worthwhile to investigate, how cooling would a\ufb00ect the mass out\ufb02ow rate from a viscous accretion disc. In presence of viscosity, as matter \ufb02ows inward angular momentum decreases while speci\ufb01c energy increases. A cooling process unlike viscosity, only reduces the energy of the \ufb02ow and leaves the angular momentum distribution un-a\ufb00ected. Thus the increase of \ufb02ow energy due to viscous heating may be abated by incorporating cooling mechanism. As cooling is more e\ufb03cient at the hotter and denser post-shock region (abbreviated as CENBOL \u2261CENtrifugal pressure supported BOundary Layer), the decrease 2 \fof CENBOL energy will be more pronounced compared to the pre-shock energy. In reality, more energetic \ufb02ows at the outer edge, which do not satisfy shock conditions in absence of cooling, may undergo shock transition in its presence. Consequently, more energetic CENBOL may be produced for \ufb02ows with higher cooling e\ufb03ciency, and hence there is a possibility of enhanced jet driving. In this paper, we would like to address these issues in detail. In the next section, we present the model assumptions and the governing equations. In Section 3, we discuss the methodology of computing self-consistent in\ufb02ow-out\ufb02ow solutions and present the solutions. In Section 4, we apply our formalism on two black hole candidates to compute the mass out\ufb02ow rate, and compare it with the observed jet power. In the last section we draw concluding remarks. 2 Model Assumptions and Equations of motion In a disc-jet system, there are two separate \ufb02ow geometries, namely, one for accretion \ufb02ows and the other for out\ufb02ows. Axis-symmetry and steady state conditions are assumed for the disc-jet system. In the present paper, we consider thin, viscous accretion \ufb02ow in presence of synchrotron cooling. Jets are assumed to be tenuous. Since jets are in general collimated, they should have less angular momentum and therefore less di\ufb00erential rotation compared to the accretion disc. Thus, we ignore the e\ufb00ect of viscosity in jets. As jets are believed to originate from the inner part of the disc, which in our model is the CENBOL, the jet base must be described by identical local accretion \ufb02ow variables (see section 3), i.e., the speci\ufb01c energy, the angular momentum etc of the CENBOL. 
Consequently, we neglect the torque between the disc and the jet at the jet base. It is to be remembered that, to keep the jets collimated, angular momentum will be reduced either by magnetic \ufb01eld (stochastic \ufb01elds, considered in the paper, are not e\ufb00ective in doing so), or by radiation [see, (Chattopadhyay, 2005)], however these processes have not been considered here. In reality, back reactions on the disc in the form of extra torque at the jet base and/or feedback e\ufb00ect from failed jets are not altogether ruled out. To study these e\ufb00ects, one requires to undertake numerical simulation, which is beyond the scope of the present frame work. Moreover, jets are supposed to be colder than the accretion discs. Therefore, we assume jets to be adiabatic, at least up to its critical point. We use pseudo-Newtonian potential introduced by Paczy\u00b4 nski & Wiita (1980) to approximate the space time geometry around a non-rotating black hole. A schematic structure of shocked advective accretion disc and the associated jet are presented in Fig. 1. Here, xco and xci are the outer and the inner critical points of the disc, respectively. The centrifugal pressure acts as a \u2018barrier\u2019 to 3 \fFig. 1. A schematic diagram of disc-jet system. The outer and inner critical points xco and xci are marked in the \ufb01gure. The shock is located at xs. The jet geometry is bounded by FW and CB. MM\u2032 = xF W and MM\u2032\u2032 = xCB (described in the text). the supersonic matter at xci < x < xco and a shock at xs is formed. The postshock disc is indicated in the \ufb01gure as CENBOL. At the shock, matter momentarily slows down and ultimately dives into the black hole supersonically through xci. Excess thermal driving in CENBOL drives a fraction of accreting matter as bipolar jet which \ufb02ows within two geometric surfaces called the Funnel Wall (FW) and the Centrifugal Barrier (CB) (Molteni et al. , 1994, 1996a). The system of units used in this paper is 2G = MBH = c = 1, where G, MBH and c are the universal gravitational constant, the mass of the black hole and the speed of light, respectively. Since we use the geometrical system of units, our formalism is applicable for both the galactic and the extra galactic black hole candidates. Two separate sets of hydrodynamic equations for accretion and jet, are presented bellow. The dimensionless hydrodynamic equations that govern the motion of accreting matter are (Chakrabarti, 1996; Das, 2007), the radial momentum equation : udu dx + 1 \u03c1 dP dx \u2212\u03bb2(x) x3 + 1 2(x \u22121)2 = 0, (1a) where, u, \u03c1, P, and \u03bb(x) are the radial \ufb02ow velocity, the local density, the isotropic pressure and the local speci\ufb01c angular momentum, respectively. Here x is the cylindrical radial coordinate. 4 \fThe baryon number conservation equation : \u02d9 M = 2\u03c0\u03a3ux, (1b) where, \u02d9 M and \u03a3 are the mass accretion rate and the vertically integrated density, respectively. In our model, the accretion rates in the pre shock and post shock regions are di\ufb00erent as some fraction of the accreting matter is ejected as out\ufb02ow. Actually, the post-shock matter is \ufb02own into two channels \u2014 one is the accreting part (falling onto black holes through xci) and the other is the out\ufb02owing part (Molteni et al. , 1994, 1996a; Chattopadhyay & Das, 2007). More speci\ufb01cally, the combination of accretion and out\ufb02ow rate in the post shock region remain conserved with the pre-shock accretion rate (see Eq. 
3). The angular momentum conservation equation : ud\u03bb(x) dx + 1 \u03a3x d dx \u0010 x2Wx\u03c6 \u0011 = 0, (1c) where, Wx\u03c6(= \u2212\u03b1\u03a0) denotes the viscous stress, \u03b1 is the viscosity parameter and \u03a0 is the vertically integrated total (i.e., thermal+ram) pressure. The viscosity prescription employed in this paper was developed by Chakrabarti & Molteni (1995) and has been employed to study advective accretion disc by a group of workers (Chakrabarti, 1996; Chakrabarti & Das, 2004; Gu & Lu, 2004; Das, 2007; Chattopadhyay & Das, 2007). This viscosity prescription is more suitable for \ufb02ows with signi\ufb01cant radial velocity as it maintains angular momentum distribution continuous across the shock unlike Sakura-Sunyaev type viscosity prescription which was proposed for a Keplerian disc. And \ufb01nally, the entropy generation equation : uT ds dx = Q+ \u2212Q\u2212, (1d) where, s is the speci\ufb01c entropy of the \ufb02ow, T is the local temperature. Q+ and Q\u2212are the heat gained and lost by the \ufb02ow, and are given by (Chakrabarti, 1996; Das, 2007; Shapiro & Teukolsky, 1983), Q+ = \u2212\u03b1 \u03b3 x(ga2 + \u03b3u2)d\u2126 dx and Q\u2212= \u03b2Sia5 ux3/2(x \u22121). Here, g = In+1/In, n = 1/(\u03b3 \u22121), In = (2nn!)2/(2n + 1)! (Matsumoto et al. , 1984), and \u03b3(= 4/3) is the adiabatic index. Presently, we consider only synchrotron cooling. In the above equation, \u03b2 is the cooling parameter, and Si is 5 \fthe synchrotron cooling term which is independent of the \ufb02ow variables and is given by, Si = 32\u03b7 \u02d9 mi\u00b52e41.44\u00d71017 3 \u221a 2m3 e\u03b35/2 1 2GM\u2299c3, where, e is the electron charge, me is electron mass, \u02d9 mi is the accretion rate in units of Eddington rate, M\u2299is solar mass, and for fully ionized plasma \u00b5 = 0.5. The su\ufb03x \u2018i = \u2213\u2019 represents the quantities in the pre/post shock disc region. It is to be borne in mind that in absence of shock \u02d9 m+ = \u02d9 m\u2212, therefore S+ = S\u2212. Due to the uncertainties of the realistic magnetic \ufb01eld structure in the accretion disc, we have assumed stochastic magnetic \ufb01eld. The ratio between the magnetic pressure and the gas pressure is represented by \u03b7. The magnetic \ufb01eld strength is estimated by assuming partial equipartition (\u03b7\u22641) of the magnetic pressure with the gas pressure. In this paper, we have ignored bremsstrahlung cooling, since it is a very ine\ufb03cient cooling process (Chattopadhyay & Chakrabarti, 2000; Das & Chakrabarti, 2004). The expression for bremsstrahlung cooling (Rybicki & Lightman, 1979) in vertical equilibrium is given by, Q\u2212 B = Bi ux3/2(x \u22121), where Bi = 2.016\u00d710\u221210 4\u03c0m2 p (\u00b5mp 2kB )1/2 \u02d9 mi 2GM\u2299c, where, mp is the proton mass and kB is the Boltzmann constant. For identical accretion rates Si Bi = 3.26\u00d7107\u00d7\u03b7. Therefore, it is quite evident that the synchrotron cooling is much stronger than bremsstrahlung. However, bremsstrahlung photons may interact with the accreting gas itself and in that sense bremsstrahlung may be important. Such complicated situation is not addressed in the present paper. We have also not considered inverse-Compton, since that will require a proper two temperature solution which is also beyond the scope of the present e\ufb00ort. In the present paper, we have chosen \u02d9 m\u2212= 0.1 and \u03b7 = 0.1 as the representative case, until stated otherwise. 
Under the adiabatic assumption for the jet, the momentum balance equation can be represented in the following integrated form: Ej = 1 2v2 j + na2 j + \u03bb2 j 2x2 j \u2212 1 2(rj \u22121), (2a) where, Ej and \u03bbj are the speci\ufb01c energy and angular momentum of the jet, respectively. Other \ufb02ow variables are the jet velocity (vj) and sound speed 6 \f(aj). Furthermore, xj[= (xCB + xF W)/2] and rj[= (x2 j + y2 CB)1/2] are the cylindrical and spherical radius of the jet streamline. The functional form of the coordinates of CB and FW are [see, Chattopadhyay & Das (2007)], xCB = h 2\u03bb2 jrCB(rCB \u22121) i1/4 , x2 F W = \u03bb2 j (\u03bb2 j \u22122) + q (\u03bb2 j \u22122)2 \u22124(1 \u2212y2 CB) 2 , where, xCB and xF W are measured at the same height of jet streamline and is given by yCB = q (r2 CB \u2212x2 CB). The integrated form of mass-\ufb02ux conservation equation for the jet is given by, \u02d9 Mout = \u03c1jvjA, (2b) where, \u02d9 Mout is jet out\ufb02ow rate and \u03c1j is the local density of the jet. The jet cross-sectional area is given by, A = 2\u03c0(x2 CB \u2212x2 F W). 3 Accretion-Ejection solution It is well known that matter falling onto black holes have to cross one or more critical points depending on the absence or presence of shock transition (Chakrabarti, 1996; Chakrabarti & Das, 2004; Chattopadhyay & Das, 2007). If the \ufb02ow parameters allow shock transition then matter must cross the sonic horizon twice, once before the shock and then after the shock. The location of the latter is called the inner critical point (xci) and the former is known as outer critical point (xco). In absence of dissipation, the energy (E) and angular momentum (\u03bb) of the \ufb02ow is conserved, and therefore xci and/or xco are uniquely obtained in terms of E and \u03bb, and consequently all possible \ufb02ow solutions. E and \u03bb do not remain conserved along a dissipative \ufb02ow and therefore critical points cannot be determined uniquely. To obtain solutions of a dissipative accretion \ufb02ow in a simpler way, one needs to know at least one set of critical point parameters (e.g., xc, \u03bbc). Fortunately, the range of (xci,\u03bbci)s varies from (2rg < \u223cxci < \u223c4rg,1.5 < \u223c\u03bbci < \u223c\u03bbms), where \u03bbci, \u03bbms are the angular momentum at the inner critical point and the marginally stable orbit, respectively [e.g., Chakrabarti (1989, 1996); Chakrabarti & Das (2004)]. Here rg is the Schwarzschild radius. Therefore for a viscous \ufb02ow, it is easier to consider xci and \u03bbci as parameters for solving the \ufb02ow equations, along with the viscosity parameter \u03b1 (Chakrabarti & Das, 2004; Chattopadhyay & Das, 2007). In presence of cooling, one should also supply the accretion rate at xci in addition to (xci, \u03bbci, \u03b1). Presently, we \ufb01x accretion rate and vary \u03b2 to study the e\ufb00ect of cooling. Hence the existence of xco can be obtained only in presence of a shock. 7 \fIn this paper, we consider in\ufb01nitesimally thin adiabatic shock, generally expressed by the continuity of energy \ufb02ux, mass \ufb02ux and momentum \ufb02ux across the shock, and is generally called Rankine-Hugoniot (RH) shock conditions. Numerical simulations [e.g., Eggum et al. (1985); Molteni et al. (1994, 1996a)] have shown that thermally driven out\ufb02ows could originate from the hot inner part of the disc. When rotating matter accretes towards the black hole, centrifugal force acts as a barrier, inducing the formation of shock. 
At the shock, \ufb02ow temperature rises sharply as the kinetic energy of the \ufb02ow is converted into the thermal energy. This excess thermal energy may drive a signi\ufb01cant fraction of accreted material as out\ufb02ows. Thus bulk properties such as excess thermal driving along z direction is a legitimate process for mass ejections. The modi\ufb01ed Rankine-Hugoniot shock conditions in presence of mass loss are [Chattopadhyay & Das (2007), and references therein], E+ = E\u2212; \u02d9 M+ = \u02d9 M\u2212\u2212\u02d9 Mout = \u02d9 M\u2212(1 \u2212R \u02d9 m); \u03a0+ = \u03a0\u2212, (3) Assuming the jet to be launched with the same speci\ufb01c energy, angular momentum and density as the post-shock disc, the expression for relative mass out\ufb02ow rate is given by (Chattopadhyay & Das, 2007), R \u02d9 m = \u02d9 Mout/ \u02d9 M\u2212= Rvj(xs)A(xs) 4\u03c0 q 2 \u03b3x3/2 s (xs \u22121)a+u\u2212 , where, the compression ratio is de\ufb01ned as R = \u03a3+/\u03a3\u2212. Since, the information of R \u02d9 m is in the shock condition itself, we need to solve accretion-ejection equations simultaneously. The method to do so is as follows: (a) we assume R \u02d9 m = 0, ( \u02d9 m\u2212= \u02d9 m+), and with the supplied values of (xci, \u03bbci, \u03b1, \u03b2) we integrate Eqs. (1a-d) outwards along the sub-sonic branch of the post-shock region. Equation (3) is used to compute the pre-shock \ufb02ow quantities, which are employed to integrate outwards to \ufb01nd the location of xco. The location of the jump for which xco exists is the virtual shock location (x\u2032 s). (b) Once x\u2032 s is found out, we assign Ej = E(x\u2032 s) and \u03bbj = \u03bb(x\u2032 s) to solve the jet equations and compute the corresponding R \u02d9 m. (c) We use this value of R \u02d9 m in Eq. (3) and again calculate the shock location. (d) When the shock locations converge we have the actual shock location (xs), and the corresponding R \u02d9 m is the mass out\ufb02ow rate. In other words, we are launching jets with same E, \u03bb, and \u03c1 as that of the shock. Presently, we consider viscosity and synchrotron cooling process as the source of dissipation in the \ufb02ow. Viscosity reduces the angular momentum, while increases the energy as the \ufb02ow accretes towards the central object. Cooling process on the other hand, decreases the \ufb02ow energy inwards while leaving the angular momentum distribution una\ufb00ected. For proper understanding of 8 \fFig. 2. E(x) with x is plotted for \u03b2 = 0 (dashed), 0.01 (dotted) and 0.036 (solid). Other parameters are (Eci, \u03bbci)=(0.00182, 1.73) and \u03b1 = 0.001. the e\ufb00ect of viscosity and cooling on determining mass out\ufb02ow rates we need to \ufb01x (E, \u03bb) at some length-scale (around inner or outer boundary), and then vary \u03b1 and \u03b2. As xci is very close to the horizon, \ufb01xing (Eci, \u03bbci) at xci is almost equivalent to \ufb01xing the inner boundary \ufb02ow quantities. In Fig. 2, we plot E(x) with x for \u03b2 = 0 (dashed), 0.001 (dotted) and 0.0036 (solid), where the inner boundary \ufb02ow quantities are (Eci, \u03bbci)=(0.00182, 1.73) and \u03b1 = 0.001. For the cooling free solution (dashed), the energy of the \ufb02ow increases inwards due to viscosity. For solutions with signi\ufb01cant cooling (dotted, solid), the increase in energy due to viscous heating is completely over shadowed, causing the energy to decrease towards the black hole. 
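The iterative scheme (a)-(d) described above is essentially a fixed-point search on the shock location. The skeleton below sketches that loop in code; it is only a schematic of the stated procedure, and the helper routines are hypothetical placeholders standing in for the actual integrations of equations (1a-d), (2a-b) and the shock conditions (3).

# The helpers below are hypothetical placeholders: in a real solver they wrap the
# outward integration of equations (1a-d), the jet integration of (2a-b) and the
# shock conditions (3).
def find_shock_location(xci, lci, alpha, beta, R_mdot):
    raise NotImplementedError
def disc_values_at(xs):
    raise NotImplementedError    # returns (E(x_s), lambda(x_s)) of the post-shock disc
def solve_jet_from_shock(Ej, lj):
    raise NotImplementedError
def outflow_rate(jet, xs):
    raise NotImplementedError

def accretion_ejection_solution(xci, lci, alpha, beta, tol=1e-6, max_iter=100):
    """Schematic of steps (a)-(d): fixed-point iteration on the shock location."""
    R_mdot = 0.0                                              # (a) start with no mass loss
    xs = find_shock_location(xci, lci, alpha, beta, R_mdot)   #     virtual shock x_s'
    for _ in range(max_iter):
        Ej, lj = disc_values_at(xs)                           # (b) launch jet with E, lambda at the shock
        R_mdot = outflow_rate(solve_jet_from_shock(Ej, lj), xs)
        xs_new = find_shock_location(xci, lci, alpha, beta, R_mdot)  # (c) recompute the shock
        if abs(xs_new - xs) < tol:                            # (d) converged: actual x_s and R_mdot
            return xs_new, R_mdot
        xs = xs_new
    raise RuntimeError("shock location did not converge")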
Increase in cooling e\ufb03ciency signi\ufb01es, matter with higher energies at the outer boundary, falls into the black hole with identical Eci. If standing shocks form, then under these circumstances energy at the shock will increase with \u03b2. In the following, we discuss the role of viscous heating and synchrotron cooling in determining the mass out\ufb02ow rate. In Fig. 3, we present a global in\ufb02ow-out\ufb02ow solution. In the top panel, the Mach number M of the accretion \ufb02ow is plotted with log(x). The solid curve represents shock induced accretion solution. The in\ufb02ow parameters are xci = 2.444, \u03bbi = 1.75, \u03b1 = 0.005, and \u03b2 = 0.01 (for these parameters Eci = 0.0018). In the lower panel, the out\ufb02ow Mach number Mj is plotted with log(xj). In presence of mass loss, the shock forms at xs = 21.64 denoted by the vertical line in the top panel, and the out\ufb02ow is launched with energy and angular momentum at the shock (Es, \u03bbs = 0.00175, 1.766). The out\ufb02ow is plotted up to its sonic point (xjc = 68.83), and the corresponding relative mass out\ufb02ow rate is R \u02d9 m = 0.0816. 9 \fFig. 3. Upper panel: In\ufb02ow Mach number (M = u/a) with log(x). The in\ufb02ow parameters are xci = 2.444, \u03bbi = 1.75, \u03b1 = 0.005, and \u03b2 = 0.01 where, xs = 21.64, Es = 0.00175, \u03bbs = 1.766 xco = 166.57, \u03bbo = 1.799. The dotted curve is the shock free solution. Lower panel: Out\ufb02ow Mach number (Mj = vj/aj) with log(xj), the out\ufb02ow critical point xjc = 68.63 (rjc = 270.8), and the jet coordinates at the base is given by xjb = 12.2 (rjb = 21.24). The relative mass loss rate is R \u02d9 m = 0.0816. Fig. 4. Variation of R \u02d9 m with \u03b2 for \u03b1 = 0 \u2014 0.02 (left to right with d\u03b1 = 0.005). Eci = 0.0018 and \u03bbci = 1.75. 10 \fFig. 5. R \u02d9 m is plotted with \u03b2 for Eci = \u22120.001\u21920.003 (right to left, dEci = 0.001). Other parameters are \u03bbci = 1.73 and \u03b1 = 0.001. To present the global solution, Fig. 3 was obtained only for a set of input parameters, namely (Eci, \u03bbci, \u03b1, \u03b2). We would now proceed to \ufb01nd the explicit dependence of R \u02d9 m on these parameters. In Fig. 4, we plot the mass out\ufb02ow rates (R \u02d9 m) with the cooling parameter \u03b2, for \u03b1 = 0 \u2014 0.02 (left to right for d\u03b1 = 0.005). All the curves are drawn for Eci = 0.0018 and \u03bbci = 1.75. Figure 4 con\ufb01rms our earlier investigation that R \u02d9 m decreases with increasing viscosity parameter (Chattopadhyay & Das, 2007). However, it may be noticed that for \ufb01xed \u03b1, R \u02d9 m increases with \u03b2. For a given \u03b1, the energy at the shock increases with \u03b2 (e.g., Fig. 2), and since the post-shock region (i.e., CENBOL) is the base of the jet, the jets are launched with higher driving force. This causes R \u02d9 m to increase with \u03b2. It is to be noted, the two extreme curves (i.e., for \u03b1 = 0.015, 0.02) on the right show that, for \u03b2 = 0 there is no out\ufb02ow, but in presence of su\ufb03cient cooling steady jets reappear. As \u03b1 is increased, R \u02d9 m decreases due to the gradual reduction of su\ufb03cient driving at the jet base, and beyond a critical \u03b1 (say, \u03b1cri) out\ufb02ow rate vanishes (Chattopadhyay & Das, 2007). For \ufb02ows with \u03b1 > \u03b1cri, the required jet driving could be generated by considering su\ufb03ciently high \u03b2. 
In other words, to get steady out\ufb02ows in the realm \u03b1 > \u03b1cri, there is a non-zero minimum value of \u03b2 (say, \u03b2m) corresponding to each \u03b1. Furthermore, for each \u03b1 there is a cut-o\ufb00in R \u02d9 m at the higher end of \u03b2 (say, \u03b2cri), since standing shock conditions are not satis\ufb01ed there. Nonsteady shocks may still form in those regions, and the investigation of such phenomena will be reported elsewhere. In Fig. 5, R \u02d9 m is plotted with \u03b2 for Eci = \u22120.001 (solid), 0.0 (dotted), 0.001 (big dashed), 0.002 (small dashed) and 0.003 (dash-dotted). Other parameters are \u03bbci = 1.73 and \u03b1 = 0.001. For a given \u03b2, mass out\ufb02ow rate increases with Eci. Higher Eci corresponds to more energetic \ufb02ow, and if these \ufb02ows produce shock, we get higher R \u02d9 m. On the other hand, even for same Eci, higher shock 11 \fFig. 6. (a) Variation of R \u02d9 m with \u03b2 for \u03bbci = 1.73 (dotted) 1.75 (dashed) and 1.77 (solid). Eci = 0.0018 and \u03b1 = 0.001. (b) Variation of R \u02d9 m with Es, for parameters same as Fig. 6a. energy is ensured with the increase of \u03b2, and consequently higher R \u02d9 m are produced. The solutions corresponding to Eci = 0 (dotted) and Eci = \u22120.001 (solid) show that R \u02d9 m\u21920 as \u03b2\u21920. In other words, in presence of cooling, \ufb02ows with bound energies at xci may also produce out\ufb02ows. Thus it is clear that shock energy plays an important role in determining the rate of mass loss from the disc. Previous studies of computation of mass out\ufb02ow rates from inviscid and viscous disc showed that the angular momentum at the shock dictates the mass out\ufb02ow rates, because higher angular momentum produces higher centrifugal driving for the jet. This lead us to investigate the role of angular momentum of the disc in determining the mass out\ufb02ow rates, when cooling is present. In Fig. 6a, R \u02d9 m is plotted with \u03b2 for \u03bbci = 1.73 (dotted), 1.75 (dashed) and 1.77 (solid), where Eci = 0.00182, and \u03b1 = 0.001 are kept \ufb01xed for all the curves. For negligible cooling (\u03b2 \u223c0), higher angular momentum \ufb02ow generates higher R \u02d9 m. As the centrifugal pressure produces the shock, which in turn drives the jet, it is not surprising that \ufb02ows with larger angular momentum will produce higher R \u02d9 m. Similar trend is maintained for nonzero \u03b2. For a given \u03bbci, the energy at the shock (Es) increases with \u03b2. Thus the combined e\ufb00ects of centrifugal and thermal driving increase the mass out\ufb02ow rate. We do see that there is a cut-o\ufb00in R \u02d9 m corresponding to each angular momentum at \u03b2\u2265\u03b2cri. For lower angular momentum \ufb02ow \u03b2cri is higher. To illustrate the e\ufb00ects of thermal driving and centrifugal driving of the jet, in Fig. 6b, we have plotted R \u02d9 m with Es for \u03bbci = 1.73 (dotted), 1.75 (dashed) and 1.77 (solid), for the same 12 \fFig. 7. R \u02d9 m is plotted with Eci for \u03bbci = 1.73 (dotted), \u03bbci = 1.74 (dashed), \u03bbci = 1.75 (solid). Other parameters are \u03b1 = 0.001, and \u03b2 = 0.06. set of Eci and \u03b1 as in the previous \ufb01gure. It is to be remembered that Es is not a new parameter but is calculated at the shock for the same range of \u03b2 variation as in Fig. 6a. In the shaded region, R \u02d9 m is higher for higher \u03bbci. 
As long as the shock energy is similar, higher angular momentum results in greater centrifugal driving for the out\ufb02owing matter. However, lower angular momentum \ufb02ow can sustain higher energies across the shock [e.g., Fig. 3 of Das et al. (2001a)]. For high enough Es, the thermal driving starts to dominate over the centrifugal pressure, and results in higher R \u02d9 m even for lower angular momentum \ufb02ow. In Fig. 7, R \u02d9 m is plotted as a function of Eci, for various values of \u03bbci = 1.73 (dotted), \u03bbci = 1.74 (dashed), \u03bbci = 1.75 (solid). The other \ufb02ow parameters are \u03b1 = 0.001 and \u03b2 = 0.06. This \ufb01gure distinctly shows that even if the accreting \ufb02ow starts with unbound energy and produces shock induced out\ufb02ow, signi\ufb01cant cooling closer to the black hole turns the unbound energy to bound energy. 4 Astrophysical application In our solution procedure, we have employed three di\ufb00erent constant parameters \u03b2, \u03b7 and \u02d9 m to determine the cooling process. A cooling mechanism might depend on various other physical processes apart from its usual dependence on the \ufb02ow variables. In general, \u02d9 m regulates cooling, however to obtain a cooling free solution one needs to consider \u02d9 m = 0, which is meaningless. We have simpli\ufb01ed all such complications by introducing \u03b2 as a control-parameter for cooling. A simple inspection of Eq. (1d), shows that for a given set of (u, a, x), identical cooling rates may be obtained by rearranging the values of \u03b2, \u03b7 and 13 \f\u02d9 m. It must be noted that, introduction of \u03b2 and \u03b7 do not increase the parameters of our solution, instead these are used to control the cooling e\ufb03ciency and the magnetic \ufb01eld strength, about which there is no prior knowledge. In the previous section, we have \ufb01xed the values of \u02d9 m\u2212and \u03b7, and controlled the cooling term by \u03b2.In this section, we have \ufb01xed the value of \u03b2 to unity, and allowed physical parameters, such as \u02d9 m+ and \u03b7 to dictate the cooling term. It is a matter of interest to estimate how much matter, energy and angular momentum enter into the black hole. In the present paper, the amount of mass fed to the disc is given by \u02d9 m\u2212. The rate at which matter is being accreted into the black hole and the rate of mass loss are self-consistently computed as \u02d9 m+ and ( \u02d9 m\u2212\u2212\u02d9 m+). It has been shown in Chattopadhyay & Das (2007) that the speci\ufb01c angular momentum of the \ufb02ow close to the horizon, is almost same as \u03bbci. The actual value of E close to the black hole should be slightly higher than Eci. One has to quote the actual value of E close to the horizon. However, these numbers are obtained using pseudo-Newtonian potential and may not be consistent as general relativistic e\ufb00ects are important at such distances. We have applied our formalism to calculate the mass out\ufb02ow rates from two black hole candidates M87 and Sgr A\u2217. M87 is supposed to harbour a super massive black hole [MBH = 3\u00d7109M\u2299(Ford et al. , 1994)]. The estimated accretion rate is \u02d9 M\u2212\u223c0.13M\u2299yr\u22121 (Reynolds et al. , 1996). The mass of the central black hole and the accretion rate of Sgr A\u2217are MBH = 2.6\u00d7106M\u2299 (Schodel et al. , 2002) and \u02d9 M\u2212\u223c8.8\u00d710\u22127M\u2299yr\u22121 (Yuan et al. , 2002). 
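The Eddington-scaled accretion rates quoted just below follow from the masses and accretion rates listed here. A minimal conversion script (our own, using round-number constants and unit radiative efficiency, so the results agree with the quoted $\dot{m}_-$ only to within a few per cent) is:

# Convert the quoted accretion rates to Eddington units, mdot = Mdot / Mdot_Edd,
# with Mdot_Edd = L_Edd / c^2 and L_Edd = 1.26e38 (M/M_sun) erg/s.
M_SUN_G, YR_S, C_CMS = 1.989e33, 3.156e7, 2.998e10

def mdot_eddington(M_bh_msun, Mdot_msun_per_yr):
    L_edd = 1.26e38 * M_bh_msun                    # erg/s
    Mdot_edd = L_edd / C_CMS**2                    # g/s
    Mdot = Mdot_msun_per_yr * M_SUN_G / YR_S       # g/s
    return Mdot / Mdot_edd

print("M87   :", mdot_eddington(3.0e9, 0.13))      # ~1.9e-2
print("Sgr A*:", mdot_eddington(2.6e6, 8.8e-7))    # ~1.5e-4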
The accretion disc around the black hole in Sgr A* is supposed to be radiatively inefficient and of higher viscosity (Falcke, 1999). For both cases we have set $\beta = 1$, so the cooling mechanism is dictated purely by $\dot{m}$ and $\eta$. To simplify further, we have chosen $\eta = 0.01$ for both objects. The accretion rate (in terms of the Eddington rate) for M87 is $\dot{m}_- = 1.89\times 10^{-2}$ and that for Sgr A* is $\dot{m}_- = 1.47\times 10^{-4}$; therefore Sgr A* is dimmer than M87. With a proper choice of $\alpha$, $x_{ci}$ and $\lambda_{ci}$ (see Table 1), we compute $R_{\dot{m}}$ (and consequently $\dot{m}_+$) for both objects. The typical size of such a sub-Keplerian disc should be around a thousand Schwarzschild radii across the central object. Accordingly, we have set the outer boundary at $X_T = 500 r_g$, and have provided the typical value of the angular momentum at that distance ($\lambda_T$) for both objects. For M87, the computed values of the mass outflow rate and shock location are $R_{\dot{m}} = 0.073$ and $x_s = 40.57$. In the case of Sgr A*, the estimated values of the mass outflow rate and shock location are $R_{\dot{m}} = 0.1049$ and $x_s = 14.415$. Assuming the jet luminosity is significant only at the lobes (where the jet energy is mostly dissipated), the maximum luminosities of the M87 and Sgr A* jets, estimated from the computed values of the respective $R_{\dot{m}}$, are given in Table 1. Considering a 10% radiative efficiency at the jet lobe, the jet luminosities of both M87 and Sgr A* agree well with the observed values (Reynolds et al., 1996; Falcke & Biermann, 1999).

Table 1: Predicted values of $R_{\dot{m}}$ and jet power for M87 and Sgr A*.
Object | $M_{BH}$ ($M_\odot$) | $\dot{M}_-$ ($M_\odot$/yr) | $\alpha$ | $x_{ci}$ ($r_g$) | $\lambda_{ci}$ ($c\,r_g$) | $\dot{m}_+$ ($\dot{M}_{Edd}$) | $x_s$ ($r_g$) | $\lambda_T$ ($c\,r_g$) | $R_{\dot{m}}$ (%) | $L^{max}_{jet}$ (erg/s)
M87    | $3.0\times 10^{9}$ | 0.13 | 0.010 | 2.367 | 1.78 | $1.75\times 10^{-2}$ | 40.57 | 2.01 | 7.3 | $5.36\times 10^{44}$
Sgr A* | $2.6\times 10^{6}$ | $8.80\times 10^{-7}$ | 0.015 | 2.548 | 1.71 | $1.32\times 10^{-4}$ | 14.42 | 2.44 | 10.5 | $5.2\times 10^{39}$

Moreover, the size of the computed jet base for M87 is $\sim 2x_s \sim 80 r_g$. Junor et al. (1999) and Biretta et al. (2002) have estimated the base of the jet to be less than $100 r_g$ from the central black hole, and probably greater than $30 r_g$. Evidently, our estimate of the jet base agrees quite well with the observations. There is no stringent upper limit on the jet base for Sgr A*; however, our computation gives a result which is acceptable in the literature (Falcke, 1999). We have also provided an estimate of the angular momentum at $X_T$. For Sgr A*, our estimated $\lambda_T$ is comparable with the result of Coker & Melia (1997). However, no reliable estimate of $\lambda_T$ for M87 is currently available. In terms of physical units, the various flow variables for M87 are $\dot{M}_{out}\sim 0.009\,M_\odot\,{\rm yr}^{-1}$, $\dot{M}_+\sim 0.119\,M_\odot\,{\rm yr}^{-1}$, $E_{ci} = 3.1\times 10^{17}\,{\rm erg\,g}^{-1}$, $x_s\sim 3.61\times 10^{16}\,{\rm cm}$, $\lambda_{ci}\sim 4.75\times 10^{25}\,{\rm cm^2\,s^{-1}}$, and $\lambda_T\sim 5.36\times 10^{25}\,{\rm cm^2\,s^{-1}}$. Similarly, for Sgr A*, $\dot{M}_{out}\sim 9.1\times 10^{-8}\,M_\odot\,{\rm yr}^{-1}$, $\dot{M}_+\sim 7.77\times 10^{-7}\,M_\odot\,{\rm yr}^{-1}$, $E_{ci} = 4\times 10^{18}\,{\rm erg\,g}^{-1}$, $x_s\sim 1.11\times 10^{13}\,{\rm cm}$, $\lambda_{ci}\sim 3.96\times 10^{22}\,{\rm cm^2\,s^{-1}}$, and $\lambda_T\sim 5.64\times 10^{22}\,{\rm cm^2\,s^{-1}}$. In this paper, only a sub-Keplerian matter distribution is chosen for the accretion disc.
However, Chakrabarti & Titarchuk (1995) and Chakrabarti & Mandal (2006) have shown that if a mixture of Keplerian and sub-Keplerian matter is chosen, then the spectral properties of the disc is better understood. These assertions have been rati\ufb01ed for several black hole candidates (Smith et al. , 2001, 2002). Since matter close to the black hole must be sub-Keplerian, therefore regardless of their origin, Keplerian and sub-Keplerian matter mixes to produce sub-Keplerian \ufb02ow before falling onto the black hole. Such transition from two component to single component \ufb02ow has been shown by various authors [e.g., Fig. 4b, of Das et al. (2001b)]. The region where such transition occurs may be called \u2018transition radius\u2019 (XT). It must be noted that, XT is treated as the \u2018outer edge\u2019 of the disc in our formalism described so far. The energy (ET) and angular momentum (\u03bbT) at XT can then easily be expressed in terms of the accretion rate of the Keplerian component ( \u02d9 MK) and the subKeplerian component ( \u02d9 MSK) (Das et al. , 2001b). Once XT, ET, \u03bbT is known and the net accretion rate being \u02d9 M = \u02d9 MSK + \u02d9 MK, it is easy to calculate R \u02d9 m following our formalism. Thus, it is possible to predict R \u02d9 m from the spectrum of the accretion disc, if formalism of Chakrabarti & Titarchuk (1995) is applied on our solutions. 15 \f5 Concluding Remarks The main goal of this paper was to study how dissipative processes a\ufb00ect the jet generation in an advective disc model. Chattopadhyay & Das (2007) have shown that mass out\ufb02ow rates decrease with increasing viscosity parameter. In the present paper, we have investigated how the mass out\ufb02ow rate responds to the synchrotron cooling. The general method of the solution (succinctly described in Section 3.) is to supply xci, \u03bbci, \u03b1, \u03b2 and then integrate outwards to \ufb01nd the shock location (and consequently the mass out\ufb02ow rate). Needless to say, once the above four parameters are \ufb01xed, the solution determines \ufb02ow with unique outer boundary (i.e., at XT). Of the four parameters, if \u03b1 is increased, the solution corresponds to \ufb02ow with higher angular momentum and lower energy at the outer boundary. On the contrary, when \u03b2 is increased then the solution corresponds to higher energy but identical angular momentum \ufb02ow at the outer boundary. Consequently, more energetic \ufb02ows are allowed to pass through standing shock for higher \u03b2, and hence stronger jets are produced. We have also shown that, if cooling e\ufb03ciency is increased, then it is possible to produce jets even for those \u03b1-s for which R \u02d9 m is zero (e.g., Fig. 3). Furthermore, it has been shown that the jets are primarily centrifugal pressure driven even in presence of cooling. We notice that standing shocks in higher angular momentum \ufb02ow do not exist for higher cooling e\ufb03ciency, therefore steady jets are not produced. However, for higher \u03b2, low angular momentum \ufb02ow can generate high enough relative mass out\ufb02ow rates. We have applied our formalism on a couple of black hole candidates, namely, Sgr A\u2217and M87. Using the available accretion parameters of the above two objects as inputs, we have shown that one can predict observational estimates of jet power. Moreover, the typical size of the jet base (\u223c2xs) also agrees well with observations. 
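The quoted jet powers are consistent with simply converting the computed mass loss into energy, $L^{max}_{jet} \approx \dot{M}_{out}c^2 = R_{\dot{m}}\dot{M}_- c^2$; this is our own interpretation and arithmetic, not a calculation reproduced from the paper, but it recovers the tabulated numbers to within rounding.

M_SUN_G, YR_S, C_CMS = 1.989e33, 3.156e7, 2.998e10

def jet_power(R_mdot, Mdot_minus_msun_per_yr):
    Mdot_out = R_mdot * Mdot_minus_msun_per_yr        # M_sun / yr
    Mdot_out_gs = Mdot_out * M_SUN_G / YR_S           # g / s
    return Mdot_out, Mdot_out_gs * C_CMS**2           # (M_sun/yr, erg/s)

print("M87   :", jet_power(0.073, 0.13))      # ~0.009 M_sun/yr, ~5.4e44 erg/s
print("Sgr A*:", jet_power(0.1049, 8.8e-7))   # ~9.2e-8 M_sun/yr, ~5.2e39 erg/s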
Le & Becker (2005) had dealt with these two particular objects, with their methodology which also involve shocked accretion disc. The methodologies of the present paper and the work of Le & Becker (2005) is quite di\ufb00erent in the sense that, Le & Becker (2005) dealt with isothermal shock while our model is based on the adiabatic shock scenario. In Le & Becker (2005), the focus was on calculating the number densities and energy densities around an isothermal shock of an hot tenuous adiabatic rotating \ufb02ow, by \ufb01rst order Fermi acceleration process. The energy lost at the isothermal shock, drives a small fraction of in falling gas to relativistic energies. With the given observational estimates of black hole mass, accretion rate etc of M87 and Sgr A\u2217, they estimated the Lorentz factors of the jet. We on the other hand, have computed the thermally driven out\ufb02ows from the post-shock disc, where the jets are launched with the local values (speci\ufb01c energy, angular momentum and density) of the disc \ufb02uid at the shock. With input values of black hole mass, accretion rate, and proper choice of viscosity parameter, inner sonic point etc we predict the shock location, the mass out\ufb02ow rate. We check whether the 16 \fpredicted values are within the accepted limits or not. We do not estimate the terminal bulk Lorentz factor, since we believe one has to recast the whole framework into the relativistic domain as well as employ other accelerating processes (e.g., magnetic \ufb01elds etc). One may wonder at the veracity of the two di\ufb00erent processes employed to explain the observational estimates of jet quantities of M87 and Sgr A\u2217, in other words, whether the jets are generated by post-shock thermal driving (we have not investigated magneto-thermal driving since this is only hydrodynamic investigation), or the jets are launched by particle acceleration processes. If one can observationally estimate the rate at which mass being ejected from the accretion disc, probably then one can ascertain the dominant e\ufb00ect behind jet generation. If it can be established that indeed the rate of mass loss is negligible compared to the accretion rate then probably the formalism of Le & Becker (2005) is the more realistic jet generation mechanism. However, su\ufb03ce is to say, various numerical simulation results do show (for non-dissipative as well as dissipative \ufb02ows) that post-shock \ufb02ow thermally drive bipolar out\ufb02ows, and our e\ufb00ort has been to investigate how dissipative processes a\ufb00ect the relative mass out\ufb02ow rates. In this paper we have only discussed formation of steady jets, since we have considered only stationary shocks. Molteni et al. (1996b) have shown that, the periodic breathing of the CENBOL starts when the post shock in-fall timescale matches with the Bremsstrahlung cooling timescale. Presently, we have considered dissipative processes which are more e\ufb00ective in determining shock properties compared to Bremsstrahlung. Therefore, the dissipative processes considered in this paper, may trigger comparable or di\ufb00erent shockinstabilities in the disc than that has been reported earlier (Molteni et al. , 1996b). Since, the jet formation is primarily controlled by the properties of the shock, any non-steady behaviour of the shock will leave its signature on the jet. In particular, a signi\ufb01cant oscillation of the shock (both in terms of the oscillation frequency and its amplitude) may produce periodic ejections. 
We are studying dynamical behaviour of the shock in presence of viscosity and synchrotron cooling using fully time dependent simulation and results will be reported elsewhere. Acknowledgements SD was supported by KOSEF through Astrophysical Research Center for the Structure and Evolution of the Cosmos (ARCSEC), and IC was supported by the KOSEF grant R01-2004-000-10005-0. The authors thank U. Mukherjee for suggesting improvements in the manuscript. 17" + } + ], + "Indranil Chattopadhyay": [ + { + "url": "http://arxiv.org/abs/1605.00752v1", + "title": "Estimation of mass outflow rates from viscous relativistic accretion discs around black holes", + "abstract": "We investigated flow in Schwarzschild metric, around a non-rotating black\nhole and obtained self-consistent accretion - ejection solution in full general\nrelativity. We covered the whole of parameter space in the advective regime to\nobtain shocked, as well as, shock-free accretion solution. We computed the jet\nstreamline using von - Zeipel surfaces and projected the jet equations of\nmotion on to the streamline and solved them simultaneously with the accretion\ndisc equations of motion. We found that steady shock cannot exist {for $\\alpha\n\\gsim0.06$} in the general relativistic prescription, but is lower if mass -\nloss is considered too. We showed that for fixed outer boundary, the shock\nmoves closer to the horizon with increasing viscosity parameter. The mass\noutflow rate increases as the shock moves closer to the black hole, but\neventually decreases, maximizing at some intermediate value of shock\n{location}. The jet terminal speed increases with stronger shocks,\nquantitatively speaking, the terminal speed of jets $v_{{\\rm j}\\infty} > 0.1$\nif $\\rsh < 20 \\rg$. The maximum of the outflow rate obtained in the general\nrelativistic regime is less than $6\\%$ of the mass accretion rate.", + "authors": "Indranil Chattopadhyay, Rajiv Kumar", + "published": "2016-05-03", + "updated": "2016-05-03", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION Large amount of radiation emitted by astrophysical objects like microquasars and active galactic nuclei (AGNs) favours the scenario that such energy output is due to the conversion of gravitational energy of matter into heat and radiation as it falls into extremely relativistic objects like black holes (BHs). Microquasars are essentially X-ray binaries and are supposed to harbour a stellar mass BH (MBH \u223c10M\u2299), while AGNs harbour supermassive BH i.e. MBH \u223c106\u22129M\u2299. The radiation emitted by these objects in general contains a relatively low energy multi-coloured blackbody component and one or more power-law components in the higher energy limit. When the accretion disc is in a state, from which the power emitted maximizes in the higher energy region and the luminosity is low, it is called the low/hard (LH) state. When the power maximizes in the lower energy level, the disc is luminous and produces multi-coloured blackbody radiation, it is called the high/soft state (HS). There are many intermediate states (IM) which connects the two. Along with energetic photons, AGNs and microquasars also eject highly energetic, collimated and relativistic bipolar jets. Observations of a large number of microquasars showed that the jets are seen only when the accretion is in the LH or IM, but the jet is not seen when the accretion disc is in canonical HS spectral state (Gallo et. al. 2003; Fender et. al. 
2004; Fender & Gallo 2014), i.e.the jet states are correlated with the spectral states of the accretion disc. Such a correlation between spectral states and jet states cannot be made in AGNs, partly, because of the longer timescale associated with supermassive BHs and partly, due to possible lack of the periodic repetitions of the outer boundary condition of AGN accretion discs. However, the fact that timescales in AGNs and microquasars can be scaled by mass (McHardy et. al. 2006) tells us that the essential physics around super-massive and stellar mass BHs are similar. The \ufb01rst popular model of accretion disc around BH was proposed by Shakura & Sunyaev (1973) and Novikov & Thorne (1973), and is known as Keplerian disc or standard disc or Sakura-Sunyaev (SS) disc. It is characterized by matter rotating with local Keplerian angular velocity, with negligible infall velocity, and is geometrically thin but optically thick. Being optically thick, each annuli emits radiation which is thermalized with the matter. Each annulus has di\ufb00erent temperature and therefore the spectrum emitted is a sum of all the blackbody radiations from each of the annuli, i.e.multi-coloured blackbody spectrum. Indeed, the thermal radiation part of a BH candidate spectrum is well explained by a Keplerian disc. Although SS disc was very successful in explaining the thermal component of the spectrum c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 3 emitted by BH candidates, but it could not explain the hard powerlaw tail. The inner boundary condition of the SS disc is quite arbitrary and is chopped o\ufb00within the marginally stable orbit. The pressure gradient term and the advection term in SS disc are also poorly treated. It was realized that there should atleast be another component in the disc, which would behave like a Comptonizing cloud of hot electrons to produce the hard power-law tail (Sunyaev & Titarchuk 1980). Moreover, the inner boundary condition of BH dictates that matter crosses its horizon with the speed of light, and that the angular momentum of the \ufb02ow close to the horizon needs to be necessarily sub-Keplerian. Therefore, in addition to SS discs, investigations of accretion in sub-Keplerian regime also gained prominence, such as thick accretion discs (Paczy\u00b4 nski & Wiita 1980), advection-dominated accretion \ufb02ows or ADAF (Narayan et al. 1997), advective-transonic regime (Liang & Thompson 1980; Fukue 1987; Chakrabarti 1989). All these models start with exactly the same set of equations of motion i.e., Navier-Stokes equation in strong gravity, but di\ufb00er in boundary conditions. For example, if the radial advection term and the pressure gradient term are negligible, azimuthal shear is responsible for viscosity and the heat dissipated due to viscosity is thermalized locally and e\ufb03ciently radiated out, then the resulting disc is the SS disc. On the other hand, if only the advection term is negligible and the cooling is less e\ufb03cient, then the model is thick disc. The ADAF and the transonic regime are not subjected to such con\ufb01nement, infact, Lu et al. (1999) showed that global ADAF is indeed a subset of general transonic solutions. Recently, by playing around with the viscosity parameter and cooling e\ufb03ciency in the computational domain, Giri & Chakrabarti (2013) were able to generate both sub-Keplerian advective disc and Keplerian disc simultaneously. 
The Keplerian disc gives out soft photons, and subKeplerian \ufb02ow supplies hot electrons, if the disc has a shock transition. The post-shock disc behaves like a Comptonizing cloud, and produces the hard power-law photons. The transonic/advective disc has several advantages. It satis\ufb01es the inner boundary condition of the BH, i. e., matter crosses the horizon at the speed of light and therefore it is supersonic and sub-Keplerian. It implies that the existence of a single sonic point (the position where bulk velocity crosses the local sound speed) is guaranteed around a BH. However, depending on the angular momentum, there can be multiple sonic points. As a consequence, matter accelerated through the outer sonic point can be slowed down due to the presence of centrifugal barrier. This slowed down matter may impede the supersonic matter following it, and may cause shock transition (Fukue 1987; Chakrabarti 1989). Shock in BH accretion has been found to exist for inviscid \ufb02ow (Fukue 1987; Chakrabarti 1989; c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f4 Chattopadhyay & Kumar Aktar et al. 2015), dissipative \ufb02ow (Das 2007; Kumar & Chattopadhyay 2013), and has also been con\ufb01rmed in simulations (Molteni et al. 1996b; Lee et al. 2011; Das et al. 2014). The post-shock region of the disc (PSD), has some special properties. Apart from producing hard powerlaw photons, it was shown for an inviscid disc via numerical simulations, that the extra thermal gradient force in the PSD powers bipolar jets (Molteni et al. 1994, 1996a), and was later established for viscous disc as well (Lanzafame et al. 1998; Chattopadhyay & Das 2007; Das & Chattopadhyay 2008; Kumar & Chattopadhyay 2013; Das et al. 2014; Kumar et al. 2014). Moreover, since the jet originates from PSD (which extends from few to few tens of Schwarzschild radii) and not the entire disc, it satis\ufb01es the observational criteria that jets are generated from the inner part of the accretion disc (Junor et. al. 1999; Doeleman et. al. 2012). Most of the theoretical studies of accretion on to BHs have been in the domain of pseudoNewtonian potential (pNp) (Paczy\u00b4 nski & Wiita 1980) and \ufb01xed adiabatic index (\u0393) equation of state (EoS) of the \ufb02ow. Using pNp gravity potential instead of the Newtonian one has the advantage that, the Keplerian angular momentum distribution, the location of marginally stable orbit (rm), marginally bound orbit (rb), or, the photon orbit (rph) can be obtained exactly, as is obtained in general relativity (GR), but can still remain in the Newtonian regime of physics. However, according to relativity, matter cannot achieve the speed of light (c) outside the horizon, but, in pNp regime matter velocity exceed c outside the horizon. The e\ufb00ective potential of a rotating particle is zero on the horizon in GR, however, it is negative in\ufb01nity on the horizon if we use pNp. Moreover, in relativity the physics of \ufb02uid is di\ufb00erent from that of the particles. This arises because in relativistic equations of motions the thermal term, the angular momentum term etc, couples with the gravity. As a result, for conservative systems, the constants of motion are not the same in particles and \ufb02uids. While in pNp regime, the constants of motion in \ufb02uid and particles are identical. For viscous \ufb02ow, the shear tensor in relativity is much more complicated and contains many more terms when compared to the shear tensor in pNp regime. 
Therefore, solutions of relativistic equations for transonic accretion discs around BH have been few (for e.g. Liang & Thompson 1980; Lu 1985; Fukue 1987; Chakrabarti 1996) when compared with those in pNp regime and that too in the inviscid limit. The \ufb01rst consistent viscous advective accretion solution in pure GR was obtained by Peitz & Appl (1997). They derived the shear tensor from the \ufb01rst principle, and then approximated it with a simpler but accurate function. For inviscid \ufb02ow the constants of motions are the relativistic Bernoulli parameter (E = \u2212hut, h is the enthalpy and ut is the c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 5 covariant time component of the four velocity), the accretion rate, angular momentum and the entropy along a streamline. For viscous \ufb02ow, except the accretion rate, none of these are constant along the motion, and constants of motion need to be determined. The information of the constants of motion were not used at all by Peitz & Appl (1997), which resulted in a limited class of solutions. Moreover, they did not discuss the issue of massloss either. We would like to rectify that, i. e., to say we would like to obtain all possible accretion solutions using constants of motion and constants of integration, as well as, estimate the mass loss from the accretion solution. Another limitation of a large body of work on accretion-ejection solutions around compact objects is that, most of the work has been done assuming a \ufb01xed \u0393 equation of state (EoS), where, \u0393 is the adiabatic index. From classical \ufb02uid mechanics, we know that \u0393 is the ratio of speci\ufb01c heats, which turns out to be equal to the constant 5/3, if random motions of the constituent particles of the gas are negligible compared to c. However, if the random speeds of the particles is comparable to c, then \u0393 is not constant and the EoS becomes a combination of modi\ufb01ed Bessel\u2019s function of the inverse of temperature (Chandrasekhar 1939; Synge 1957; Cox & Giuli 1968). It can be trivially shown that the di\ufb00erent forms of the exact EoS obtained by the above three authors are equivalent (Vyas et al. 2015). Moreover, it has been shown that it is unphysical to use \ufb01xed \u0393 EoS when the temperature changes by a few orders of magnitude (Taub 1948). The \ufb01rst accretion solution using a relativistic EoS on to a Schwarzschild BH was obtained by Blumenthal & Mathews (1976). Takahashi (2007) regenerated the solutions of Peitz and Appl, but also obtained solutions with another form of viscosity using variable \u0393 EoS in Kerr-Schild metric. However, the EoS used was again for a \ufb02uid composed of similar particles. Fluids around BH should be fully ionized given the temperature associated with these \ufb02uids, and ionized single species \ufb02uid can only be electron-positron \ufb02ow which cannot exist for thousands of Schwarzschild radii around the BH. Blumenthal & Mathews (1976) however, hinted how to describe a \ufb02uid composed of di\ufb00erent particles. Fukue (1987) in a seminal paper solved accretion solutions in the advective domain for electron-proton \ufb02ow, and predicted the possibility of accretion shocks around BH. The inherent problem of using the exact relativistic EoS in simulation codes is that, it is a ratio of modi\ufb01ed Bessels function which make transformation between primitive variables and state variables non-trivial. 
To circumvent this problem we obtained an approximate EoS which is very accurate (Ryu et al. 2006) for single species \ufb02uid, and then extended it to multi-species \ufb02uid (Chattopadhyay 2008; Chattopadhyay & Ryu 2009; c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f6 Chattopadhyay & Kumar Chattopadhyay & Chakrabarti 2011). The adiabatic EoS was also obtained for such a \ufb02ow by integrating the entropy generation equation without source terms (Kumar et al. 2013). The comparison of Chattopadhyay-Ryu (CR) EoS with an exact one showed negligible difference between the two (Vyas et al. 2015). The approximate CR EoS was also used in the pNp regime to study dissipative accretion \ufb02ow (Kumar & Chattopadhyay 2014), which showed that accretion shocks may exist for very high viscosity, as well as, high accretion rates. Moreover, depending on these \ufb02ow parameters such discs can be of low luminosity, as well as, can emit above the Eddington limit. Interesting as it may be, but we know pNp regime can only be considered to be qualitatively correct, and a general relativistic viscous disc should be considered to fully understand the behaviour of such discs. Investigations of general relativistic, dissipative, advective accretion discs around BH, described by relativistic EoS has not been done for multi-species EoS, in addition, estimation of mass loss from such disc has not been undertaken as well. Apart from the highly non-linear equations of motion in GR to contend with, it is also a fact that in curved space time, the constant angular momentum surfaces are special surfaces called von-Zeipel surfaces (e. g. Chakrabarti 1985, and references therein). Jets launched with some angular momentum would follow these surfaces. So an accretion-ejection system in GR is signi\ufb01cantly di\ufb00erent from pursuing the same study in pNp regime. In this paper, we obtain a simultaneous, self-consistent bipolar jet solution from a general relativistic viscous disc around a BH, described by multi-species relativistic EoS. In the next section, we present the equations of motion for the accretion disc and the jet, and also a brief description of the EoS used. In Section 3, we present the solution procedure of the equations of motions. In Section 4, we present the results, and then present our concluding remarks in Section 5. 2 ASSUMPTIONS AND EQUATIONS In this section, we \ufb01rst present the equations of motion governing the accretion disc and then those governing the matter leaving the disc as bipolar jets. Although equations of motion for both disc and jets are conservation of four-momentum and four-mass \ufb02ux, but since the \ufb02ow geometry of the disc and that of the jet are di\ufb00erent, we will separately present the two sets of equations. In Fig. 1, a cartoon diagram of the disc jet system is presented. The accretion disc occupies the region around the equatorial plane, while the jet \ufb02ows about the c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 7 Figure 1. Cartoon diagram of disc-jet system. The arrows show the direction of motion. The disc \ufb02ow geometry is on and around the equatorial plane, while the jet \ufb02ow geometry is about the axis of symmetry. The post-shock disc or PSD and the pre-shock disc are shown. The jet streamline is also mentioned. Here BH stands for the black hole. axis of symmetry. The jet geometry is signi\ufb01cantly di\ufb00erent from the pNp prescription and will be described in Section 2.3. 
2.1 Equations governing accretion disc The energy momentum tensor for the viscous \ufb02ow is T \u00b5\u03bd = (e + p)u\u00b5u\u03bd + pg\u00b5\u03bd + t\u00b5\u03bd, (1) where e, p and u\u00b5 are the local energy density, local gas pressure and four-velocities, respectively. The inverse of the metric tensor components is g\u00b5\u03bd and Greek indices \u00b5, \u03bd represent the space-time coordinates. Here, t\u00b5\u03bd is viscous stress tensor and considering it is only the shear that gives rise to the viscosity, then t\u00b5\u03bd = \u22122\u03b7\u03c3\u00b5\u03bd, where \u03b7 is the viscosity coe\ufb03cient. The shear tensor has the general form (Peitz & Appl 1997) \u03c3\u00b5\u03bd = 1 2 \u0014 (u\u00b5;\u03b3h\u03b3 \u03bd + u\u03bd;\u03b3h\u03b3 \u00b5) \u22122 3\u0398exph\u00b5\u03bd \u0015 , (2) where h\u00b5\u03bd = g\u00b5\u03bd + u\u00b5u\u03bd is the projection tensor, and \u0398exp = u\u03b3 ;\u03b3 is expansion of the \ufb02uid world line. Equation (2) can be rewritten as \u03c3\u00b5\u03bd = 1 2 \u0014 (u\u00b5;\u03bd + u\u03bd;\u00b5 + a\u00b5u\u03bd + a\u03bdu\u00b5) \u22122 3\u0398exph\u00b5\u03bd \u0015 , (3) where a\u00b5 = u\u00b5;\u03b3u\u03b3 is the four-acceleration. The covariant derivative of covariant component of four-velocity is de\ufb01ned as u\u00b5;\u03b3 = u\u00b5,\u03b3 \u2212\u0393\u03b2 \u00b5\u03b3u\u03b2, where \u0393\u03b2 \u00b5\u03b3 is the Christo\ufb00el symbol. We c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f8 Chattopadhyay & Kumar choose the geometric units where G = Mbh = c = 1 (G is the gravitational constant, Mbh is the mass of the BH), which has been used in all the equations, unless mentioned otherwise. The governing equations of the relativistic \ufb02uid are T \u00b5\u03bd ;\u03bd = 0, (\u03c1u\u03bd);\u03bd = 0. (4) The relativistic Navier Stokes equation is obtained by projecting the energy momentum conservation along the ith direction i. e. hi \u00b5T \u00b5\u03bd ;\u03bd = 0 (i = 1, 2, 3) and can be written as, [(e + p)u\u03bdui ;\u03bd + (gi\u03bd + uiu\u03bd)p,\u03bd] + hi \u00b5t\u00b5\u03bd ;\u03bd = 0 (5) The energy generation equation or the \ufb01rst law of thermodynamics is u\u00b5T \u00b5\u03bd ;\u03bd = 0 and is given by, u\u00b5 \u0014\u0012e + p \u03c1 \u0013 \u03c1,\u00b5 \u2212e,\u00b5 \u0015 = Q+, (6) where, Q+ = t\u00b5\u03bd\u03c3\u00b5\u03bd is the viscous heating term and we ignore cooling terms, to stress on the e\ufb00ect of viscous dissipation. Here \u03c1 is the mass density of the \ufb02ow and h is the speci\ufb01c enthalpy of the \ufb02ow, h = e + p \u03c1 . (7) We have considered only the r \u2212\u03c6 component of relativistic shear tensor. This would on one hand simplify the equations tremendously, and on the other hand would allow us to directly compare with the plethora of work done with pseudo potentials (Becker et al. 2008; Kumar & Chattopadhyay 2013; Kumar et al. 2014; Kumar & Chattopadhyay 2014). The r \u2212\u03c6 component of the shear tensor (equation 3) is written as (Peitz & Appl 1997) 2\u03c3r \u03c6 = ur ;\u03c6 + grru\u03c6;r + aru\u03c6 + a\u03c6ur \u22122 3\u0398expuru\u03c6. (8) Following Peitz & Appl (1997), we neglect derivatives of ur, ar and \u0398exp and equation (8) becomes 2\u03c3r \u03c6 = (grr + urur)du\u03c6 dr \u22122u\u03c6 r grr. 
(9) In this paper, we consider only the simplest BH metric for the accretion disc, namely the Schwarzschild metric, in which the non-zero metric components are gtt = \u2212 \u0012 1 \u22122 r \u0013 ; grr = \u0012 1 \u22122 r \u0013\u22121 ; g\u03b8\u03b8 = r2; g\u03c6\u03c6 = r2sin2\u03b8. For accretion, the \ufb02ow is around the equatorial plane; therefore, the equations are obtained at \u03b8 = \u03c0/2 and assumed hydrostatic equilibrium along the transverse direction. With these assumptions, we write down the radial component of Navier Stokes equation (5), ur dur dr + 1 r2 \u2212(r \u22123)u\u03c6u\u03c6 + (grr + urur) 1 e + p dp dr = 0, (10) c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 9 the integrated form of the azimuthal component of equation (5), \u2212\u03c1ur(L \u2212L0) = 2\u03b7\u03c3r \u03c6, (11) where L = hu\u03c6 = hl and L0 are the local bulk angular momentum and bulk angular momentum at the horizon of the BH, respectively. It must be remembered that while l = u\u03c6 is a conserved quantity in the absence of dissipation for particles, for \ufb02uid L is the corresponding conserved quantity. The speci\ufb01c angular momentum for \ufb02uid is therefore \u03bb = \u2212u\u03c6/ut, but for particles it is l or u\u03c6. Moreover, the radial three velocity is de\ufb01ned as v2 \u02c6 r = \u2212(urur)/(utut) and in the local corotating frame v2 = \u03b32 \u03c6v2 \u02c6 r (Lu 1985). The associated Lorentz factors being \u03b3v = (1 \u2212v2)\u22121/2, \u03b3\u03c6 = (1 \u2212v2 \u03c6)\u22121/2 and the total Lorentz factor is \u03b3 = \u03b3v\u03b3\u03c6. Moreover, v\u03c6 = p \u2212u\u03c6u\u03c6/utut = \u221a \u2126\u03bb, where \u2126= u\u03c6/ut. The hydrostatic equilibrium along the transverse direction gives local disc height expression (Lasota 1994; Ri\ufb00ert & Herold 1995; Peitz & Appl 1997), H = pr3 \u03c1\u03b32 \u03c6 !1/2 . (12) The \ufb01rst law of thermodynamics (equation 6) ur \u0014\u0012e + p \u03c1 \u0013 \u03c1,r \u2212e,r \u0015 = tr\u03c6\u03c3r\u03c6 (13) Integrating mass-conservation equation, we obtain the expression of the mass accretion rate, \u2212\u02d9 M = 4\u03c0\u03c1Hurr. (14) We can now de\ufb01ne the dynamical viscosity coe\ufb03cient and it is \u03b7 = \u03c1\u03bd, where the kinematic viscosity is given by \u03bd = \u03b1arfc, a is the sound speed (see equation 23) and fc = (1 \u2212v2)2. Since \u03c3r\u03c6 may or may not be equal to zero on the horizon, with the choice of fc we have made tr\u03c6|horizon = 0 (see Peitz & Appl 1997, for details). The constant of motion can be obtained by integrating equation (10), log(E) = \u22121 2log(1 \u2212v2) + 1 2log \u0012 1 \u22122 r \u0013 \u2212 Z (r \u22123)l2 r3(r \u22122)\u03b32 v dr + Z 1 e + pdp. (15) The last term of equation (15) with the help of equations (7) and (13) can be written as Z 1 e + pdp = Z 1 h dp \u03c1 = Z 1 h \u0014 dh \u2212tr\u03c6\u03c3r\u03c6 \u03c1ur dr \u0015 . (16) Using equation (11) and relation tr\u03c6 = \u22122\u03b7\u03c3r\u03c6 in equation (16), we get, Z 1 e + pdp = Z 1 h \u0014 dh + ur(L \u2212L0)2 2\u03bdr(r \u22122) dr \u0015 . (17) Combining equation (17) in equation (15) and re-arranging, we get E = h\u03b3v q 1 \u22122 r exp(Xf) , (18) c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f10 Chattopadhyay & Kumar where Xf = Z \u0014\u0012r \u22123 r \u22122 \u0013 l2 r3\u03b32 v \u2212ur(L \u2212L0)2 2\u03bdhr(r \u22122) \u0015 dr. 
E is the constant of motion in the presence of viscous dissipation and may be called the relativistic Bernoulli constant in the presence of viscosity. It is interesting to note that in the absence of viscosity, the \ufb01rst term in the parentheses of Xf is ln(\u03b3\u22121 \u03c6 ), and so E(inviscid) = h\u03b3v\u03b3\u03c6 p (1 \u22122/r) = \u2212hut = E, i.e. the relativistic Bernoulli constant. It is indeed intriguing to note that E also has the same dimension of E, i.e. of speci\ufb01c energy, but the former is a constant of motion while E is not. It must be noted that E incorporates the information of motion locally, i.e. motion along radial and azimuthal direction (quasi-one-dimensional), and the e\ufb00ect of gravity through \u2212ut, while the information of internal energy is through h. Therefore, E contains the information of viscous heat dissipation (it increases where viscosity is e\ufb00ective), but not the angular momentum transport due to viscosity; as a result, it is not a constant of motion. However, E contains all the information carried by E, as well as the information of angular momentum transport, which makes E constant. So it might be physically more relevant to consider E as the speci\ufb01c energy for dissipative \ufb02ow than E. Since speci\ufb01c energy expression in GR is not additive, so all the terms are not apparent; however, a comparison of the constants of motion for dissipative and inviscid Newtonian \ufb02ow might be instructive. From Gu & Lu (2004); Becker et al. (2008) and Kumar & Chattopadhyay (2013, 2014), one may write down the grand speci\ufb01c energy or generalized Bernoulli parameter for Newtonian \ufb02uid as E(pNp) = 1 2v2 pNp + hpNp \u2212\u03bb2 pNp 2r2 + \u03bbpNp\u03bb0pNp r2 \u2212 1 2(r \u22121). (A) The canonical Bernoulli parameter for Newtonian \ufb02uid is E(pNp) = 1 2v2 pNp + hpNp + \u03bb2 pNp 2r2 \u2212 1 2(r \u22121). (B) In the above, the su\ufb03x pNp denotes that the \ufb02ow variables are in pNp regime, \u03bb0pNp is the speci\ufb01c angular momentum at rg and the last term on r.h.s of both the equations (A and B) is the gravity term in pNp. It is clear that while E(pNp) contains the local information of radial motion (\ufb01rst term), azimuthal motion (\u03bbpNp), gravity and the thermal (hpNp) terms, E contains all of them, as well as the angular momentum transport term (third and fourth terms of equation A). Clearly, if there is no viscosity, then \u03bb0pNp = \u03bbpNp, so E(pNp) \u2192E(pNp). Therefore, one may say E in equation (18) is the constant of motion for viscous, relativistic \ufb02uid, equivalent to the one obtained in the pseudo-Newtonian limit (e.g., Gu & Lu 2004; Kumar & Chattopadhyay 2013, 2014). c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 11 2.2 Relativistic EoS and the equations of motion: To solve the equations of motion, we need a closure relation between thermodynamic quantities called the EoS. In this subsection, we will start by expressing the variables in physical units, and at the end while applying into equations of motion we will impose the geometric units. We consider that the \ufb02uid is composed of electrons (e\u2212), positrons (e+) and protons (p+) of varying proportions, but always maintaining the overall charge neutrality: ne\u2212= np+ + ne+, here ns is the number density of the sth species of the \ufb02uid. 
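For later reference, once the temperature dependence $f(\Theta)$ written out in equations (21)-(23) below is in hand, the polytropic index, adiabatic index and sound speed are easy to tabulate numerically. The sketch below is our own (it is not the paper's code) and uses a simple central difference for $df/d\Theta$ in place of the analytic derivative.

CHI = 1.0 / 1836.15          # m_e / m_p

def f_of_theta(theta, xi):
    # f as defined after equation (22); xi = n_p/n_e, theta = kT/(m_e c^2)
    return (2.0 - xi)*(1.0 + theta*(9.0*theta + 3.0)/(3.0*theta + 2.0)) \
         + xi*(1.0/CHI + theta*(9.0*theta + 3.0/CHI)/(3.0*theta + 2.0/CHI))

def eos_quantities(theta, xi, eps=1e-6):
    f = f_of_theta(theta, xi)
    N = 0.5*(f_of_theta(theta + eps, xi) - f_of_theta(theta - eps, xi))/(2.0*eps)
    Gamma = 1.0 + 1.0/N
    a2 = 2.0*Gamma*theta/(f + 2.0*theta)     # equation (23)
    return N, Gamma, a2

for theta in (1e-3, 1.0, 1e3):               # cold to ultra-relativistic e-p flow (xi = 1)
    print(theta, eos_quantities(theta, xi=1.0))   # Gamma drifts from ~5/3 towards 4/3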
The mass density is given by Chattopadhyay (2008) and Chattopadhyay & Ryu (2009), \u03c1 = \u03a3inimi = ne\u2212me\u2212[2 \u2212\u03be(1 \u22121/\u03c7)] = ne\u2212me\u2212\u02dc \u03c4, (19) where, \u03c7 = me\u2212/mp+, \u03be = np+/ne\u2212is the composition parameter and \u02dc \u03c4 = [2 \u2212\u03be(1 \u22121/\u03c7)]. The electron and proton masses are me\u2212and mp+, respectively. For single temperature \ufb02ow, the isotropic pressure is given by p = \u03a3ipi = 2ne\u2212kT = 2ne\u2212me\u2212c2\u0398 = 2\u03c1c2\u0398 \u02dc \u03c4 . (20) The EoS for multi-species \ufb02ow is (Chattopadhyay 2008; Chattopadhyay & Ryu 2009) e = \u03a3iei = \u03a3 \u0014 nimic2 + pi \u00129pi + 3nimic2 3pi + 2nimic2 \u0013\u0015 . (21) The non-dimensional temperature is de\ufb01ned with respect to the electron rest mass energy, \u0398 = kT/(me\u2212c2). Using equations (19) and (20), the expression of the energy density in equation (21) simpli\ufb01es to e = ne\u2212me\u2212c2f = \u03c1e\u2212c2f = \u03c1f \u02dc \u03c4 , (22) where f = (2 \u2212\u03be) \u0014 1 + \u0398 \u00129\u0398 + 3 3\u0398 + 2 \u0013\u0015 + \u03be \u0014 1 \u03c7 + \u0398 \u00129\u0398 + 3/\u03c7 3\u0398 + 2/\u03c7 \u0013\u0015 . The expressions of the polytropic index, the adiabatic index and the sound speed are given as, N = 1 2 d f d\u0398; \u0393 = 1 + 1 N , and a2 = \u0393p e + p = 2\u0393\u0398 f + 2\u0398. (23) Integration of \ufb01rst law of thermodynamics (equation 13) by assuming adiabatic \ufb02ow (Q+ = 0) and using the EoS (equation 22), gives us the adiabatic relation of multi-species relativistic \ufb02ow (Chattopadhyay & Kumar 2013; Kumar et al. 2013), \u03c1 = K exp(k3) \u03983/2(3\u0398 + 2)k1(3\u0398 + 2/\u03c7)k2, (24) where k1 = 3(2 \u2212\u03be)/4, k2 = 3\u03be/4 and k3 = (f \u2212\u02dc \u03c4)/(2\u0398) and K is the constant of entropy. Equation (24) is the generalized version of p = K\u03c1\u0393. Combining equations (24) and (14), we c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f12 Chattopadhyay & Kumar get the expression of entropy accretion rate, \u02d9 M = \u02d9 M 4\u03c0K = exp(k3)\u03983/2(3\u0398 + 2)k1(3\u0398 + 2/\u03c7)k2Hrur. (25) Re-arranging equations (10-14) with the help of equations (9), (7), (19), (20) and (22) in geometric units, we present the spatial derivative of \ufb02ow variables v, l and \u0398, dv dr = N D , (26) where N = \u2212 1 r(r \u22122) + (r \u22123 r \u22122) l2 r3\u03b32 v + 2a2 \u0393 + 1 \u00d7 \u0014 e \u03c4ur(L \u2212L0)2 8\u03bdr(r \u22122)(N + 1)\u0398 + 5r \u22128 2r(r \u22122) \u2212 l2 r2\u03b32 \u00121 l dl dr \u22121 r \u0013\u0015 D = \u03b32 v \u0014 v \u22122a2 \u0393 + 1 \u0012 l2 r2\u03b32v + 1 v \u0013\u0015 . Here, D contains an extra term l2v/(r2\u03b32) compared to the inviscid case (Chattopadhyay & Chakrabarti 2011). There is \u03b3\u03c6 term in the expression of disc height (equation 12). The radial derivative of equation (14) implies that the radial derivative of the speci\ufb01c angular momentum will be non-zero, which causes the extra term to appear. There are many height prescriptions (Lasota 1994; Ri\ufb00ert & Herold 1995; Peitz & Appl 1997), and choice of any one of them apart from the one used, will not a\ufb00ect the result qualitatively. Then, dl dr = \u0014 \u2212ur(L \u2212L0) \u03bd(1 \u22122 r) + 2l r \u0015 (1 \u2212v2). 
(27) Moreover, d\u0398 dr = \u2212 e \u03c4ur(L \u2212L0)2 2\u03bdr(r \u22122)(2N + 1) \u2212 2\u0398 2N + 1 \u00d7 \u0014 5r \u22128 2r(r \u22122) + \u03b32 v \u00121 v + v l2 r2\u03b32 \u0013 dv dr \u2212 l2 r2\u03b32 \u001a1 l dl dr \u22121 r \u001b\u0015 . (28) These di\ufb00erential equations are integrated by using fourth order Runge Kutta numerical method with the help of using critical point conditions and l\u2032Hospital rule at critical point. 2.2.1 Sonic point equations Mathematical form of critical point equation is dv/dr = N /D = 0/0, which gives two equations as, \u0014 1 \u2212 2 \u0393c + 1 \u0012 a2 cl2 c r2 c\u03b32 c + a2 c v2 c \u0013\u0015 = 0 (29) c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 13 and \u2212 1 rc(rc \u22122) + \u0012rc \u22123 rc \u22122 \u0013 l2 c r3 c\u03b32 vc + 2a2 c \u0393c + 1 (30) \u00d7 \u0014 e \u03c4ur c(Lc \u2212L0)2 8\u03bdcrc(rc \u22122)Nc\u0393c\u0398c + 5rc \u22128 2rc(rc \u22122) + l2 c rc\u03b32 c \u0012 ur c(Lc \u2212L0) \u03bdclc\u03b32 vc(rc \u22122) \u22121 \u22122v2 c r2 c \u0013\u0015 = 0. Here, the subscript \u2018c\u2019 denotes the same physical quantities described in equations (26-28), but evaluated at the location of the critical point. The velocity gradient on the sonic point, i.e. (dv/dr)c, is obtained by employing l\u2032Hospital rule. 2.2.2 Relativistic shocks for viscous \ufb02ow The relativistic shock conditions were \ufb01rst obtained by Taub (1948), which for viscous \ufb02ow in the presence of mass-loss are \u02d9 M+ = \u02d9 M\u2212\u2212\u02d9 Mo (31) [\u03a3h\u03b32 vvv + W] = 0 (32) [ \u02d9 J] = 0 (33) [ \u02d9 E] = 0 (34) where, \u02d9 J = \u02d9 ML0 = \u02d9 M(L \u22122\u03bd\u03c3r \u03c6/ur), \u02d9 E = \u02d9 ME, \u03a3 = 2\u03c1H and W = 2pH. We have solved four shock conditions (31-34) simultaneously, where viscous shear tensor (\u03c3r \u03c6) is continuous across the shock and we obtained the relation between pre-shock (su\ufb03x \u2018\u2212\u2019) and post-shock (su\ufb03x \u2018+\u2019) \ufb02ow variables, L\u2212= L+ + (2\u03c3r \u03c6|+) \u0014\u03bd+ u+ \u2212\u03bd\u2212 u\u2212 \u0015 ; h\u2032 \u2212u2 \u2212\u2212k1u\u2212+ 2\u0398\u2212= 0; k2 \u2212exp(Xf \u2212)h\u2032 \u2212\u03b3v\u2212= 0, (35) where, k1 = (1 \u2212R \u02d9 m)(h\u2032 +u2 + + 2\u0398+)/u+, R \u02d9 m = \u02d9 Mo/ \u02d9 M\u2212, k2 = exp(Xf +)h\u2032 +\u03b3v+, h\u2032 = (f + 2\u0398) and u = v\u03b3v. Here, Xf \u2212= (fl/f\u03b3)2Xl++fuf 2 LXL+/(f\u03bdfh), Xl+ = R ( r\u22123 r\u22122) l2 + r3\u03b3v2 +dr, XL+ = \u2212 R ur +(L+\u2212L0)2 2\u03bd+h+r(r\u22122)dr, fl = l\u2212/l+, f\u03b3 = \u03b3v\u2212/\u03b3v+, fu = ur \u2212/ur +, fL = (L\u2212\u2212L0)/(L+\u2212L0), f\u03bd = \u03bd\u2212/\u03bd+, fh = h\u2212/h+, and Xf + = Xl+ + XL+. From equation (11), viscous shear tensor can be written as 2\u03c3r \u03c6|+ = \u2212u+(L+ \u2212L0)/\u03bd+. 2.3 Out\ufb02ow equations The jet being tenuous, we idealize it to be inviscid; therefore, the energy momentum tensor of jet \ufb02uid should be ideal. The general form of the equations of motion would be similar c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f14 Chattopadhyay & Kumar (equation 4); however, the geometry is entirely di\ufb00erent (see Fig. 1). For the jet we de\ufb01ne, \u03d1i = ui j ut j and \u03d1i = \u2212uij utj , (36) where i = (r, \u03b8, \u03c6) and \u2018j\u2019 implies jet quantities and should not be confused with vector or tensor components. 
Here, \u03d1i and \u03d1i are the component of \u2018transport\u2019 velocity (also called as coordinate velocity) and the respective momentum per unit inertial mass (Chakrabarti 1985). The azimuthal three-velocity of the jet is de\ufb01ned as v\u03c6 j = (\u03d1\u03c6\u03d1\u03c6)1/2 = (\u2126j\u03bbj)1/2, where \u03bbj, the speci\ufb01c angular of the jet, is constant along the \ufb02ow. The three-velocity of the jet along the stream line is given by v2 p = \u03d1r\u03d1r + \u03d1\u03b8\u03d1\u03b8. The surfaces of constant angular momentum for jets in GR are VZS where the von Zeipel parameter is constant (Kozlowski et. al. 1978; Chakrabarti 1985). The von Zeipel parameter is de\ufb01ned as Z\u03c6 = \u0012\u03d1\u03c6 \u03d1\u03c6 \u00131/2 = \u0012 \u2212gtt g\u03c6\u03c6 \u00131/2 = rj sin\u03b8j (1 \u22122/rj)1/2. (37) Equation (37) de\ufb01nes the streamline. The angular momentum of jets would be related to the von Zeipel parameter (Chakrabarti 1985) \u03d1\u03c6 = c\u03c6Zn \u03c6, (38) where c\u03c6 and n are some constant parameters. Using equation (38) along with EoS (equation 22), the de\ufb01nitions of h (equation 7) and Z\u03c6 (equation 37) while integrating the jet equations of motion gives us the constant of motion of the jet, which is similar to the Bernoulli parameter along the streamline of the jet, \u211cj = \u2212hjutj[1 \u2212c2 \u03c6Z(2n\u22122) \u03c6 ]\u03b2, (39) where utj = \u2212(1 \u22122/rj)1/2\u03b3j, \u03b3j = \u03b3vj\u03b3\u03c6j, \u03b3vj = 1/ q (1 \u2212v2 j ), \u03b3\u03c6j = 1/ q (1 \u2212c2 \u03c6Z(2n\u22122) \u03c6 ), vj = \u03b3\u03c6jvp and \u03b2 = n/(2n \u22122). The mass out\ufb02ow equation can be written as, \u02d9 Mo = \u03c1jup j Aj, (40) where \u03c1j, up j = \u221agpp\u03b3vjvj and Aj are jet mass density, jet four-velocity along the VZS and area of jet cross-section, respectively. The expression of gpp = 1/h2 p is de\ufb01ned in Appendix A. And similar to the accretion disc equations, we can also derive the entropy-out\ufb02ow rate for the jet, and is de\ufb01ned as \u02d9 Mj = \u02d9 Mo 2\u03c0K = exp(k3) \u03983/2 j (3\u0398j + 2)k1(3\u0398j + 2/\u03c7)k2up j Aj 2\u03c0. (41) If there are no shocks in jets, then \u02d9 Mj will remain constant along the streamline. The di\ufb00erential form of equation (39) with the help of equations (40) and (24) and after some c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 15 manipulations is obtained as dvj drj = a2 j Aj dAj drj \u2212 a2 j hp dhp drj \u2212 1 rj(rj\u22122) vj\u03b32 vj[1 \u2212 a2 j v2 j ] = Nj Dj (42) and d\u0398j drj = \u2212\u0398j Nj \u0014\u03b32 vj vj dvj drj + 1 Aj dAj drj \u22121 hp dhp drj \u0015 . (43) Here, expression of Aj is de\ufb01ned in equation (50) in Section 3.3. It is to be noted that (dAj)/(Ajdrj) = (rj \u22121)/[rj(rj \u22122)] and (dhp)/(hpdrj) = (dh1)/(h1drj) \u2212(dh2)/(h2drj) \u2212 tan\u03b8j(d\u03b8j/drj)\u22121/[rj(rj\u22122)]. Here, h1 = 1+tan2\u03b8j(rj \u22123)2/[rj(rj \u22122)], h2 = h2 3+h2 4tan4\u03b8j(rj\u2212 3)2/(rj \u22122)2, dh1/drj = \u2212\u03b8\u2032 jtan\u03b8j[(6\u2212rj)/rj +(rj \u22123)\u03b8\u2032 jtan\u03b8j], dh2/drj = h3(2\u2212sin2\u03b8j\u03b8\u2032 j)+ h4(rj \u22123)tan4\u03b8j[(rj \u22123){1 + (sin2\u03b8j + 4h4/sin2\u03b8j)\u03b8\u2032 j} + h4/(rj \u22122)]/(rj \u22122)2, h3 = (2rj \u2212 2 \u2212sin2\u03b8j), h4 = (rj \u22124 + sin2\u03b8j) and from di\ufb00erentiation of eq. (37), we get d\u03b8j/drj = \u03b8\u2032 j = \u2212tan\u03b8j(rj \u22123)/[rj(rj \u22122)]. 
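The streamline geometry above can be checked numerically. The sketch below is a minimal Python illustration (the function names and the jet-base footpoint values are mine, chosen to give roughly the $Z_\phi \simeq 13.3$ surface of Fig. 7); it evaluates $\theta_j(r_j)$ along a surface of constant von Zeipel parameter, equation (37), together with the analytic slope quoted below equation (43).

```python
import numpy as np

def von_zeipel(rb, theta_b):
    """Von Zeipel parameter fixed by the jet-base footpoint (eq. 37):
    Z_phi = r sin(theta) / sqrt(1 - 2/r), in units where the horizon is at r = 2."""
    return rb * np.sin(theta_b) / np.sqrt(1.0 - 2.0 / rb)

def theta_jet(rj, Zphi):
    """Polar angle of the constant-Z_phi streamline at radius rj,
    sin(theta_j) = Z_phi * sqrt(1 - 2/rj) / rj (cf. Section 3.3)."""
    return np.arcsin(np.clip(Zphi * np.sqrt(1.0 - 2.0 / rj) / rj, -1.0, 1.0))

def dtheta_drj(rj, Zphi):
    """Analytic streamline slope, d(theta_j)/d(r_j) = -tan(theta_j)(rj-3)/[rj(rj-2)]."""
    return -np.tan(theta_jet(rj, Zphi)) * (rj - 3.0) / (rj * (rj - 2.0))

# illustrative footpoint on the post-shock disc surface
Zp = von_zeipel(rb=13.5, theta_b=np.arcsin(12.3 / 13.5))   # Z_phi ~ 13.3
rj = np.geomspace(13.5, 1.0e4, 400)
theta = theta_jet(rj, Zp)   # the streamline bends towards the axis as rj grows
```

A finite-difference check of `theta_jet` against `dtheta_drj` reproduces the quoted slope.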
2.3.1 Jet sonic point

From the definitions, the jet critical point conditions are obtained from equations (42) and (43) as,
$\mathcal{N}_j = 0 \;\Rightarrow\; a_{jc}^{2} = \dfrac{1/[r_{jc}(r_{jc}-2)]}{\left[\frac{1}{A_{jc}}\frac{dA_{jc}}{dr_{jc}} - \frac{1}{h_p}\frac{dh_p}{dr_j}\right]}$,  (44)
and
$\mathcal{D}_j = 0 \;\Rightarrow\; M_{jc}^{2} = v_{jc}^{2}/a_{jc}^{2} = 1$,  (45)
where the subscript 'c' denotes flow values at the critical point. The velocity gradient at the critical point is obtained by l'Hospital's rule.

3 SOLUTION PROCEDURE

We first solve for the accretion solution; once it is obtained, we iteratively find the jet solution from it. Close to the horizon gravity dominates all other physical processes, so the infall time-scale of matter is shorter than the viscous or any other time-scale. In other words, very close to the horizon matter is almost in free fall and $E \simeq \mathcal{E}$. It may be remembered from Section 2.1 that $E$ is the generalized relativistic Bernoulli parameter in the presence of viscosity and $\mathcal{E}$ is the canonical relativistic Bernoulli parameter. In steady state, $\mathcal{E}$ is a constant of motion for inviscid flow and $E$ is a constant of motion for viscous flow. Therefore, at a distance $r_{\rm in} \rightarrow r_g$, $v_{\rm in} = \delta\sqrt{2/r_{\rm in}}$. Here, $r_g = 2r_s = 2GM_B/c^2$, $r_{\rm in} = 2.001\,r_s$ and $\delta < 1$. We start by assigning $\delta = 1$ in $v_{\rm in}$, and obtain $\Theta_{\rm in}$ and $L_0$. With these values, we integrate equations (26), (27) and (28) outwards. If the ensuing solution does not satisfy the critical point conditions (equations 29 and 30), we reduce $\delta$ and repeat the procedure till the accretion critical points are obtained, thereby fixing the value of $\delta$.

3.1 Method to find L0

We have provided four flow parameters ($E$, $\xi$, $\alpha$ and $\lambda_{\rm in}$ or $L_{\rm in}$) and, by using $v_{\rm in}$, we can calculate $\Theta_{\rm in}$ from the relativistic Bernoulli equation $\mathcal{E} = -hu_t$. Since we know $u_t\,[= -\sqrt{1-2/r}\,\gamma]$ from $v_{\rm in}$, $\lambda_{\rm in}$ and $E = \mathcal{E}$ at $r = r_{\rm in} = 2.001\,r_s$, the enthalpy $h$ of equation (7) can be recast as a cubic equation in $\Theta$,
$X_3\Theta^3 + X_2\Theta^2 + X_1\Theta + X_0 = 0$,  (46)
where $X_3 = 72\chi$, $X_2 = 3[16(\chi + 1) - 3\chi\tilde\tau X_c]$, $X_1 = 2[10 - 3\tilde\tau(X_c - 1)(\chi + 1)]$, $X_0 = -4\tilde\tau(X_c - 1)$ and $X_c = -\mathcal{E}/u_t$. Equation (46) has three real roots, but two are negative and only one is positive, so we use the positive root and denote it $\Theta_{\rm in}$. Now, $L_0$ can be calculated from equation (18) by assuming $E = \mathcal{E}$ at $r_{\rm in}$. Since we assume $E = \mathcal{E} = -hu_t$ close to the horizon, equation (18) at $r = r_{\rm in}$ gives $\gamma_\phi\,\exp(X_f) = 1$. This condition is written as,
$-\dfrac{1}{\gamma_\phi}\dfrac{d\gamma_\phi}{dr} = \left[\left(\dfrac{r-3}{r-2}\right)\dfrac{l^2}{r^3\gamma_v^2} - \dfrac{u^r(L-L_0)^2}{2\nu h r(r-2)}\right]$.  (47)
Simplifying the above equation with the help of equations (26) and (27), we get a quadratic equation in $L_0$, given by
$b_2L_0^2 + b_1L_0 + b_0 = 0$,  (48)
where $b_2 = u^r\left[\tilde\tau v v_\phi^2\wp/(4\Theta \mathcal{D} N\Gamma) + 1/h\right]/[2\nu r(r-2)]$, $b_1 = -2L_{\rm in}b_2 - a_1$ and $b_0 = b_2L_{\rm in}^2 + a_1L_{\rm in} + a_0$. Here, $a_1 = [u^r v_\phi^2(v - \wp/v)]/[\nu\gamma_v^2 l(1-2/r)\mathcal{D}]$, $a_0 = v v_\phi^2\left[-1 + (5r-8)\wp/2 + (r-3)\gamma_\phi^2\wp(v_\phi^2 + 1/v^2) + (r-2)(1-\wp/v^2)(1-2/\gamma_\phi^2)\right]/[r(r-2)\mathcal{D}]$, $\wp = 2a^2/(\Gamma + 1)$ and $\mathcal{D} = [v - \wp(v_\phi^2 v + 1/v)]$.
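The EoS quantities of equations (22)-(23) and the cubic (46) for $\Theta_{\rm in}$ lend themselves to direct numerical evaluation. The following minimal Python sketch uses my own function names; the cubic coefficients are copied verbatim from equation (46), and for the electron-proton case $\xi = 1$ they are consistent with the condition $f + 2\Theta = \tilde\tau X_c$ implied by the enthalpy of equation (7).

```python
import numpy as np

CHI = 9.1093837e-28 / 1.67262192e-24          # chi = m_e / m_p

def eos(theta, xi=1.0, chi=CHI):
    """Multi-species EoS of equations (22)-(23): returns f, the polytropic
    index N, adiabatic index Gamma, sound speed a (in units of c) and the
    specific enthalpy h = (f + 2*Theta) / tilde-tau."""
    tau = 2.0 - xi * (1.0 - 1.0 / chi)
    f = (2.0 - xi) * (1.0 + theta * (9.0 * theta + 3.0) / (3.0 * theta + 2.0)) \
      + xi * (1.0 / chi + theta * (9.0 * theta + 3.0 / chi) / (3.0 * theta + 2.0 / chi))
    # analytic d f / d Theta of the two bracketed terms above
    dfdth = (2.0 - xi) * (27.0 * theta**2 + 36.0 * theta + 6.0) / (3.0 * theta + 2.0)**2 \
          + xi * (27.0 * theta**2 + 36.0 * theta / chi + 6.0 / chi**2) / (3.0 * theta + 2.0 / chi)**2
    N = 0.5 * dfdth
    Gamma = 1.0 + 1.0 / N
    a = np.sqrt(2.0 * Gamma * theta / (f + 2.0 * theta))
    return f, N, Gamma, a, (f + 2.0 * theta) / tau

def theta_in_from_cubic(E, ut, xi=1.0, chi=CHI):
    """Positive root of equation (46).  E is the canonical Bernoulli parameter
    and ut the (negative) covariant time component of the four-velocity at
    r_in, ut = -sqrt(1 - 2/r) * gamma, so that Xc = -E/ut."""
    tau = 2.0 - xi * (1.0 - 1.0 / chi)
    Xc = -E / ut
    coeffs = [72.0 * chi,
              3.0 * (16.0 * (chi + 1.0) - 3.0 * chi * tau * Xc),
              2.0 * (10.0 - 3.0 * tau * (Xc - 1.0) * (chi + 1.0)),
              -4.0 * tau * (Xc - 1.0)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1.0e-8].real
    return real[real > 0.0][0]                # the single positive root
```

The quadratic (48) for $L_0$, whose coefficients are listed above, is handled in the same way; its root selection is described next.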
Equation (48) gives two real roots, one greater than $L_{\rm in}$ and the other smaller than $L_{\rm in}$. Since viscosity transports angular momentum outward, the second root, the one smaller than $L_{\rm in}$, is the physical solution. To summarize, we have obtained the asymptotic values of $v_{\rm in}$ and $\Theta_{\rm in}$ at $r_{\rm in} = 2.001$ and $L_0$ (or $\lambda_0$) on the horizon by using the three flow parameters $E$, $\alpha$ and $L_{\rm in}$, with $\xi$ fixing the EoS, so that we can integrate equations (26)-(28) simultaneously outwards from $r_{\rm in}$. It is to be noted that only the correct values of $v_{\rm in}$, $\Theta_{\rm in}$ and $L_0$ produce a transonic solution.

3.2 To find critical point and shock locations in disc

Initially, a tentative accretion solution is obtained without considering mass-loss from the disc. We obtain the transonic solution iteratively: for a given set of ($E$, $\alpha$, $L_{\rm in}$), there exists a unique set of $v_{\rm in}$, $\Theta_{\rm in}$ and $L_0$ which passes through a certain critical point ($r_c$). Once we obtain $r_c$, we integrate outwards to obtain the global solution. Gravity induces one sonic point or critical point, while rotation induces multiple sonic points. If the first sonic point obtained is close to the horizon, we call it the inner sonic point $r_{ci}$. If the transonic solution is monotonic, then there are no other sonic points. Once we get one sonic point, we continue to search for others. Up to three sonic points can be obtained, of which the inner ($r_{ci}$) and the outer ($r_{co}$) sonic points are X-type and are physical, since the flow actually passes through them. The middle sonic point is unphysical because the flow does not pass through it: $(dv/dr)_c$ there is complex and, for viscous fluid, the middle sonic point is of spiral type. For flows going through $r_{ci}$, we check the shock conditions (equations 35), initially assuming $\dot M_o = 0$, and compute the pre-shock flow variables (i.e. $v_-$, $a_-$, $L_-$). We integrate with $v_-$, $a_-$, $L_-$ along the supersonic branch and check whether the solution passes through the outer sonic point $r_{co}$. The jump location $r_{\rm sh}$ for which the supersonic branch starting with $v_-$, $a_-$, $L_-$ goes through $r_{co}$ is the shock location. When there is a shock, the entropy of the flow through $r_{co}$ is less than the entropy of the flow through $r_{ci}$, i.e. $\dot{\mathcal M}_o < \dot{\mathcal M}_i$.

3.3 To find jet critical point and mass outflow rate

While $E$ (or $\mathcal{E}$) is the constant of motion along the equatorial plane for the viscous (or inviscid) accretion solution, away from the equatorial plane the constant of motion is given by equation (39), which is constant along the jet streamline defined by equation (37). Numerical simulations show that the post-shock disc is the jet base (Molteni et al. 1996b; Das et al. 2014). They also show that the angular momentum at the top of the PSD (the base of the jet) is about 20-30 per cent less than on the equatorial plane, so without losing generality we take $\lambda_j = 2\lambda/3$ at the base and locate the jet base at $x_b = (r_{ci} + r_{\rm sh})/2$. We estimate $\Re$ at $x_b$ on the disc surface and the jet is launched with the same modified Bernoulli parameter, i.e. $\Re_j = \Re(x_b)$. The modified Bernoulli parameter ($\Re_j$) depends on the constants $n$ and $c_\phi$ apart from its local flow variables.
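A schematic of this root selection and of the shock search of Section 3.2 is sketched below in Python. The helpers `post_shock_branch`, `jump_conditions` and `passes_through_rco` are placeholders for the full integrator of equations (26)-(28) and the shock conditions (35); only the control flow is meant to be illustrative.

```python
import numpy as np

def physical_L0(b2, b1, b0, L_in):
    """Physical root of b2*L0**2 + b1*L0 + b0 = 0 (eq. 48): viscosity
    transports angular momentum outward, so the root smaller than L_in
    is the one retained."""
    roots = np.roots([b2, b1, b0])
    return roots[roots < L_in][0]

def find_shock(r_grid, post_shock_branch, jump_conditions, passes_through_rco,
               R_mdot=0.0):
    """Scan trial shock radii along the branch through r_ci (Section 3.2).
    For each r_sh the jump conditions (35) give the pre-shock state
    (v_-, a_-, L_-); r_sh is accepted if the supersonic branch integrated
    outward from that state passes through the outer sonic point r_co.
    R_mdot is the relative mass-loss rate (zero on the first pass)."""
    for r_sh in r_grid:
        pre = jump_conditions(post_shock_branch(r_sh), r_sh, R_mdot)
        if pre is not None and passes_through_rco(pre, r_sh):
            return r_sh
    return None          # no steady shock for these disc parameters
```

Once a shock is found, the jet is launched from the PSD with $\Re_j = \Re(x_b)$, and the constants $n$ and $c_\phi$ are tuned as described above.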
Interestingly, the entropy of the jet also depends on these two parameters. Keeping same \u211cj, but by changing n and c\u03c6, iteratively, we obtain the \u02d9 Mj which admits the transonic jet solutions, with the help of equations (44) and (45) for particular values of n > 0. Since only a fraction of matter escapes as jets, so \u02d9 Mj should be less than local disc entropy at xb but greater than the disc pre-shock entropy. Following the above constraint, c\u03c6 and n would be related by c\u03c6 = Zn \u03c6/\u03bbj. Once we know the jet solution it is easy to de\ufb01ne the relative mass out\ufb02ow rate, R \u02d9 m = \u02d9 M0 \u02d9 M\u2212 = 1 [ \u02d9 M+/ \u02d9 M0 + 1] . (49) The jet base cross-sectional area, perpendicular to tangent of the stream line at rj is, Aj = Ab \u0012 rj rb \u00132 sin\u03b8j, (50) where, Ab = A\u2032 bsin\u03b8b and A\u2032 b = 2\u03c0(r2 b0 \u2212r2 bi) are area along the accretion cylindrical radial coordinate and area along the spherical radial coordinate, respectively. Here, rb = p x2 b + h2 b, \u03b8b = sin\u22121(xb/rb), rbi = xbi/sin\u03b8b, rb0 = xb0/sin\u03b8b, xb = (rci + rsh)/2, xbi = rci and xb0 = rsh. Here, \u03b8j = sin\u22121(Z\u03c6 p 1 \u22122/rj/rj) and Z\u03c6 = rbsin\u03b8b/ p (1 \u22122/rb). Now the equation (49) with the help of equations (50), (40) and (14) can be written as, R \u02d9 m = 1 \u0002 (4\u03c0H+r+\u03c1+ur +)/(Ajb\u03c1jbup jb) + 1 \u0003 (51) = 1 \u0002 \u03a3(RAR\u039e)\u22121 + 1 \u0003, where \u03c1jb = \u03c1bexp(\u22127xb/(3hb))/h2 b, up jb = \u221agpp\u03b3vbvjb and Ajb = Absin\u03b8b are jet base density, four-velocity at jet base and jet base area, respectively. Moreover, RA = Ajb/(4\u03c0H+r+), R = (ur \u2212)/(ur +) the compression ratio, \u03a3 = \u03c1+/\u03c1\u2212o, the density jump across the accretion shock and \u039e = (\u03c1jbup jb)/(\u03c1\u2212ur \u2212) or the ratio of the relativistic mass \ufb02ux of the pre-shock accretion \ufb02ow and the jet base, respectively. It is to be noted that \u039e measures the upward thrust imparted by the shock through the compression ratio. Once the jet solution is obtained for a particular accretion shock solution, we compute the relative mass out\ufb02ow rate or R \u02d9 m, and feed it back to the shock conditions (equation 35) and retrace the steps mentioned in Sections 3.1 and 3.2 to \ufb01nd a new rsh. Then from this new rsh we \ufb01nd a new jet solution and new R \u02d9 m (Section 3.3). We continue these iterations till c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 19 Figure 2. Variation of accretion Mach number M (a), bulk velocity v (b), dimensionless temperature \u0398 (c), sound speed a (d), entropy accretion rate \u02d9 M (e), accretion adiabatic index \u0393 (f), generalized relativistic Bernoulli parameter E (g), speci\ufb01c angular momentum \u03bb (h) and bulk angular momentum L (i). The sonic point is indicated by the star mark in panel (a). The accretion disc parameters are E = 1.0005, L0 = 2.6, \u03b1 = 0.01 and \u03be = 1.0. the shock location converges and then we obtain a self-consistent accretion-ejection solution around BHs in full general relativistic regime. 4 RESULTS In this paper, we obtained jet solution from accretion solutions. In other words, we supplied accretion disc parameters E, \u03b1, Lin and \u03be to \ufb01x the EoS of the relativistic \ufb02ow, obtained accretion and jet solutions simultaneously. 
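Before turning to the results, the relative mass outflow rate of equation (51) and the accretion-ejection iteration just described can be summarized in a short sketch; `accretion_shock` and `jet_solution` again stand in for the full solvers and are not part of the text.

```python
def rel_mass_outflow(Sigma, R_comp, R_A, Xi):
    """Equation (51): R_mdot = 1 / [ Sigma * (R_A * R * Xi)**(-1) + 1 ], with
    Sigma = rho_+/rho_- (density jump across the shock), R_comp = u^r_-/u^r_+
    (compression ratio), R_A = A_jb / (4 pi H_+ r_+) and Xi the ratio of the
    jet-base to pre-shock relativistic mass flux."""
    return 1.0 / (Sigma / (R_A * R_comp * Xi) + 1.0)

def self_consistent_disc_jet(accretion_shock, jet_solution, tol=1e-3, itmax=50):
    """Iterate shock location and mass loss until r_sh converges (Section 3.3)."""
    R_mdot, r_old = 0.0, None
    for _ in range(itmax):
        r_sh = accretion_shock(R_mdot)                 # Sections 3.1-3.2 with mass loss
        Sigma, R_comp, R_A, Xi = jet_solution(r_sh)    # jet launched from the PSD
        R_mdot = rel_mass_outflow(Sigma, R_comp, R_A, Xi)
        if r_old is not None and abs(r_sh - r_old) < tol:
            break
        r_old = r_sh
    return r_sh, R_mdot
```

For the disc-jet parameters of Fig. 8, for example, the converged value quoted in the text is $R_{\dot m} = 0.053$.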
However, in the following subsection we will \ufb01rst present all possible accretion solution and then in the next subsection we will present the accretion-ejection solutions. The location of the outer boundary of the accretion disc is 105rg for totally sub-Keplerian disc and/or wherever the angular momentum distribution achieves the local Keplerian value. 4.1 In\ufb02ow solutions In Fig.(2), we plot the accretion solution for E = 1.0005, L0 = 2.6, \u03b1 = 0.01. We choose \u03be = 1.0, until speci\ufb01ed otherwise. Various \ufb02ow variables plotted are the Mach number M = v/a (a), v (b), \u0398 (c), a (d), \u02d9 M (e), \u0393 (f), E (g), \u03bb (h) and L (i). The disc parameters were such that it produces a single outer-type sonic point. While \u0393 varies from semi-relativistic c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f20 Chattopadhyay & Kumar Figure 3. Variation of accretion Mach number M in plot (a), bulk velocity v in plot (b), dimensionless temperature \u0398 in plot (c), local sound speed a in plot (d), entropy accretion rate \u02d9 M in plot (e), accretion adiabatic index \u0393 in plot (f), general relativistic Bernoulli parameter E in plot (g), speci\ufb01c angular momentum \u03bb in plot (h) and bulk angular momentum L in plot (i) are shown in this \ufb01gure. Here, vertical jump shows the location of shock, which is rs = 51.19 and the two star marks in panel (a) indicate the X-type sonic points. The accretion disc parameters are E = 1.0001, L0 = 2.91, \u03b1 = 0.01 and \u03be = 1.0. to relativistic values (1.437 < \u0393 < 1.59), the constant of motion E is indeed a constant. The entropy also increases due to viscous dissipation. And the angular momentum is transported outwards. In Fig.(3), we have shown typical shocked accretion solution and variation of various \ufb02ow quantities with radial distance, for a di\ufb00erent value of E (= 1.0001) and L0 (= 2.91) while keeping the viscosity and the nature of the \ufb02uid similar to the previous \ufb01gure. Since E is a constant of motion in the viscous relativistic disc, and L0 is a constant of integration, so changing these two disc parameters is equivalent to changing the inner boundary condition of the accreting \ufb02ow. It is to be noted that, the solution in Fig. (2) is similar to a Bondi type solution (i.e. low angular momentum \ufb02ow through an outer critical point rco; see Bondi 1952). So accretion \ufb02ow is not decidedly monotonic or shocked, it depends on the boundary condition of the \ufb02ow. In Fig. (4), we obtain a parameter space of E and L0 for \u03b1 = 0.01 and \u03be = 1, and demarcate the regions which will give transonic solutions with single sonic points, multiple sonic points and shocked solutions. For all E, L0 values in the domain ABD\u2032, angular momentum is low and all possible solutions in this domain will possess a single outer-type sonic point similar to Bondi \ufb02ow (typical Mach number variation: panel a). The region BGFB c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 21 Figure 4. Division of parameter space (E, L0) on the basis of number of critical points and corresponding solutions topologies [Mach number, M, versus radial distance, log(r) plots in panels a, b, c, d, e and f]. In this \ufb01gure viscosity parameter, \u03b1 = 0.01 and composition parameter, \u03be = 1.0. 
is with a bit more angular momentum and the inner sonic point (rci) appears, although the accreting matter still \ufb02ows through rco into the BH (typical solution: panel b). Since the entropy of rci is higher for these values of E and L0, so oscillating shock is a distinct possibility. Solutions in the domain GFHG admit steady-state shock in accretion solutions and thereby joining the solutions through outer and inner sonic points (typical solution: panel c). In the domain HFADEH, the angular momentum is much higher, multiple sonic points still exist, but the accreting matter prefers to \ufb02ow into the BH through rci because \u02d9 Mi > \u02d9 Mo (typical solution: panel d). For solutions from the region AEI, the angular momentum is so large that matter falls with very low in\ufb02ow velocity, and becomes transonic only close to the horizon, and therefore possess an inner-type sonic point only (typical solution: panel e). Solutions from the domain BDCB are bound through out and do not produce global transonic solutions (typical solution: panel f). The solid curves within panels (a) \u2014 (f) indicate physical solutions, which accreting matter actually follows. The dashed part of the solution indicates those which are viable solutions but matter do not choose. The dotted curves in the panels show also transonic solutions which have wind-type boundary conditions (low v close to horizon and high v at large distances). However, these so-called wind-type c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f22 Chattopadhyay & Kumar Figure 5. Variation of M with r for di\ufb00erent viscosity parameters marked in each panel. For all panels, E = 1.001, L0 = 2.85, and \u03be = 1.0. solutions should not be confused with proper wind or out\ufb02ow solutions, since these solutions are de\ufb01ned only on the equatorial plane. All possible accretion solutions can also be produced even if the viscosity is varied for a given value of E, L0 and \u03be. In Fig. (5a), we obtain a Bondi-type solution for a low-L0 and low-viscosity (\u03b1 = 0.001) solution. We know viscosity transports angular momentum outwards, but low \u03b1 means the angular momentum remains low at the outer edge too. Such low angular momentum does not produce a strong centrifugal barrier and therefore produces a shock-free Bondi-type solution with a single, outer-type sonic point. Keeping the same inner boundary condition, we increase the viscosity to \u03b1 = 0.01 and multiple sonic points appear in Fig. (5b). Higher viscosity for the same values of L0 implies higher angular momentum at larger distances. Gravity ensures a single sonic point; however, for higher angular momentum \ufb02ow, the e\ufb00ect of gravity is impeded by rotation at distances of few tens of rg, while gravity dominating at distances further away, and also very close to the horizon. This causes multiple sonic points to form. Increasing to \u03b1 = 0.015 and keeping the same inner boundary condition, steady accretion shock is obtained in Fig. (5c). Higher \u03b1 also ensures even higher \u03bb of the disc, thus enhancing the centrifugal barrier. This causes the supersonic matter to be slowed down and eventually forms a shock. For even higher viscosity \u03b1 = 0.02, the solution through the inner sonic point opens up as shown c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 23 Figure 6. Parameter space of E and \u03b1 for given values of L0 = 3 (L) and L0 = 2.8 (R). 
In the inset panels, solutions, i.e.M versus r, are plotted, corresponding to the \ufb02ow parameters (E and \u03b1) from various regions marked as a-e. In both the plots ABCDA is the region for multiple critical points. in Fig. (5d). Increasing the viscosity even further, monotonic accretion solution is obtained (Figs. 5e, f). If the angular momentum increases beyond a certain limit, then the accreting matter becomes rotation dominated, and becomes supersonic only very close to the horizon. Therefore, accreting matter does not pass through outer sonic point (if present), and falls on to the BH through the inner sonic point. Hence, there exist two critical \u03b1 for such boundary conditions, where the lower value of it would initiate the shock and the higher one will remove it. Such dependence of the nature of accretion solution on viscosity parameter have been studied in the pNp regime before (Chakrabarti 1996; Chattopadhyay & Das 2007; Kumar & Chattopadhyay 2013, 2014), but not in the GR regime. In Fig. (6 L), we plot the parameter space of E and \u03b1 for L0 = 3 and various regions in the parameter space are marked as a\u2014d and the typical solutions are plotted in the inset marked by the same alphabets. In Fig.(6R), we plot E and \u03b1 for L0 = 2.8 and various regions are marked as a\u2014e, and the corresponding solutions are plotted in inset panels. Therefore, parameter space depicted in Figs. (6 L & R) is analogous to the parameter space depicted in Fig. (4), which pans all possible accretion solutions. It may be noted that the solutions for \u03b1 = 0 which harbour shocks also exhibit steady shocks up to moderate levels of \u03b1, but solutions which were Bondi type to start with for \u03b1 = 0, generate a shock transition above a critical value of \u03b1. For these kind of solutions, one can identify two critical viscosity c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f24 Chattopadhyay & Kumar Figure 7. Typical accretion-jet \ufb02ow geometry for accretion disc parameters E = 1.0001, L0 = 2.92, \u03b1 = 0.01 and \u03be = 1. Here solid (red) curve represents disc-half height. Dot-dashed (blue) line is jet stream-line for von Zeipel parameter Z\u03c6 = 13.28, and dotted (blue) line is the inner and outer boundary of jet \ufb02ow cross-section. The jet sonic point is located at rjc. Arrows represent direction of bulk motion and the solid thick quarter of a circle represents the event horizon. parameters, one denotes the onset of steady shock, and the other which marks the limit above which no steady shock is obtained. 4.2 Out\ufb02ow solutions It has been shown in many simulations that the PSD drives bipolar out\ufb02ows (Molteni et al. 1996a,b; Das et al. 2014), and in theoretical studies of simultaneous accretion-ejection model in the pNp regime, the \ufb02ow geometry of the bipolar out\ufb02ow or jet was considered within the two surfaces, one, centrifugal barrier surface (pressure maxima) and the other, funnel wall (minima of the e\ufb00ective potential), both described in the o\ufb00-equatorial region (Chattopadhyay & Das 2007; Kumar et al. 2013, 2014). The problem is that both these surfaces depend primarily on the angular momentum of the \ufb02ow, and therefore the out\ufb02ow geometry depends poorly on the base of the jet or other factors of the \ufb02ow, which should not be the case. 
In order to circumvent this as well as in GR, we were forced by correct physics to obtain the local out\ufb02ow cross-section by identifying the relevant VZS, which is not bound by the limitations of pNp regime. In Fig. (7), we present the \ufb02ow geometry of accretion disc, as well as bipolar jets which are actually solved self-consistently for accretion disc parameters E = 1.0001, L0 = 2.92, \u03b1 = 0.01, where the disc half-height is plotted as c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 25 Figure 8. (a) Accretion Mach number M (solid) is plotted w.r .t r and jet Mach number Mj (dashed-dot) is plotted w. r. t zj; (b) variation jet 3-velocity vj; (c) jet Bernoulli parameter \u211cj; (d) jet dimensionless temperature \u0398j; (e) jet adiabatic index \u0393j and (f) jet entropy ( \u02d9 Mj) all are plotted w.r.t zj. Accretion disc parameters are E = 1.001, L0 = 2.906, \u03b1 = 0.01. The disc and jet \ufb02ow composition is described by \u03be = 1.0 and relative mass out\ufb02ow rate is R \u02d9 m = 0.053. solid curve, the jet streamline is represented by dot-dashed curve, while the dotted curve shows the jet \ufb02ow geometry. The arrows show the direction of the \ufb02ow. In Fig. (8a), we plot the combined accretion-jet solution, here the accretion Mach number M (solid) is plotted with respect to r, while the jet Mach number Mj is plotted w.r.t zj in the same panel. In Figs. (8b-f), we plot various jet variables, for e.g. the jet three-velocity vj (Fig. 8b), \u211cj (Fig. 8c), \u0398j (Fig. 8d), \u0393j (Fig. 8e), and \u02d9 Mj (Fig. 8f), for accretion disc parameters E = 1.001, L0 = 2.906 and \u03b1 = 0.01. The jet is followed up to zj = 104rg above the equatorial plane of the accretion disc. In Schwarzschild metric we do not \ufb01nd multiple sonic points in jets, and jets are transonic \ufb02ow also with only one sonic point. However, the jet achieves fairly high terminal speed (\u223c0.11c), inspite of being only thermally driven (i.e. vj increases as \u0398j decreases). The speci\ufb01c energy of the jet \u211cj and its entropy-accretion rate \u02d9 Mj are constants of motion since the jet is assumed to be adiabatic. We have shown in Figs. 2\u20144 that for given values of \u03b1, the nature of accretion solution depends on L0 and since accretion disc launches the jet, we would like to analyse how the jet depends on the inner boundary condition of the \ufb02ow. In Fig. (9a), we plot rsh as a function of L0; each curve is obtained for a given value of E = 1.00001 (solid, red), E = 1.0001 (dotted, blue) and 1.001 (dashed, black). The viscosity is given by \u03b1 = 0.01 and the disc-jet c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f26 Chattopadhyay & Kumar Figure 9. (a) Variation of shock location rsh with L0;(b) compression ratio R with rsh; (c) mass out\ufb02ow rate R \u02d9 m with rsh and (d) R \u02d9 m with R. Each curve is for E = 1.00001 (solid, red), 1.0001 (dotted, blue) and 1.001 (dashed, black). For all the cases, \u03be = 1.0 and \u03b1 = 0.01. is composed of electron-proton \ufb02uid. For a given value of L0, the rsh increases with increasing E if steady shock is allowed by the \ufb02ow, while for a given value of E, rsh increases with L0. The corresponding compression ratio R as a function of rsh is shown in Fig. (9b), while the relative mass out\ufb02ow rates R \u02d9 m (e.g. equation 49) are plotted with rsh in Fig. (9c). As the rsh increases, the compression ratio decreases (Fig. 9b) so the upward thrust becomes weaker. 
However, higher value of rsh also makes the surface area of PSD and therefore the base of the jet larger, so the net mass \ufb02owing out as jet should become more. These contradictory tendencies cause the mass out\ufb02ow rate to peak at some intermediate value of rsh, as well as that of R (Fig. 9d). In Figs. (10a-d), the converse dependence is studied where, rsh is plotted with E, where each curve represent L0 = 2.95 (solid, red), 2.94 (dotted, blue) and 2.93 (dashed, black). The composition of the \ufb02ow and the viscosity parameter is the same as in Fig. (9a-d). The shock location increases (Fig. 10a) with both L0 and E, as was observed in the previous \ufb01gure. As the shock increases, the compression ratio decreases (Fig. 10b). However, R \u02d9 m do not monotonically increase with decreasing rs, for the same reason as was discussed in the previous \ufb01gure. Interestingly, lower L0 produces lower values of rsh, but since these shocks are mainly rotation mediated, so lower L0 implies weaker shock, and therefore the compression ratio R (\u2261the amount of squeezing on the post-shock \ufb02ow) is weak too. Therefore, although c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 27 Figure 10. (a) Variation of rsh with E, (b) R with rsh, (c) R \u02d9 m with rsh and (d) R \u02d9 m with R. Each curve is plotted for L0 = 2.95 (solid, red), 2.94 (dotted, blue) and 2.93 (dashed, black). For all the curves, \u03be = 1.0 and \u03b1 = 0.01. the shock is located closer to the BH for lower L0, the R \u02d9 m is less even for the same values of rsh. 4.2.1 E\ufb00ect of viscosity, \u03b1 In Fig. (11a), we plot how rsh would behave with the change in \u03b1, for \ufb01xed inner boundary condition or for the same values of E and L0. We plot the corresponding R as a function of rsh (Fig. 11b) and R \u02d9 m with rsh (Fig. 11c). Each curve is for constant E = 1.0001 (solid, red), E = 1.00055 (dotted, blue) and E = 1.001 (dashed, black), where for all curves \u03be = 1.0, L0 = 2.94. And in Fig. (11d), we plot R (solid, red), \u03a3 (long dashed, magenta), \u039e (dotted, blue) and RA (dashed, black) for E = 1.0001 (solid, red curve of Fig. 11a\u2014c). Since E is a constant of motion for the accretion disc, and L0 is the bulk angular momentum on the BH horizon, so \ufb01xed values of E and L0 correspond to \ufb01xed inner boundary condition. For same L0 and E, as one increases \u03b1, then the angular momentum at the outer edge of the disc would be higher. This implies that in the PSD too, the angular momentum L or speci\ufb01c angular momentum \u03bb will be higher. Thus the shock location would increase with \u03b1. For a given E, the compression ratio decreases with increasing rsh. Since the accretion shock is rotation dominated, therefore, the rsh will increase for hotter \ufb02ow (\u2261higher E), but the compression ratio will decrease. Thus, for a given value of \u03b1, R \u02d9 m will be less for higher c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f28 Chattopadhyay & Kumar Figure 11. Variation of rsh with \u03b1 (a), R with rsh (b), R \u02d9 m with rsh (c). Each curve is for E = 1.0001 (solid, red), E = 1.00055 (dotted, blue) and E = 1.001 (dashed, black). In panel (d), we plot R (solid, red), \u03a3 (long dashed, magenta), \u039e (dotted, blue) RA (dashed, black) for E = 1.0001 (solid, red curve of panels a\u2014c). For all curves \u03be = 1.0, L0 = 2.94. Dependence of rsh on \u03b1, for \ufb01xed inner boundary condition. values of E. Fig. 
(11c) shows that the R \u02d9 m is low for high and low values of rsh and maximizes at some intermediate value. In Fig. (11d), we \ufb01nd out why the mass out\ufb02ow rate or R \u02d9 m has a non-uniform dependence on rsh. From equation (52), we know that R \u02d9 m increases with increasing RA, R and \u039e, but decreases with increasing \u03a3. So as the rsh increases (Fig. 11a), Fig. (11d) shows that R and \u039e decrease, which implies that the post-shock thrust which is responsible for driving the jet decreases which should decrease R \u02d9 m. However, RA, or the ratio between jet cross-sectional area and the PSD surface area, increases; therefore, this should increase R \u02d9 m. These two contradictory tendencies, make R \u02d9 m attain low values when the rsh is very close to horizon and when it is far away, but maximize for some intermediate values. Let us compare the \ufb02ow variables of accreting matter which starts with the same outer boundary condition. We plot and compare the three velocity v (Fig. 12a), sound speed a (Fig. 12b) and the bulk angular momentum L (Fig. 12c) of accretion \ufb02ows starting with the same outer boundary condition E = 1.0001 and \u03bbout = \u03bbK = 140.85 at the outer edge of the accretion disc rout = 19835.3. Each curve represents the solution for \u03b1 = 0.01 (solid, red), \u03b1 = 0.0105 (dotted, blue) and \u03b1 = 0.011 (dashed, black), and the net relative mass out\ufb02ow computed were R \u02d9 m = 0.047 (solid, red), R \u02d9 m = 0.059 (dotted, blue) and R \u02d9 m = 0.054 c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 29 Figure 12. Three-velocity v (a), local sound speed a (b) and bulk angular momentum L (c) of the accretion disc plotted with r. Each curve is for \u03b1 = 0.01 (solid, red), \u03b1 = 0.0105 (dotted, blue) and \u03b1 = 0.011 (dashed, black). For all the curves, the outer boundary is at rout = 19835.3rg, the corresponding speci\ufb01c angular-momentum is the Keplerian angular momentum at rout, i.e.\u03bbout = \u03bbK = 140.85 and the constant of motion for all the curves is E = 1.0001. Inset in panel (c) zooms on the L distribution around the location of the shock. (dashed, black). As \u03b1 is increased, the net angular momentum of the inner disc decreases, and since the shock is rotation driven, lower angular momentum causes rsh to decrease (see the inset of Fig. 12c). Although it is interesting to show how \u03b1 will a\ufb00ect rsh, for the same inner boundary condition of the disc. But the physics of accretion disc is controlled by outer boundary condition, so it will be more physical to study how the disc solution, as well as, the ensuing jet solutions depend on \u03b1 when the outer boundary condition of the accretion disc is kept the same. In Fig. (13a), rsh is plotted with \u03b1 for E = 1.0001. The outer boundary of the disc is rout = 16809.016 for all solutions for which the curve is plotted. The speci\ufb01c angular momentum at rout is the local Keplerian value \u03bbout = \u03bbK = 129.662. Since E is a constant of motion for all the solutions presented, and \u03bbout is also same for all the disc solutions, so comparing solutions for same E and \u03bbout is equivalent to comparing solutions starting with the same outer boundary. Viscosity transports angular momentum outwards; therefore, for a given value of E, the shock moves closer to the BH as viscosity is increased. So rsh decreases with increasing \u03b1. 
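The quoted outer-boundary values can be recovered if one assumes the standard Schwarzschild circular-orbit (Keplerian) specific angular momentum, $\lambda_{\rm K}(r) = r^{3/2}/(r-2)$ in $G = M_B = c = 1$ units. This formula is my assumption and is not written explicitly in the text, but it reproduces the boundary values used for Figs. 12 and 13:

```python
def lambda_keplerian(r):
    """Assumed Schwarzschild Keplerian specific angular momentum,
    lambda_K = r**1.5 / (r - 2), with the horizon at r = 2."""
    return r**1.5 / (r - 2.0)

print(lambda_keplerian(19835.3))    # ~140.85  (outer boundary of Fig. 12)
print(lambda_keplerian(16809.016))  # ~129.66  (outer boundary of Fig. 13)
```

With the outer boundary fixed in this way, the corresponding shock properties follow.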
The corresponding dependence of R (solid, red), \u039e (dotted, blue), \u03a3 (long dashed, magenta) and RA (dashed, black) with \u03b1 has been plotted in Fig. 13b. The shock becomes stronger as it moves towards the horizon therefore R increases, but the enhanced c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \f30 Chattopadhyay & Kumar Figure 13. (a) Variation of rsh with \u03b1, (b) R (solid, red), \u039e (dotted, blue), \u03a3 (long dashed, magenta) and RA (dashed, black) with \u03b1, (c) R \u02d9 m with \u03b1 and (d) vj\u221ewith \u03b1. The outer boundary is at rout = 16809.016, and corresponding speci\ufb01c angular momentum is the Keplerian angular momentum at rout, i.e. \u03bbout = \u03bbK = 129.662. For all the curves, E = 1.0001, \u03be = 1.0. Dependence of rsh on \u03b1 for \ufb01xed outer boundary. compression also squeezes more matter along the jet channel so \u039e increases too. However, \u03a3 increases and RA decreases which should decrease the R \u02d9 m. Such antagonistic tendencies make the R \u02d9 m to peak at some intermediate \u03b1, as is depicted in Fig. (13c). In Fig. 13d, the jet terminal speed vj\u221ewith \u03b1 is plotted. Since R increases, so the upward thrust also increases, making jets stronger, even if R \u02d9 m decrease. It means we can have stronger but lighter jets. 4.2.2 E\ufb00ect of composition, \u03be In all the previous \ufb01gures, we dealt with \ufb02uid composed of only electrons and protons. Chattopadhyay & Ryu (2009) showed that if the proton proportion is reduced (where the charge balance is maintained by proportionate increase of positrons), the \ufb02ow becomes thermally more relativistic because the decrease in thermal energy is compensated by decrease in inertia of the \ufb02ow. Fig. (14a) shows that rsh increases with \u03be, where each curve is for E = 1.0001 (solid, red), 1.00055 (dotted, blue) and 1.001 (dashed, black), and L0 = 3.0 and \u03b1 = 0.01. Higher rsh implies lower R (Fig. 14b); as a result, R \u02d9 m decrease with increasing rsh, although, due to the related increase in the jet base and other factors [dealt with related to Figs. 11(a)-(d)], R \u02d9 m peaks at some intermediate value (Fig. 14c). In Fig. (14d), the terminal speed of the jet vj\u221ewith rsh is plotted. As the shock recedes, the speed of the jet decreases, c \u20dd0000 RAS, MNRAS 000, 000\u2013000 \fGeneral relativistic viscous accretion disc 31 Figure 14. Dependence of rsh on composition parameter \u03be (a), R with \u03be (b), R \u02d9 m on rsh (c) and vj\u221ewith rsh (d). Each plot corresponds to E = 1.0001 (solid, red), 1.00055 (dotted, blue) and 1.001 (dashed, black). For all the curves L0 = 3.0, \u03b1 = 0.01. even where R \u02d9 m is increasing. But if the accretion disc \ufb02ow is more energetic, the jet terminal speed is higher, although R \u02d9 m is lower. In Figs. 15(a) and b), we plot the shock parameter space in the E \u2212L0 space for various combinations of viscosity and composition parameters like \u03be, \u03b1 = 1.0, 0.01 (solid), 1.0, 0.02 (dotted), 0.27, 0.01 (dashed) and 0.27, 0.02 (long dashed) in Fig. 15(a) and for (\u03be, \u03b1) = 0.25, 0.01 (solid), 0.25, 0.02 (dotted) and 0.0625, 0.01 (dashed) in Fig. 15(b). The shaded region indicates the steady shock region of the parameter space when mass-loss is considered. Similar to the inviscid study (Chattopadhyay & Chakrabarti 2011), the shock parameter space moves to the higher energy direction of the parameter space till \u03be is reduced from 1 to 0.27. 
As \u03be is reduced further, the shock parameter space moves towards the low-energy side. The reduction of steady shock parameter space due to mass-loss actually indicates that shock in accretion actually exists in a wide range, but only as a time-dependent one. 5 DISCUSSIONS AND" + }, + { + "url": "http://arxiv.org/abs/1204.1133v1", + "title": "Simulation of radiation driven wind from disc galaxies", + "abstract": "We present 2-D hydrodynamic simulation of rotating galactic winds driven by\nradiation. We study the structure and dynamics of the cool and/or warm\ncomponent($T \\simeq 10^4$ K) which is mixed with dust. We have taken into\naccount the total gravity of a galactic system that consists of a disc, a bulge\nand a dark matter halo. We find that the combined effect of gravity and\nradiation pressure from a realistic disc drives the gas away to a distance of\n$\\sim 5$ kpc in $\\sim 37$ Myr for typical galactic parameters. The outflow\nspeed increases rapidly with the disc Eddington parameter $\\Gamma_0(=\\kappa\nI/(2 c G \\Sigma)$) for $\\Gamma_0 \\ge 1.5$. We find that the rotation speed of\nthe outflowing gas is $\\lesssim 100$ km s$^{-1}$. The wind is confined in a\ncone which mostly consist of low angular momentum gas lifted from the central\nregion.", + "authors": "Indranil Chattopadhyay, Mahavir Sharma, Biman B. Nath, Dongsu Ryu", + "published": "2012-04-05", + "updated": "2012-04-05", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA", + "astro-ph.CO" + ], + "main_content": "INTRODUCTION Many galaxies are observed to have moving extraplanar gas, generally termed as galactic superwinds (see Veilleux et al. 2005 for a recent review). Initial observations showed the H\u03b1 emitting gas above the plane of M82 (e.g. Lynds & Sandage 1963). The advent of X-ray astronomy established yet another phase of galactic out\ufb02ows, namely the hot plasma, emitting X-rays in the temperature range 0.3\u20132 keV (Strickland et al. 2004). Also recent observations have revealed the existence of molecular gas in these out\ufb02ows (Veilleux et al. 2009, walter et al. 2002). Earlier observations were limited to local dwarf starburst galaxies that showed these winds. However, in recent years, the observations of out\ufb02ows in Ultra Luminous Infra-red Galaxies (ULIGs) have extended the range of galaxies in which out\ufb02ows are found (Martin 2005, Rupke et al. 2005, Rupke et al. 2002). On the theoretical side, there have been speculations on winds from starburst galaxies (Burke 1968, Mathews & Baker 1971, Johnson & Axford 1971). In these models the large scale winds are a consequence of energy injection by multiple supernovae (Larson 1974, Chevalier & Clegg 1985, Dekel & Silk 1986, Heckman 2002). In the context of the multiphase structure of the out\ufb02ows, the results of these theoretical models are more relevant for the X-ray emitting hot wind. On the other hand, observations of the cold out\u22c6indra@aries.res.in \u2020 mahavir@rri.res.in \u2021 biman@rri.res.in \ufb02ows are better explaind by the radiation driving (Murray et al. 2005, Martin 2005). If only Thompson scattering is considered, then radiation from galaxies does not seem to be a reasonable wind driving candidate because opacities would be small; however one should consider that these winds are heavily enriched. Murray et al. 2005 proposed a wind driving mechanism based on the scattering of dust-grains by the photons from the galaxy (see also Chiao & Wickramasinghe 1972; Davies et al. 1998). 
This mechanism can be quite e\ufb00ective since the opacities in dust-photon scattering can be of the order of hundred cm2g\u22121 and gas in turn, being coupled with the dust, is driven out of the galaxy if the galaxy posseses a certain critical luminosity. Bianchi & Ferrara (2005) argued that dust grains ejected from galaxies by radiation pressure can enrich the intergalactic medium. Nath & Silk (2009) then described a model of out\ufb02ows with radiation and thermal pressure, in the context of out\ufb02ows from Lyman break galaxies observed by Shapely et al. (2005). Murray et al. (2010) have also described a similar model in which radiation pressure is important for the \ufb01rst few million years of the starburst phase, after which SN heated hot gas pushes the out\ufb02owing material. Sharma & Nath (2011) have also shown that radiation pressure is important for out\ufb02ows from high mass galaxies with a large SFR (with vc \u2a7e200 km s\u22121, SFR \u2a7e100 M\u2299yr\u22121), particularly in ULIGs. In this paper, we study the e\ufb00ect of radiation pressure in driving cold and/or warm gas out\ufb02ows from disc galaxies with numerical simulations. Recently, Sharma et al. (2011) calculated the terminal speed of such a \ufb02ow along the pole of \f2 I. Chattopadhyay, M. Sharma, B. B. Nath and D. Ryu a disc galaxy, taking into account the gravity of disc, stellar bulge and dark matter halo. They determined the minimum luminosity (or, equivalently, the maximum mass-to-light ratio of the disc) to drive a wind, and also showed that the terminal speed lies in the range of 2\u20134 Vc (where Vc is the rotation speed of the disc galaxy), consistent with observations (Rupke et al. 2005, Martin 2005), and the ansatz used by numerical simulations in order to explain the metal enrichment of the IGM (Oppenheimer et al. 2006). We investigate further the physical processes for a radiation driven wind. Rotation is yet another aspect of the winds that we address in our simulation. As the wind material is lifted from a rotating disc, it should be rotating inherently which is seen in observations as well (Greve 2004, Westmoquette et al. 2009, Sofue et al. 1992, Seaquist & Clark 2001, Walter et al. 2002). Previous simulations of galactic out\ufb02ows have considered the driving force of a hot ISM energized by the e\ufb00ects of supernovae (Kohji & Ikeuchi 1988; Tomisaka & Bregman 1993; Mac Low & Ferrara 1999; Suchkov et al. 1994, 1996 ; Strickland & Stevens 2000; Fragile et al. 2004; Cooper et al. 2008, Fujita et al. 2009). However the detailed physics of a radiatively driven galactic out\ufb02ow is yet to be studied with a simulation. In this work, we study the dynamics of an irradiated gas above an axisymmetric disc galaxy by using hydrodynamical simulation. Recently Hopkins et al. (2011) have explored the relative roles of radiation and supernovae heating in galactic out\ufb02ows, and studied the feedback on the star formation history of the galaxy. Our goal here is di\ufb00erent in the sense that we focus on the structure and dynamics, particularly the e\ufb00ect of rotation, of the wind. In order to disentangle the e\ufb00ects of various processes involved, we intentionally keep the physical model simple. 
For example, we begin with a constant density and surface brightness disk, then study the e\ufb00ect of a radial density and radiation pro\ufb01le, and \ufb01nally introduce rotation of the disk, in order to understand the e\ufb00ect of each detail separately, instead of performing one single simulation with many details put together. 2 GRAVITATIONAL AND RADIATION FIELDS The main driving force is radiation force and the containing force is due to gravity. We take the system to be composed of three components disc, bulge & dark matter halo. We describe the forces due to these three constituents below. We take a thin galactic disc and a spherical bulge. All these forces are given in cylindrical coordinates because we solve the \ufb02uid equations in cylindrical geometry. 2.1 Gravitational \ufb01eld from the disc Consider a thin axisymmetric disc in r\u03c6 plane with surface mass density \u03a3(r). As derived in the Appendix, the vertical and radial components of gravity due to the disc material at a point Q above the disc with coordinates (r, 0, z), are given by fdisc,z = Z \u03c6\u2032 Z r\u2032 d\u03c6\u2032 dr\u2032 zG\u03a3(r\u2032) r\u2032 [r2 + z2 + r\u20322 \u22122rr\u2032cos\u03c6\u2032]3/2 0.0 0.2 0.4 0.6 0.8 1.0 r (10 kpc) 0.2 0.4 0.6 0.8 1.0 z (10 kpc) 0.2 0.4 0.6 0.8 1.0 r (10 kpc) (a) (b) Figure 1. Magnitude of gravitational force of the (a) uniform disc (UD) (b) exponential disc (ED) in colours with direction in arrows. Values are in the units of G\u03a30(= 4.5 \u00d7 10\u22129) dyne. fdisc,r = Z \u03c6\u2032 Z r\u2032 d\u03c6\u2032 dr\u2032 (r \u2212r\u2032cos\u03c6\u2032) G\u03a3(r\u2032) r\u2032 [r2 + z2 + r\u20322 \u22122rr\u2032cos\u03c6\u2032]3/2 (1) The azimuthal coordinate of Q is taken to be zero, because of axisymmetry. The integration limit for \u03c6\u2032 = 0 to 2\u03c0. We consider two types of disc in our simulations, one with uniform surface mass density and radius rd (UD), and another with an exponential distribution of surface mass density (ED) with a scale radius rs. The surface mass density of uniform surface density disc (i. e.,UD) is \u03a3 = \u03a30 = constant (2) and in the case of a disc with exponentially falling density distribution (ED) \u03a3 = \u02dc \u03a30exp(\u2212r\u2032/rs), rs \u2261scale length . (3) In case of UD (eqn 2), the integration limit would be r\u2032 = 0 to rd, while for ED (eqn 3), the limits of the integration run from r\u2032 = 0 to \u221e. Numerically this means, we integrate up to a large number, increasing which will not change the gravitational \ufb01eld by any signi\ufb01cant amount. We have chosen the \u03a3s in such a way that the total disc mass remains same for the UD or ED. Therefore, \u02dc \u03a30 = \u03a30 2 \u0012rd rs \u00132 . (4) In Figure 1, we plot the contours of gravitational \ufb01eld strength and its direction vectors due to a UD (left panel), and that for the ED (right panel). Interestingly, discs with same mass but di\ufb00erent surface density distributions, produces di\ufb00erent gravitational \ufb01elds. For the UD the gravitational \ufb01eld is not spherical and the gravitational acceleration is maximum at the edge of the disc. On the other hand, the \ufb01eld due to ED is closer to spherical con\ufb01guration with the maximum being closer to the centre of the disc and falling o\ufb00outwards. 2.2 Bulge and the dark matter halo We consider a bulge with a spherical mass distribution and constant density, with mass Mb and radius rb. 
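Before moving on to the bulge and halo, the disc-gravity integrals of equation (1) can be evaluated numerically. A minimal Python sketch is given below (the function names are mine); for a uniform disc carrying $M_d = 10^{11}\,{\rm M}_\odot$ inside $r_d = 10$ kpc it returns accelerations of order $G\Sigma_0 \approx 4.4\times10^{-9}$ cm s$^{-2}$, consistent with the $G\Sigma_0 (= 4.5\times10^{-9})$ unit quoted in the caption of Figure 1.

```python
import numpy as np
from scipy.integrate import dblquad

G, Msun, kpc = 6.674e-8, 1.989e33, 3.086e21        # cgs

def disc_gravity(r, z, sigma, r_max):
    """Vertical and radial disc gravity at the field point (r, 0, z), eq. (1);
    sigma(rp) is the surface density profile, r_max the outer radial limit
    (r_d for the UD, a large cut-off for the ED)."""
    def d3(rp, phi):
        return (r**2 + z**2 + rp**2 - 2.0 * r * rp * np.cos(phi))**1.5

    fz, _ = dblquad(lambda rp, phi: z * G * sigma(rp) * rp / d3(rp, phi),
                    0.0, 2.0 * np.pi, 0.0, r_max)
    fr, _ = dblquad(lambda rp, phi: (r - rp * np.cos(phi)) * G * sigma(rp) * rp / d3(rp, phi),
                    0.0, 2.0 * np.pi, 0.0, r_max)
    return fr, fz

# uniform disc (UD): Sigma_0 = M_d / (pi r_d^2)
rd = 10.0 * kpc
Sigma0 = 1.0e11 * Msun / (np.pi * rd**2)
fr, fz = disc_gravity(r=5.0 * kpc, z=2.0 * kpc, sigma=lambda rp: Sigma0, r_max=rd)
```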
The radiation force due to the bulge is negligible as it mostly hosts the old stars. The gravitational force of the bulge is given by \fSimulation of radiation driven wind from disc galaxies 3 fbulge,r = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \u2212GMbr r3 b if R < rb \u2212GMbr R3 otherwise (5) fbulge,z = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \u2212GMbz r3 b , if R < rb \u2212GMbz R3 , otherwise (6) where R = \u221a r2 + z2. We consider a NFW halo with a scaling with disc mass as given by Mo, Mao and White (1998; hereafter referred to as MMW98) where the total halo mass is \u223c20 times the total disc mass. The mass of an NFW halo has the following functional dependence on R M(R) = 4\u03c0\u03c1crit\u03b40R3 s \u0014 ln (1 + cx) \u2212 cx 1 + cx \u0015 (7) where x = R R200 , c = R200 Rs , \u03b40 = 200 3 c3 ln(1+c)\u2212c/(1+c). Here \u03c1crit is the critical density of the universe at present epoch, Rs is scale radius of NFW halo and R200 is the limiting radius of virialized halo within which the average density is 200\u03c1crit. This mass distribution corresponds to the following potential, \u03a6NF W = \u22124\u03c0\u03c1crit\u03b40R3 s \u0014 ln (1 + R/Rs)/R \u0015 (8) The gravitational force due to the dark matter halo is therefore given by, fhalo,r = \u2212\u2202\u03a6NF W \u2202r = \u2212r GM(R) (r2 + z2)3/2 ; fhalo,z = \u2212\u2202\u03a6NF W \u2202z = \u2212z GM(R) (r2 + z2)3/2 . (9) The net gravitational acceleration is therefore given by Fgrav,r = fdisc,r + fbulge,r + fhalo,r = G\u03a30fg,r(r, z) (10) Fgrav,z = fdisc,z + fbulge,z + fhalo,z = G\u03a30fg,z(r, z) . The gravitational \ufb01eld for both bulge and halo is spherical in nature, although, that due to the bulge maximises at rb. However, the net gravitational \ufb01eld will depend on the relative strength of the three components. In Figure 2 (left panel), we plot the contours of total gravitational \ufb01eld strength due to the bulge, the halo and an UD. The nonspherical nature of the gravitational \ufb01eld is evident. A more interesting feature appears due to the bulge gravity. The net gravitational intensity maximizes in a spherical shell of radius rb(= 0.2Lref; see section \u00a73.1). Therefore, there is a possibility of piling up of out\ufb02owing matter at around a height z \u223crb near the axis. In the right panel of Figure (2), we present the contours of net gravitational \ufb01eld due to an embedded exponential disc within a halo and a bulge. 2.3 Radiation from disc and the Eddington factor We treat the force due to radiation pressure as it interacts with charged dust particles that are assumed to be strongly coupled to gas by Coulomb interactions and which drags the gas with it. The strength of the interaction is parameterized by the dust opacity \u03ba which has the units cm2 gm\u22121. Gravitational pull on the \ufb01eld point Q(R, Z) due to the disc point P(r\u2032, \u03c6\u2032, 0) is along the direction \u2212 \u2212 \u2192 QP (see 0.0 0.2 0.4 0.6 0.8 1.0 r (10 kpc) 0.2 0.4 0.6 0.8 1.0 z (10 kpc) 0.2 0.4 0.6 0.8 1.0 r (10 kpc) (a) (b) Figure 2. Total gravitational force of the (a) uniform disc (b) exponential disc in colors with direction in arrows. The values are in the same units as in Figure 1. appendix). The di\ufb00erence in computing the radiation force arises due to the fact that one needs to account for the projection of the intensity at Q (for radiation force from more complicated disc, see Chattopadhyay 2005). 
For a disc with surface brightness I(r), we can \ufb01nd the radiation force by replacing G\u03a3(r\u2032) in eqn 1 by I(r\u2032)\u03ba/c, and take into account the projection factor z/ p r2 + z2 + r\u20322 \u22122rr\u2032 cos \u03c6\u2032. Similar to the disc gravity, the net radiation force \u2212 \u2192 F rad at any point will have the radial component (Frad,r) and the axial component (Frad,z) and are given by, Frad,r(r, z) = \u03baz c Z Z d\u03c6\u2032dr\u2032I(r\u2032)(r \u2212r\u2032cos\u03c6\u2032) r\u2032 [r2 + z2 + r\u20322 \u22122rr\u2032cos\u03c6\u2032]2 (11) = \u03baI0 c fr,r(r, z) Frad,z(r, z) = \u03baz2 c Z Z d\u03c6\u2032dr\u2032I(r\u2032)r\u2032 [r2 + z2 + r\u20322 \u22122rr\u2032cos\u03c6\u2032]2 (12) = \u03baI0 c fr,z(r, z) Since we have two models for disc gravity, we also consider two forms of disc surface brightness. I = I0 = constant, for UD (13) and I = \u02dc I0exp(\u2212r\u2032/rs) , for ED (14) If the two disc types are to be compared for identical luminosity, then one \ufb01nds \u02dc I0 = I0 2 \u0012rd rs \u00132 . (15) The disc Eddington factor is de\ufb01ned as the ratio of the radiation force and the gravitational force (MQT05). In spherical geometry this factor is generally constant at each point because both gravity and radiation has an inverse square dependence on distance. Although in the case of a disc, the two forces have di\ufb00erent behaviour, we can still de\ufb01ne an Eddington parameter as \u0393 = Frad Fgrav . In this case this parameter depends on the coordinates r, \u03c6, z of the position under consideration. We can however de\ufb01ne a parameter whose value is the Eddington factor at the centre of the disc, i.e., \u03930 = \u03baI 2cG\u03a3. (16) \f4 I. Chattopadhyay, M. Sharma, B. B. Nath and D. Ryu 0.0 0.2 0.4 0.6 0.8 1.0 r (10 kpc) 0.2 0.4 0.6 0.8 1.0 z (10 kpc) 0.2 0.4 0.6 0.8 1.0 r (10 kpc) (a) (b) Figure 3. Magnitude of force due to radiation from the (a) uniform disc, (b) exponential disc for \u03930 = 0.5, with arrows for direction. If \u03930 = 1, then the radiation and gravity of the disc will cancel each other at the centre of the disc. We will parameterize our results in terms of \u03930. Therefore, the components of the net external force due to gravity and radiation is given by Rr = Fgrav,r \u2212Frad,r = G\u03a30 (fg,r \u22122\u03930fr,r) (17) Rz = Fgrav,z \u2212Frad,z = G\u03a30 (fg,z \u22122\u03930fr,z) In Figure 3, we plot the contours of radiative acceleration from an UD, and the same from an ED. There is a signi\ufb01cant di\ufb00erence between the radiation \ufb01eld above an ED and that above an UD. While the radiation \ufb01eld from an UD is largely vertical for small radii, but starts to diverge at the disc edge, at r \u223crd. One can therefore expect that for high enough I, the wind trajectory will diverge. In case of ED, the radiation \ufb01eld above the inner portion of the disc is strong and decreases rapidly towards the outer disc. 3 NUMERICAL METHOD The hydrodynamic equations have been solved in this paper by using the TVD (i. e.,Total Variation Diminishing) code, which has been quite exhaustively used in cosmological and accretion disc simulations (see, Ryu et al. 1993, Kang et al. 1994, Ryu et al. 1995, Molteni et al. 1996) and is based on a scheme originally developed by Harten (1983). We have solved the equations in cylindrical geometry in view of the axial symmetry of the problem. 
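Before describing the numerical scheme, the radiation-force integrals (11)-(12) and the central disc Eddington parameter of equation (16) can be written down in the same way as the disc gravity above; a minimal sketch with my own function names:

```python
import numpy as np
from scipy.integrate import dblquad

c_light = 2.998e10                                  # cm/s

def disc_radiation_force(r, z, intensity, kappa, r_max):
    """Radial and vertical radiation force per unit mass above the disc,
    equations (11)-(12); intensity(rp) is the surface brightness profile
    and kappa the dust opacity in cm^2 g^-1."""
    def d4(rp, phi):
        return (r**2 + z**2 + rp**2 - 2.0 * r * rp * np.cos(phi))**2

    Ir, _ = dblquad(lambda rp, phi: (r - rp * np.cos(phi)) * intensity(rp) * rp / d4(rp, phi),
                    0.0, 2.0 * np.pi, 0.0, r_max)
    Iz, _ = dblquad(lambda rp, phi: intensity(rp) * rp / d4(rp, phi),
                    0.0, 2.0 * np.pi, 0.0, r_max)
    return kappa * z * Ir / c_light, kappa * z * z * Iz / c_light

def gamma0(kappa, I0, Sigma0, G=6.674e-8):
    """Disc Eddington parameter at the disc centre, eq. (16): kappa*I/(2cG*Sigma)."""
    return kappa * I0 / (2.0 * c_light * G * Sigma0)
```

The hydrodynamic update itself is handled by the TVD scheme described next.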
This code is based on an explicit, second order accurate scheme, and is obtained by \ufb01rst modifying the \ufb02ux function and then applying a nonoscillatory \ufb01rst order accurate scheme to obtain a resulting second order accuracy (see, Harten 1983 and Ryu et al. 1993 for details). The equations of motion which are being solved numerically in the non-dimensional form is given by \u2202q \u2202t + 1 r \u2202(rF1) \u2202r + \u2202F2 \u2202r + \u2202G \u2202z = S (18) where, the state vector is q = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed \u03c1 \u03c1 vr \u03c1 v\u03c6 \u03c1 vz E \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8, (19) and the \ufb02uxes are F1 = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed \u03c1 vr \u03c1 v2 r \u03c1vrv\u03c6 \u03c1vzvr (E + p)vr \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8, F2 = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed 0 p 0 0 0 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8, G = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ed \u03c1vz \u03c1vrvz \u03c1v\u03c6vz \u03c1v2 z + p (E + p)vz \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f8(20) and the source function is given by S = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 \u03c1v2 \u03c6 r \u2212\u03c1Rr \u2212 \u03c1vrv\u03c6 r \u2212\u03c1Rz \u2212\u03c1[vrRr + vzRz) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb (21) 3.1 Initial and boundary conditions We do not include the disc in our simulations and only consider the e\ufb00ect of disc radiation and total gravity on the gas being injected from the disc. We choose the disc mass to be Md = 1011 M\u2299and assume it to be the unit of mass (i. e.,Mref). The unit of length (i. e.,Lref) and velocity (i. e.,vref) are rd = 10 kpc and vc = 200 km s\u22121, respectively. Therefore, the unit of time is tref = 48.8Myr. We introduce a normalization parameter \u03be such that GMd/v2 c = \u03berd, which turns out to be \u03be = 1.08. Hence the unit of density is \u03c1ref = 6.77\u00d710\u221224g cm\u22123 (\u223c4mp cm\u22123). All the \ufb02ow variables have been made non-dimensional by the choice of unit system mentioned above. It is important to choose an appropriate initial condition to study the relevant physical phenomenon. We note that previous simulations of galactic out\ufb02ows have considered a variety of gravitational potential and initial ISM con\ufb01gurations. For example, Cooper et al. (2008) considered the potential of a spherical stellar bulge and an analytical expression for disc potential, but no dark matter halo, and an ISM that is strati\ufb01ed in z-direction with an e\ufb00ective sound speed that is \u223c5 times the normal gas sound speed. Suchkov et al. (1994) considered the potential of a spherical bulge and a dark matter halo and an initial ISM that is spherically strati\ufb01ed. Fragile et al. (2004) considered a spherical halo and a z-strati\ufb01ed ISM. However, in a recent simulation of out\ufb02ows driven by supernovae from disc galaxies, Dubois & Teyssier (2008) found that the out\ufb02owing gas has to contend with infalling material from halo, which inhibits the out\ufb02ow for a few Gyr. Fujita et al. (2004) also studied out\ufb02ows from pre-formed disc galaxies in the presence of a cosmological infall of matter. We choose a z-strati\ufb01ed gas to \ufb01ll the simulation box, with a scale height of 100 pc. For the M2 and M3 case (of exponential disc), we also assume a radial pro\ufb01le for the initial gas, with a scale length of 5 kpc. For the M3 case, we further assume this gas to rotate with v\u03c6 decreasing with a scale height of 5 kpc. 
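The unit system of Section 3.1 follows directly from $M_{\rm ref} = 10^{11}\,{\rm M}_\odot$, $L_{\rm ref} = 10$ kpc and $v_{\rm ref} = 200$ km s$^{-1}$; the short sketch below reproduces the quoted normalizations.

```python
G, Msun, kpc, Myr = 6.674e-8, 1.989e33, 3.086e21, 3.156e13   # cgs

M_ref = 1.0e11 * Msun          # unit of mass (disc mass)
L_ref = 10.0 * kpc             # unit of length (r_d)
v_ref = 200.0e5                # unit of velocity (v_c, in cm/s)

t_ref   = L_ref / v_ref                     # ~48.8 Myr
rho_ref = M_ref / L_ref**3                  # ~6.8e-24 g cm^-3 (~4 m_p per cc)
xi_norm = G * M_ref / (v_ref**2 * L_ref)    # normalization parameter xi, ~1.08

print(t_ref / Myr, rho_ref, xi_norm)
```

In these units the initial stratified gas has a vertical scale height of 100 pc near the disc, a radial scale length of 5 kpc for the M2 and M3 cases, and an M3 rotation velocity that falls off with a 5 kpc vertical scale height.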
These values are consistent with the observations of Dickey & Lockman (1990) and Savage et al. (1997) for the warm neutral gas (T \u223c104 K) in Milky Way. We note that although the scale height for the warm neutral gas in our Galaxy is \u223c400 pc at the solar vicinity, this is expected to be smaller in the central region because of strong gravity due to bulge. The density of the gas just above the disc is assumed to be 0.1 particles /cc (0.025 in simulation units). Furthermore, the adiabatic index of the gas is 5/3 and \fSimulation of radiation driven wind from disc galaxies 5 0 0.2 0.4 0.6 0.8 1 0 0.5 1 1.5 2 r (10 kpc) v\u03c6 Total Halo Bulge Disc Used in this work Figure 4. Rotation curves corresponding to the gravitational \ufb01elds of an exponential disc, bulge and halo are shown here in the units of vref [= 200 km s\u22121], along with the total rotation curve. The approximation used in our simulation is shown by thick red line. the gas is assumed initially to be at the same temperature corresponding to an initial sound speed cs(ini) = 0.1vref, a value which is consistent with the values in our Galaxy for the warm ionized gas with sound speed \u223c18 km s\u22121. Our computation domain is rd \u00d7 rd in the r \u2212z plane, with a resolution 512 \u00d7 512 cells. The size of individual computational cell is \u223c20 pc. We have imposed re\ufb02ective boundary condition around the axis and zero rotational velocity on the axis. Continuous boundary conditions are imposed at r = rd and z = rd. The lower boundary is slightly above the galactic disc with an o\ufb00set z0 = 0.01. We impose \ufb01xed boundary condition at lower z boundary. The velocity of the injected matter is vz(r, z0) = v0 = 10\u22125vref, and its density is given by, \u03c1(r, z0) = \u03c1z0, for UD (22) = \u03c1z0exp \u0012 \u2212r rs \u0013 , for ED . The density of the injected matter at the base \u03c1z0 = 0.025 (corresponding to 0.1 protons per cc). For the case of exponential disc with rotation (M3), we assume for the injected matter to have an angular momentum corresponding to an equilibrium rotation pro\ufb01le. We show in Figure 4 the rotation curves at z = 0 for all components (disc, bulge and halo) separately and the total rotation curve. We use the following approximation (shown by thick red line in Figure 4) which matches the total rotation curve, v\u03c6(r, z0) = 1.6 vc [1 \u2212exp(\u2212r/0.15rd)] . (23) We assume a bulge of mass Mb = 0.1Mref and radius rb = 0.2Lref. The scale radius for NFW halo (Rs) is determined for a halo mass Mh = 20Md, as prescribed by MMW98. The corresponding disc scale radius is found to be rs \u223c5.8 kpc, again using MMW98 prescriptions. Therefore we set the disc scale length for the ED case to be rs \u223c0.58Lref . The above initial conditions have been chosen to satisfy Table 1. Models. Model name \u03930 v\u03c6 Disc type M1 2.0 0.0 UD M2 2.0 0.0 ED M3 2.0 1.0 ED the following requirements in order to sustain a radiatively driven wind as simulated here. (i) The strong coupling between dust grains and gas particles require that there are of order \u223cmd/mp number of collisions between protons and dust grains of mass md \u223c10\u221214 g, for size a \u223c0.1 \u00b5m with density \u223c3g cm\u22123. To ensure suf\ufb01cient number of collisions, the number density of gas particles should be n \u2a7emd mp 1 \u03c0a2 1 Lref \u223c10\u22123 cm\u22123, for Lref = 10 kpc. 
(ii) The time scale for radiative cooling of the gas, assumed to be at T \u223c104 K, is tcool \u223c1.5kT n\u039b , where \u039b \u223c10\u221223 erg cm3 s\u22121 (Sutherland & Dopita 1993; Table 6) for solar metallicity. The typical density \ufb01lling up the wind cone in the realistic case (M3) is \u223c10\u22123\u201310\u22124 cm\u22123, which gives tcool \u223c8\u201380 Myr and the dynamical time scale of the wind is tref \u223c50 Myr. Hence radiative cooling is marginally important and we will address the issue of radiative cooling in a future paper. (iii) Radiative transfer e\ufb00ects are negligible since the total opacity along a vertical column of length Lref is \u03ba(nmp)Lref \u223c0.003, for n \u223c10\u22123 cm\u22123 and \u03ba \u223c100 cm2 g\u22121. (iv) The mediation of the radiation force by dust grains also implies that the gas cannot be too hot for the dust grains to be sputtered. The sputtering radius of grains embedded in even in a hot gas of temperature T\u223c105 K is \u223c0.05(n/0.1 /cc) \u00b5m in a time scale of 100 Myr (Tielens et al. 1994), and this e\ufb00ect is not important for the temperature and density considered here. 3.2 Simulation set up We present 3 models with parameters listed in the Table 1. The initial condition for all the models are described in \u00a73.1. The boundary condition is essentially same, except that the mass \ufb02ux into the computational domain from the lower z boundary depends on the type of disc. As has been mentioned in section 3.1, we keep the velocity of injected matter very low, vz(r, z0) = vz(ini) = 10\u22125vref, so that it does not a\ufb00ect the dynamics. The three models have been constructed by a combination of di\ufb00erent values of three parameters \u03930, v\u03c6 and the distribution of the density in the disc. Model M3 has been run for di\ufb00erent values of \u03930, to ascertain the e\ufb00ect of radiation. 4 RESULTS In Figure 5, we present the model M1 for a constant surface density disc (UD). The density contour and the velocity vec\f6 I. Chattopadhyay, M. Sharma, B. B. Nath and D. Ryu Figure 5. M1 : Logarithmic density contours for radiation driven wind from UD for four snapshots running up to t = 98 Myr, with velocity vectors shown with arrows. Densities are colour-coded according to the computational unit of density, 6.7 \u00d7 10\u221224 g cm\u22123 \u223c4mp cm\u22123. tors for the wind are shown in four snapshots in Figure (5) upto a time t = 98 Myr (corresponding to t = 2 in computational time units). There are a few aspects of the gaseous \ufb02ow that we should note here. Firstly, the disc and the out\ufb02owing gas in this case has no rotation (v\u03c6 = 0). In the absence of the centrifugal force due to rotation which might have reduced the radial gravitational force, there is a net radial force driving the gas inward. At the same time, the radiation force, here characterized by \u03930 = 2, propels the gas upward (the radial component of radiation being weak). The net result after a few Myr is that the gas in the region near the pole moves in the positive z direction, and there is a density enhancement inside a cone around the pole, away from which the density and velocities decrease. Also, because of the strong gravity of the bulge, the gas tends to get trapped inside the bulge region, and even the gas at larger r tends to get dragged towards the axis. This region pu\ufb00s due to accumulation of matter. Ultimately the radiative force drives matter outwards in the form of a plume. 
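A short sketch re-deriving the order-of-magnitude requirements (i)-(iii) listed in section 3.1 above, using the representative grain, cooling and density values quoted in the text.

```python
import numpy as np

# constants (cgs)
k, mp = 1.381e-16, 1.673e-24
kpc, Myr = 3.086e21, 3.156e13
Lref = 10.0*kpc

# (i) dust-gas coupling: n >~ (m_d/m_p) / (pi a^2 L_ref)
md, a = 1e-14, 1e-5                      # grain mass [g] and radius (0.1 micron) [cm]
n_min = (md/mp)/(np.pi*a**2*Lref)
print("n_min ~ %.1e cm^-3" % n_min)      # ~1e-3 cm^-3

# (ii) radiative cooling time t_cool ~ 1.5 k T / (n Lambda), vs t_ref ~ 50 Myr
T, Lam = 1e4, 1e-23                      # K, erg cm^3 s^-1 (Sutherland & Dopita 1993)
for n in (1e-3, 1e-4):
    print("n=%.0e cm^-3: t_cool ~ %.0f Myr" % (n, 1.5*k*T/(n*Lam)/Myr))

# (iii) vertical optical depth tau ~ kappa * (n m_p) * L_ref
kappa, n = 100.0, 1e-3                   # cm^2 g^-1, cm^-3
print("tau ~ %.3f" % (kappa*n*mp*Lref))  # same order as the ~0.003 quoted, optically thin
```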
Next, we change the disc mass distribution and simulate the case of wind driven out of an exponential disc (ED). We show the results in Figure 6. Since both gravity and radiation forces in this case of exponential disc are quasispherical in nature, therefore in the \ufb01nal snapshot the \ufb02ow appears to follow almost radial streamlines. Although in the vicinity of the disc, the injected matter still falls towards the axis, but this is not seen at large height as was seen in the previous case of M1. This makes the wind cone of rising gas more diverging than in the case of UD (M1). Figure 6. M2 : Logarithmic density contours for radiation driven wind from ED for four snapshots running up to t = 98 Myr, with velocity vectors shown with arrows. 4.1 Rotating wind from exponential disc The direction of the \ufb02uid \ufb02ow in M1 and M2 is by and large towards the axis, and this \ufb02ow is mitigated in the presence of rotation in the disc and injected gas. In the next model M3, we consider rotating matter being injected into the computational domain and which follows a v\u03c6 distribution given by Eq. (23). This is reasonable to assume since the disc from which the wind is supposed to blow, is itself rotating. In M3, we simulate rotating gas being injected above a ED and being driven by a radiation force of \u03930 = 2. We present nine snapshots of the M3 case in Figure 7. The \ufb01rst six snapshots of Figure 7 show the essential dynamics of the out\ufb02owing gas. The fast rotating matter from the outer disc is driven outward because the radial gravity component is balanced by rotation. Near the central region, rotation is small and also the radial force components are small. Therefore the gas is mostly driven vertically. The injected gas reaches a vertical height of \u223c5 kpc in a time scale of \u223c37 Myr (t=0.75). The \ufb02ow reaches a steady state after \u223c60 Myr (t=1.25). In the steady state we \ufb01nd a rotating and mildly divergent wind. We show the azimuthal velocity contours in Figure 8 in colour for the fully developed wind (last snapshot in M3), and superpose on it the contour lines of \u03c1. The density contours clearly show a conical structure for out\ufb02owing gas. The rotation speed of the gas peaks at the periphery of the cone, and is of order \u223c50\u2013100 km s\u22121. Compared to the disc rotation speed, the rotation speed of the wind region is somewhat smaller. In other words, we \ufb01nd the wind mostly consisting of low-angular momentum gas lifted from the disc. We plot the velocity of gas close to the axis in Figure 9 for di\ufb00erent times in this model (M3), using v(0, z) \u223c vz(0+, z). The velocity pro\ufb01le in the snapshots at earlier \fSimulation of radiation driven wind from disc galaxies 7 Figure 7. M3: Contours of log10(\u03c1) and v-\ufb01eld of radiation driven wind with \u03930 = 2.0 from an ED. t = 2 corresponds to 98 Myr. Figure 8. The rotation velocity v\u03c6 for the case M3 at a time of 98 Myr is shown in colours. Contour lines of log10(\u03c1) are plotted over it. Figure 9. The axial velocity vz(0+, z) with z at di\ufb00erent time steps for the model M3. t = 2.0 corresponds to a time of 98Myr. \f8 I. Chattopadhyay, M. Sharma, B. B. Nath and D. Ryu Figure 10. The axial velocity vz(0+, 10kpc) in simulation units vref = 200 km s\u22121 with \u03930, at a time t \u223c102 Myr. time \ufb02uctuates at di\ufb00erent height, but becomes steady after t \u2a7e1.5, as does the density pro\ufb01le. 
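For reference, the lower-boundary injection profiles used in M3, the density of eq (22) and the rotation law of eq (23), can be tabulated as in the sketch below (simulation units; the 512-zone sampling mirrors the grid of section 3.1 but is otherwise illustrative).

```python
import numpy as np

# lower-z boundary condition (z = z0) in simulation units (r_d = v_c = 1)
rd, rs = 1.0, 0.58            # disc radius and ED scale length (Sec. 3.1)
rho_z0 = 0.025                # ~0.1 protons per cc
vz_inj = 1e-5                 # injection velocity v_z(r, z0) = 1e-5 v_ref

r = np.linspace(0.0, rd, 512) # one boundary row of the 512x512 grid

rho_inj_UD = rho_z0*np.ones_like(r)             # eq (22), uniform disc
rho_inj_ED = rho_z0*np.exp(-r/rs)               # eq (22), exponential disc
vphi_inj   = 1.6*(1.0 - np.exp(-r/(0.15*rd)))   # eq (23), in units of v_c

# example: injected mass flux (rho*v_z) and rotation speed at r = 0.5 r_d for the ED run
i = np.argmin(np.abs(r - 0.5))
print(rho_inj_ED[i]*vz_inj, vphi_inj[i])
```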
We have run this particular case of ED with rotation (model M3) for di\ufb00erent values of \u03930. In order to illustrate the results of these runs, we plot the z-component of velocity (vz(0+, 10 kpc)) at 10 kpc and at simulation time, t = 2 as a function of \u03930 in Figure 10. We \ufb01nd that signi\ufb01cant wind velocities are obtained for \u03930 \u22731.5 and wind velocities appear to rise linearly with \u03930 after this critical value is acheived. Sharma et al. (2011) found this critical value to be \u03930 \u223c2 for a constant density disc and wind launched above the bulge. For the realistic case of an exponential disc, we \ufb01nd in the present simulation the critical value to be somewhat smaller than but close to the analytical result. The important point is that the critical \u03930 is not unity. This is because the parameter \u03930 is not a true Eddington parameter since it is de\ufb01ned in terms of disc gravity and radiation, whereas halo and bulge also contribute to gravity. 5 DISCUSSIONS Our simulation di\ufb00ers from earlier works (e.g. Suchkov et.al. 1994) mainly in that we speci\ufb01cally target warm out\ufb02ows and the driving force is radiation pressure. Most of the previous simulations of galactic wind have used energy injected from supernovae blasts as a driving force. However, with the ideas presented in Murray et al. (2005), which worked out the case of radiation pressure in a spherical symmetric setup, it beomes important to study the physics of this model in an axisymmetric set up, as has been done analytically by Sharma et al. (2011) (see also, Zhang & Thompson 2010). Also we have tried to capture all features of a typical disc galaxy like a bulge and a dark matter halo, and a rotating disc. Recent analytical works (Sharma & Nath 2011) and simulations (Hopkins et al. 2011) have shown that out\ufb02ows from massive galaxies (Mhalo \u2a7e1012 M\u2299) have di\ufb00erent characteristics than those from low mass galaxies. Out\ufb02ows from massive galaxies are mostly driven by radiation pressure and the fraction of cold gas in the halos of massive galaxies is large (van de Voort & Schaye 2011). Our simulations presented here addresses these out\ufb02ows in particular. We have parameterized our simulation runs with the disc Eddington factor \u03930, and it is important to know the corresponding luminosity for a typical disc galaxy, or the equivalent star formation rate. For a typical opacity of a dust and gas mixture (\u03ba \u223c200 cm2 g\u22121) (Draine 2011), the correspondig mass-to-light ratio requirement for \u03930 \u22731.5 is that M/L \u2a7d0.03. Sharma et al. (2011) showed that for the case of an instantaneous star formation, \u03930 \u22732 is possible for an initial period of \u223c10 Myr after the starburst. However for a continuous star formation, which is more realistic for disc galaxies, Sharma & Nath (2011) found that only ultra luminous infrared galaxies (ULIGs), with star formation rate larger than \u223c100 M\u2299yr\u22121 and which are also massive, are suitable candidates for such large values of \u03930, and for radiatively driven winds. The results presented in the previous sections show that the out\ufb02owing gas within the central region of a few kpc tends to stay close to the pole, and does not move outwards because of its low angular momentum. This makes the out\ufb02ow somewhat collimated. Although out\ufb02ows driven by SN heated hot wind also produces a conical structure (e.g., Fragile et al. 
2004) emanating from a breakout point of the SN remnants, there is a qualitative di\ufb00erence between this case and that of radiatively driven winds as presented in our simulations. While it is the pressure of the hot gas that expands gradually as it comes out of a strati\ufb01ed atmosphere, in the case of a radiation driven wind, it is the combination of mostly the lack of rotation and almost vertical radiation driving force in the central region that produce the collimation e\ufb00ect. We also note that the conical structure of rotation in the out\ufb02owing gas is similar to the case of out\ufb02ow in M82 (Greve 2004), where one observes a diverging and rotating periphery of conical out\ufb02ow. We have not considered radiative cooling in our simulations, since for typical density in the wind the radiative cooling time is shorter or comparable than the dynamical time. However, there are regions of higher density close to the base and radiative cooling can be important there. We will address this point in a future paper. From our results of the exponential and rotating disc model, we \ufb01nd the wind comprising of low-angular momentum gas lifted from the disc. It is interesting to note that recent simulations of supernovae driven winds have also claimed a similar result (Governato et al. 2010). Such loss of low angular momentum gas from the disc may have important implication for the formation and evolution of the bulge, since the bulge population is de\ufb01cient in stars with low speci\ufb01c angular momentum. Binney, Gerhardt & Silk (2001) have speculated that out\ufb02ows from disc that preferentially removes low angular momentum material may resolve some discrepancies between observed properties of disc and results of numerical simulations. As a caveat, we should \ufb01nally note that the scope and predictions of our simulation is limited by the simple model of disc radiation adoped here. In reality, radiation from disks \fSimulation of radiation driven wind from disc galaxies 9 is likely to be con\ufb01ned in the vicinity of star clusters, and not spread throughout the disk as we have assumed here. This is likely to increase the e\ufb03cacy of radiation pressure, but which is not possible within the scope of an axisymmetric simulation. 6 SUMMARY We have presented the results of hydrodynamical (Eulerian) simulations of radiation driven winds from disc galaxies. After studying the cases of winds from a constant surface density disc and exponential disc without rotation, we have studied a rotating out\ufb02ow originating from an exponential disc with rotation. We \ufb01nd that the out\ufb02ow speed increases rapidly with the disc Eddington parameter \u03930 = \u03baI/(2cG\u03a3) for \u03930 \u2a7e1.5, consistent with theoretical expectations. The density structure of the out\ufb02ow has a conical appearance, and most of the ou\ufb02owing gas consists of low angular momentum gas. We thank Yuri Shchekinov for constructive comments and critical reading of the manuscript. IC acknowledges the hospitality of the Astronomy and Astrophysics Group of Raman Research Institute, where the present work was conceived. DR was supported by National Research Foundation of Korea through grant 2007-0093860." 
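As a quick consistency check on the numbers quoted in the Discussion and Summary, the critical Eddington factor Gamma_0 of about 1.5 can be translated into the disc mass-to-light limit sketched below; the assumption that the disc radiates the surface brightness I from each face (so that L = 2 I A) sets the exact prefactor and is ours.

```python
# Mass-to-light ratio implied by the critical Eddington factor Gamma_0 >~ 1.5,
# using Gamma_0 = kappa*I/(2 c G Sigma) (eq. 16).  If each disc face radiates a
# flux I, then L = 2*I*Area and M = Sigma*Area, so M/L = Sigma/(2I).
c, G       = 3.0e10, 6.674e-8            # cgs
Msun, Lsun = 1.989e33, 3.828e33          # g, erg/s
kappa      = 200.0                       # cm^2 g^-1, dust+gas mixture (Draine 2011)
Gamma_crit = 1.5

ML_max_cgs   = kappa/(4.0*c*G*Gamma_crit)          # g s erg^-1
ML_max_solar = ML_max_cgs/(Msun/Lsun)
print("M/L <~ %.3f (solar units)" % ML_max_solar)  # ~0.03, as quoted in the Discussion
```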
+ }, + { + "url": "http://arxiv.org/abs/0812.2607v1", + "title": "Effects of Fluid Composition on Spherical Flows around Black Holes", + "abstract": "Steady, spherically symmetric, adiabatic accretion and wind flows around\nnon-rotating black holes were studied for fully ionized, multi-component\nfluids, which are described by a relativistic equation of state (EoS). We\nshowed that the polytropic index depends on the temperature as well as on the\ncomposition of fluids, so the composition is important to the solutions of the\nflows. We demonstrated that fluids with different composition can produce\ndramatically different solutions, even if they have the same sonic point, or\nthey start with the same specific energy or the same temperature. Then, we\npointed that the Coulomb relaxation times can be longer than the dynamical time\nin the problem considered here, and discussed the implication.", + "authors": "Indranil Chattopadhyay, Dongsu Ryu", + "published": "2008-12-14", + "updated": "2008-12-14", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "Introduction It is generally inferred from observations that the matter falling onto black holes is of very high temperature, both in microquasars (Corbel et al. 2003) as well as in AGNs (R\u00b4 oza\u00b4 nska & Czerny 2000). The electron temperature around 109 K and/or the proton temperature around 1012 K or more are accepted as typical values within few tens of the Schwarzschild radius, rs, of the central black holes. Moreover, the general theory of relativity 1ARIES, Manora Peak, Nainital-263129, Uttaranchal, India: indra@aries.ernet.in 2Department of Astronomy and Space Science, Chungnam National University, Daejeon 305-764, South Korea: ryu@canopus.cnu.ac.kr *Corresponding Author \f\u2013 2 \u2013 demands that the matter crosses the black hole horizon with the speed of light (c). In other words, close to black holes, the matter is relativistic in terms of its bulk speed and/or its temperature. On the other hand, at large distances away from black holes, the matter should be non-relativistic. It is also inferred from observations that the astrophysical jets around black hole candidates have relativistic speeds (Biretta et al. 2003). Since the jets originate from the accreting matter very close to black holes, their base could be very hot. At a few hundred Schwarzschild radii above the disc plane, they can expand to very low temperatures but very high speeds (Lorentz factor \u03b3 \u2273a few). And as the fast moving matter of the jets hits the ambient medium and drastically slows down to form shocks and hot spots, once again the thermal energy increases to relativistic values though the bulk velocity becomes small. Relativistic \ufb02ows are inferred for gamma-ray bursts (GRBs) too. In the so-called collapsar model scenario (Woosley 1993), the collimated bipolar out\ufb02ows emerge from deep inside collapsars and propagate into the interstellar medium, producing GRBs and afterglows. In such model, these collimated out\ufb02ows are supposed to achieve Lorentz factors \u03b3 \u2273100. It is clear in the above examples that as a \ufb02uid \ufb02ows onto a black hole or away from it, there are one or more transitions from the non-relativistic regime to the relativistic one or vice-versa. 
It has been shown by quite a few authors that to describe such trans-relativistic \ufb02uid, the equation of state (EoS) with a \ufb01xed adiabatic index \u0393 (= cp/cv, the ratio of speci\ufb01c heats) is inadequate and the relativistically correct EoS (Chandrasekhar 1938; Synge 1957) should be used (e.g., Taub 1948; Mignone et al. 2005; Ryu et al. 2006). A \ufb02uid is said to be thermally relativistic, if its thermal energy is comparable to or greater than its rest mass energy, i.e., if kT \u2273mc2. The thermally non-relativistic regime is kT \u226amc2. Here, T is the temperature, k is the Boltzmann constant, and m is the mass of the particles that constitute the \ufb02uid. So it is not just the temperature that determines a \ufb02uid to be thermally relativistic, but it is the ratio, T/m, that determines it. Therefore, together with the temperature, the composition of the \ufb02uid (i.e., either the \ufb02uid is composed of electron-positron pairs, or electrons and protons, or some other combinations) will determine whether the \ufb02uid is in the thermally relativistic regime or not. The study of relativistic \ufb02ows around compact objects including black holes was started by Michel (1972). It was basically recasting the transonic accretion and wind solutions around Newtonian objects obtained by Bondi (1952) into the framework of the general theory of relativity. Since then, a number of authors have addressed the problem of relativistic \ufb02ows around black holes, each focusing on its various aspects (e.g., Blumenthal & Mathews 1976; Ferrari 1985; Chakrabarti 1996; Das 2001, 2002; Meliani et al. 2004; Barai et al. 2006; Fukumura & Kazanas 2007; Mandal et al. 2007). Barring a few exceptions (e.g., Blumenthal & Mathews \f\u2013 3 \u2013 1976; Meliani et al. 2004), most of these studies used the EoS with a \ufb01xed \u0393, which, as we have noted, is incapable of describing a \ufb02uid from in\ufb01nity to the horizon. Blumenthal & Mathews (1976) for the \ufb01rst time calculated the spherical accretion and wind solutions around Schwarzschild black holes, while using an approximate EoS for the single-component relativistic \ufb02uid (Mathews 1971). Meliani et al. (2004) modi\ufb01ed the EoS used by Blumenthal & Mathews (1976) to obtain thermally driven spherical winds with relativistic terminal speeds. However, there has been no extensive study of the e\ufb00ects of \ufb02uid composition on the solutions of transonic \ufb02ows around black holes. We in this paper investigate the e\ufb00ects. The paper is organized as follows. In the next section, we present the governing equations including the EoS. In section 3, we present the sonic point properties. In section 4, we present the accretion and wind solutions. In section 5, we discuss the validity of our relativistic EoS. Discussion and concluding remarks are presented in the last section. 2. Assumptions and Equations To ensure that the e\ufb00ects of \ufb02uid composition are clearly presented, we keep our model of accretion and wind as simple as possible. We consider adiabatic, spherical \ufb02ows onto Schwarzschild black holes. The space time is described by the Schwarzschild metric ds2 = \u2212 \u0012 1 \u22122GMB c2r \u0013 c2dt2 + \u0012 1 \u22122GMB c2r \u0013\u22121 dr2 + r2d\u03b82 + r2 sin2 \u03b8d\u03c62, (1) where r, \u03b8, \u03c6 are the usual spherical coordinates, t is the time, and MB is the mass of the central black hole. 
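For reference, the two rest-mass temperature thresholds invoked above, kT = m_e c^2 and kT = m_p c^2, which bracket the trans-relativistic regime, are evaluated in the trivial sketch below; they correspond to the ~10^9 K electron and ~10^12-10^13 K proton temperatures mentioned in the Introduction.

```python
k  = 1.381e-16        # erg/K
me = 9.109e-28        # g
mp = 1.673e-24        # g
c  = 2.998e10         # cm/s

T_e = me*c**2/k       # ~5.9e9 K : electrons become thermally relativistic
T_p = mp*c**2/k       # ~1.1e13 K: protons become thermally relativistic
print("kT = m_e c^2 at T ~ %.1e K;  kT = m_p c^2 at T ~ %.1e K" % (T_e, T_p))
```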
Although AGNs and micro-quasars are in general powered by rotating \ufb02ows, studies of spherical \ufb02ows are not entirely of pedagogic interest. For instance, such studies can throw light on the nature of accretions onto isolated black holes in low angular momentum and cold clouds. In addition, hot spherical \ufb02ows may mimic accretions very close to black holes, where the accreting matter is expected to be of low angular momentum, hot, and with strong advection. Non-conservative processes and magnetic \ufb01elds are ignored, too. The energy-momentum tensor of a relativistic \ufb02uid is given by T \u00b5\u03bd = (e + p)u\u00b5u\u03bd + pg\u00b5\u03bd, (2) where e and p are the energy density and gas pressure respectively, all measured in the local frame. The four-velocities are represented by u\u00b5. The equations governing \ufb02uid dynamics are given by T \u00b5\u03bd ;\u03bd = 0 and (nu\u03bd);\u03bd = 0, (3) where n is the particle number density of the \ufb02uid measured in the local frame. \f\u2013 4 \u2013 2.1. EoS for single-component \ufb02uids Equation (3) is essentially \ufb01ve independent equations, while the number of variables are six. This anomaly in \ufb02uid dynamics is resolved by a closure relation between e, p and n (or the mass density \u03c1 = nm), and this relation is known as the EoS. The EoS for singlecomponent relativistic \ufb02uids, which are in thermal equilibrium, has been known for a while, and is given by e + p \u03c1c2 = K3(\u03c1c2/p) K2(\u03c1c2/p) (4a) (Chandrasekhar 1938; Synge 1957). Here, K2 and K3 are the modi\ufb01ed Bessel functions of the second kind of order two and three, respectively. Owing to simplicity, however, the most commonly used EoS has been the one with a \ufb01xed \u0393, which is written as e = \u03c1c2 + p \u0393 \u22121. (4b) As noted in Introduction, this EoS, which admits the superluminal sound speed, is not applicable to all ranges of temperature (Mignone et al. 2005; Ryu et al. 2006). Here, we adopt an approximate EoS e = \u03c1c2 + p \u00129p + 3\u03c1c2 3p + 2\u03c1c2 \u0013 , (4c) which reproduces very closely the relativistically correct EoS in equation (4a), better than the one proposed by Mathews (1971) p = \u03c1c2 3 \u0012 e \u03c1c2 \u2212\u03c1c2 e \u0013 . (4d) A comparative study of various EoS\u2019s for single-component relativistic \ufb02uids was presented in Ryu et al. (2006). 2.2. EoS for multi-component \ufb02uids We consider \ufb02uids which are composed of electrons, positrons, and protons. Then the number density is given by n = \u03a3ni = ne\u2212+ ne+ + np+, (5a) where ne\u2212, ne+, and np+ are the electron, positron, and proton number densities, respectively. Charge neutrality demands that ne\u2212= ne+ + np+ \u21d2 n = 2ne\u2212 and ne+ = ne\u2212(1 \u2212\u03be), (5b) \f\u2013 5 \u2013 where \u03be = np+/ne\u2212is the relative proportion of protons. The mass density is given by \u03c1 = \u03a3nimi = ne\u2212me \u001a 2 \u2212\u03be \u0012 1 \u22121 \u03b7 \u0013\u001b , (5c) where \u03b7 = me/mp, and me and mp are the electron and proton masses, respectively. For single-temperature \ufb02uids, the isotropic pressure is given by p = \u03a3pi = 2ne\u2212kT. (5d) As our EoS for multi-component \ufb02uids, we adopt e = \u03a3ei = \u03a3 \u0014 nimic2 + pi \u00129pi + 3nimic2 3pi + 2nimic2 \u0013\u0015 . (5e) The non-dimensional temperature is de\ufb01ned with respect to the electron rest mass energy, \u0398 = kT/(mec2). 
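The accuracy of the approximate EoS (4c) against the exact form (4a) can be checked directly; a minimal sketch is given below, assuming the ideal-gas relation p = rho k T / m for a single-component fluid so that the Bessel-function argument rho c^2/p equals 1/Theta with Theta = kT/(mc^2).

```python
import numpy as np
from scipy.special import kn   # modified Bessel functions of the second kind, K_n

def h_exact(theta):            # eq (4a): h = (e+p)/(rho c^2) = K3(1/Theta)/K2(1/Theta)
    return kn(3, 1.0/theta)/kn(2, 1.0/theta)

def h_approx(theta):           # eq (4c), the EoS adopted in this paper
    return 1.0 + theta + theta*(9.0*theta + 3.0)/(3.0*theta + 2.0)

def h_mathews(theta):          # eq (4d): solve Theta = (x - 1/x)/3 for x = e/(rho c^2)
    x = 0.5*(3.0*theta + np.sqrt(9.0*theta**2 + 4.0))
    return x + theta

def h_fixed(theta, Gamma):     # eq (4b), fixed adiabatic index
    return 1.0 + Gamma*theta/(Gamma - 1.0)

for theta in (0.01, 0.1, 1.0, 10.0):
    print(theta, h_exact(theta), h_approx(theta), h_mathews(theta),
          h_fixed(theta, 5.0/3.0), h_fixed(theta, 4.0/3.0))
```

The comparison shows the expected limits: all forms agree for Theta much less than 1, while for Theta much greater than 1 the fixed Gamma = 5/3 EoS overestimates the enthalpy and only the Gamma = 4/3 limit is recovered.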
With equations (5a) \u2013 (5d), the expression of the energy density in equation (5e) simpli\ufb01es to e = ne\u2212mec2f, (5f) where f = (2 \u2212\u03be) \u0014 1 + \u0398 \u00129\u0398 + 3 3\u0398 + 2 \u0013\u0015 + \u03be \u00141 \u03b7 + \u0398 \u00129\u0398 + 3/\u03b7 3\u0398 + 2/\u03b7 \u0013\u0015 . (5g) The expression of the polytropic index for single-temperature \ufb02uids is given by N = T p \u03a3ni d\u03a6i dT = 1 2 d f d\u0398, (5h) where \u03a6i = ei ni = mic2 + kT 9kT + 3mic2 3kT + 2mic2 (5i) is the energy density per particle of each component. The e\ufb00ective adiabatic index is calculated by \u0393 = 1 + 1 N . (5j) The de\ufb01nition of the sound speed, a, is a2 c2 = \u0393p e + p = 2\u0393\u0398 f + 2\u0398. (5k) The polytropic index N (and also the adiabatic index \u0393) is an indicator of the thermal state of a \ufb02uid. If N \u21923/2 (or \u0393 \u21925/3), the \ufb02uid is called thermally non-relativistic. \f\u2013 6 \u2013 On the other hand, if N \u21923 (or \u0393 \u21924/3), it is called thermally relativistic. For singlecomponent \ufb02uids, N and \u0393 are given as a function of the temperature alone (Ryu et al. 2006). For multi-component \ufb02uids, however, not just the temperature, the mass of the constituent particles also determines the thermal state. Hence, the proton proportion, \u03be, enters as a parameter too. In Figure 1, we show various thermodynamic quantities and their inter-relations for \ufb02uids with di\ufb00erent \u03be. In Figure 1a which plots N as a function of T, the left most (solid) curve represents the electron-positron pair \ufb02uid (\u03be = 0) (hereafter, the e\u2212\u2212e+ \ufb02uid) and the right most (dotted) curve represents the electron-proton \ufb02uid (\u03be = 1) (hereafter, the e\u2212\u2212p+ \ufb02uid). In the e\u2212\u2212e+ \ufb02uid, N \u21923 for kT > mec2, while in the e\u2212\u2212p+ \ufb02uid, N \u21923 for kT > mpc2. In the intermediate temperature range, mec2 < kT < mpc2, N decreases (i.e., the \ufb02uid becomes less relativistic) with the increase of \u03be. It is because if \u03be increases (i.e., the proton proportion increases), the thermal energy required to be in the relativistic regime also increases. By the same reason, at the same T, the local sound speed, a, decreases as \u03be increases, as shown in Figure 1b. However, in Figure 1c, it is shown that the relation between N and a is not as simple as the relation between N and T. At the same a, N is smallest for the e\u2212\u2212e+ \ufb02uid, and it increases and then decreases as \u03be increases. The behavior can be understood as follows. At the same a, as \u03be increases, the thermal energy increases, but at the same time, the rest mass energy increases as well. As noted in Introduction, it is not the thermal energy, but the competition between the thermal energy and the rest mass energy that makes a \ufb02uid relativistic. Consequently, for most values of a, N increases for \u03be \u22720.2 and then decreases for \u03be \u22730.2. For very low a, N increases up to \u03be \u223c0.5, and for very high a, N increases up to \u03be \u22720.1. In summary, at a given temperature, the e\u2212\u2212e+ \ufb02uid is most relativistic, but at a given sound speed, the e\u2212\u2212e+ \ufb02uid is least relativistic and \ufb02uids with \ufb01nite proton proportions are more relativistic. 2.3. 
Equations of motion The energy-momentum conservation equation [the \ufb01rst of equation (3)] can be reduced to the relativistic Euler equation and the entropy equation. Under the steady state and radial \ufb02ow assumptions, the equations of motion are given by ur dur dr + 1 r2 = \u2212 \u0012 1 \u22122 r + urur \u0013 1 e + p dp dr, (6a) and de dr \u2212e + p n dn dr = 0, (6b) \f\u2013 7 \u2013 along with the continuity equation [the second of equation (3)] 1 n dn dr = \u22122 r \u22121 ur dur dr . (6c) Here, we use the system of units where G = MB = c = 1, so that the units of length and time are rg = GMB/c2 and tg = GMB/c3. It is to be noted that in this system of units, the Schwarzschild radius or the radius of the event horizon is rs = 2. After some lengthy calculations, equations (6a) \u2013 (6c) are then simpli\ufb01ed to dv dr = (1 \u2212v2)[a2(2r \u22123) \u22121] r(r \u22122)(v \u2212a2/v) (7a) and d\u0398 dr = \u2212\u0398 N \u0014 2r \u22123 r(r \u22122) + 1 v(1 \u2212v2) dv dr \u0015 , (7b) where the radial three-velocity is de\ufb01ned as v2 = \u2212urur/(utut). For \ufb02ows continuous along streamlines, equations (7a) \u2013 (7b) admit the so-called regularity condition, or the critical point condition, or the sonic point condition (Chakrabarti 1990) that is given by ac = vc (8a) and a2 c = 1 2rc \u22123. (8b) Here, rc is the sonic point location. Hereafter, the quantities with subscript c denote those at rc. From equation (5k), we know amax = 1/ \u221a 3 (also see Figure 1b). Therefore, from equation (8b), we have rc \u22653 (Blumenthal & Mathews 1976). Since dv/dr = N /D \u21920/0 at rc, (dv/dr)rc is obtained by the l\u2019Hospital rule \u0012dv dr \u0013 rc = (dN /dr)rc (dD/dr)rc , (8c) where N and D are the numerator and denominator of equation (7a). The above equation simpli\ufb01es to A \u0012dv dr \u00132 rc + B \u0012dv dr \u0013 rc + C = 0, (8d) where A = \u0012 2 + 1 \u2212Nca2 c + (\u0398c/\u0393c)(d\u0393/d\u0398)c Nc(1 \u2212a2 c) \u0013 rc(rc \u22122), (8e) B = 21 \u2212Nca2 c + (\u0398c/\u0393c)(d\u0393/d\u0398)c Ncac , (8f) \f\u2013 8 \u2013 and C = 21 \u2212Nca2 c + (\u0398c/\u0393c)(d\u0393/d\u0398)c Ncrc \u22122a2 c(1 \u2212a2 c). (8g) Equation (8d) has two roots. For radial \ufb02ows, the roots are of the saddle type, where (dv/dr)c is real and (dM/dr)c is of opposite signs for the two roots. Here, M = v/a is the Mach number. Moreover, the two roots can be either of the acceleration type (A-type), where (dv/dr)c is of opposite signs, or of the deceleration type (D-type), where (dv/dr)c is negative for both roots. In the A-type, both the acceleration and wind \ufb02ows accelerate at the sonic point. On the other hand, in the D-type, only the accretion \ufb02ows accelerate, while the wind \ufb02ows decelerate at the sonic point. By substituting the quantities at the sonic point, equations (7b) give the temperature gradient at the sonic point \u0012d\u0398 dr \u0013 rc = \u2212\u0398c Nc \u0014 2rc \u22123 rc(rc \u22122) + 1 vc(1 \u2212v2 c) \u0012dv dr \u0013 rc \u0015 . (8h) Finally by integrating the equations of motion, we get the relativistic Bernoulli equation (Lightman et al. 1975) E = (f + 2\u0398)ut (2 \u2212\u03be + \u03be/\u03b7), (9) where E is the Bernoulli parameter or is also known as the speci\ufb01c energy of \ufb02ows. Since we assume adiabatic \ufb02ows without heating and cooling, E is a constant of motion. 2.4. Procedure to get global solutions Combining equation (8b) and (5k) gives \u0398c in terms of rc and \u03be. 
Combining it with equation (9) gives a formula involving rc, E, and \u03be (Chakrabarti 1990, 1996b; Fukumura & Kazanas 2007). If E and \u03be are given, then rc is computed from the formula. Once rc is known, all the quantities at rc, e.g., \u0398c, vc, (dv/dr)rc, (d\u0398/dr)rc, etc, are computed from equations (8a) \u2013 (8h). Then equations (7a) and (7b) are integrated, starting from rc, once inwards and then outwards, to obtain the global, transonic solutions of spherical \ufb02ows around black holes. By this way, we can obtain two parameter (E, \u03be) family of accretion and wind solutions. 3. Sonic Point Properties In the transonic \ufb02ows we study, the sonic point plays an important role. So before we present global solutions in the next section, we \ufb01rst investigate the properties of the sonic \f\u2013 9 \u2013 point in this section. Understanding the sonic-point properties will allow us to have an idea of the nature of global \ufb02ow structures. The sonic point location, rc, that is computed as a function of E and \u03be, is presented in Figure 2a. Corresponding to each set of E and \u03be values, there exists a unique rc. Each curve, which is given as a function of E, is for a di\ufb00erent value of \u03be. If a \ufb02ow is more energetic with larger E, it is characterized by a smaller value of rc. However, at the same E, rc is smallest for the e\u2212\u2212e+ \ufb02uid (solid line). The value of rc increases for \u03be \u22720.2, and then starts to decrease for larger \u03be. In other words, if \ufb02uids of the same E but di\ufb00erent \u03be are launched at a large distance away from a black hole, then the e\u2212\u2212e+ \ufb02uid crosses the sonic point closest to the event horizon, compared to the \ufb02uids of \ufb01nite proton proportion. Alternatively, at the same rc, E is smallest for the e\u2212\u2212e+ \ufb02uid, and it increases up to \u03be \u223c0.2 and then decreases for 0.2 \u2272\u03be \u22641. Although for the same rc the e\u2212\u2212p+ (dotted line) is not most energetic, it is de\ufb01nitely more energetic than the e\u2212\u2212e+ \ufb02uid. Since \ufb02uids of di\ufb00erent composition are energetically quite di\ufb00erent at the same rc, or conversely \ufb02uids of di\ufb00erent composition but the same E form the sonic point at widely di\ufb00erent rc, it is expected that the global solutions of accretion and wind \ufb02ows would be quantitatively and qualitatively di\ufb00erent, depending upon the composition of \ufb02uids. In Figures 2b and 2c, we show Tc and Nc as a function of rc. Equations (8b) tells that the sound speed at the sonic point, ac, is \ufb01xed, once rc is determined (ac implicitly depends on E and \u03be through rc). So plotting any variable as a function of rc is equivalent to plotting it as a function of ac. As noted above, at the same ac, \ufb02uids composed of lighter particles are colder. Therefore, in Figure 2b, at the same rc, the temperature is lowest for the e\u2212\u2212e+ \ufb02uid, and progressively gets higher for \ufb02uids with larger proton proportions, and the maximum temperature is for the e\u2212\u2212p+ \ufb02uid. However, as noted before, higher Tc does not necessarily ensure higher Nc (i.e., more relativistic \ufb02uids). In Figure 2c, at the same rc, the e\u2212\u2212e+ \ufb02uid has the lowest Nc, that is, it is least relativistic. 
In the range of a few \u2272rc \u2272100, at the same rc, Nc increases as the proton proportion increases for \u03be \u22720.2, and then starts to decrease for 0.2 \u2272\u03be \u22641. This is a consequence of the competition between the thermal energy and the rest mass energy, as discussed in connection with Figures 1c. In order to make the point even clearer, in Figure 2d, we show Nc as a function of \u03be for a wide range of values of rc. Each curve with a single value of rc signi\ufb01es \ufb02uids of di\ufb00erent composition but the same sound speed at the same sonic point. Nc tends to peak at some values of \u03be, where the thermal contribution with respect to the rest mass energy contribution peaks. For small values of rc (i.e., large ac\u2019s), a small increase of \u03be causes the thermal contribution to peak. For large values of rc (i.e., small ac\u2019s), large proton proportions are needed to achieve the same. \f\u2013 10 \u2013 As discussed in section 2.3, the roots of equation (8d) are either of the A-type or of the D-type. At small values of rc, the nature of the sonic point is of the A-type. It is because if the sonic point form closer to the central object, the \ufb02ow is hotter at the sonic point (Figure 2b), and in the wind that is thermally driven, the \ufb02ow tends to accelerate at the sonic point. But beyond a limiting value, say rc\u2113, the nature changes from the A-type to the D-type, where the wind \ufb02ow decelerates at the sonic point. In Figure 3a, rc\u2113is plotted as a function of \u03be. Since at a given rc the e\u2212\u2212e+ \ufb02uid is thermally least relativistic, rc\u2113is smallest for the \ufb02uid. The limit rc\u2113increases with \u03be. However, since increasing \u03be makes \ufb02uids \u2018heavy\u2019 too, rc\u2113peaks around \u03be \u223c0.75. In Figure 3b, we plot the limiting values of E corresponding to rc\u2113, E\u2113, as a function of \u03be, such that for E > E\u2113the nature of the sonic point is of the A-type, for E < E\u2113it is of the D-type. 4. Spherical Accretion and Wind Solutions In this section, we present the global solutions of equations (7a \u2013 7b) that were obtained with the procedure described in section 2.4. In Figure 4, we \ufb01rst compare typical accretion and wind solutions of the A and D-types for the e\u2212\u2212e+ \ufb02uid. The solutions of the Atype in the left panels have the sonic point at rc = 4, inside rc\u2113, while those of the D-type in the right panels have rc = 30, beyond rc\u2113(Figure 3a). The accretion solutions (solid curves) are characterized by supersonic \ufb02ows at the inner boundary and subsonic \ufb02ows at the outer boundary (Figures 4c and 4d). The wind solutions (dotted curves), on the other hand, have subsonic \ufb02ows at the inner boundary and supersonic \ufb02ows at the outer boundary. The accretion \ufb02ows around black holes necessarily accelerate inwards. However, the wind \ufb02ows may accelerate (Figure 4a) or decelerate (Figure 4b) outwards. The wind solutions considered in this paper are thermally driven. These winds are very hot at the base, and are powered by the conversion of the thermal energy into the kinetic energy. It can be shown from equation (7b) that \u2212d\u0398 dr \u2264\u0398 N \u0014 2r \u22123 r(r \u22122) \u0015 \u21d2 dv dr\u22640. (10) In other words, if the outward thermal gradient is weaker than the gravity, the out\ufb02ow can decelerate. 
For the wind with rc = 30 (Figures 4b, 4d, and 4f), \u2212d\u0398/dr \u223c(\u0398/N)(2r \u2212 3)/[r(r \u22122)] at r \u223c9.16, exactly where the out\ufb02ow starts to decelerate. However, the wind velocity will reach an asymptotic value at r \u2192large, since \u2212d\u0398/dr \u223c(\u0398/N)(2r\u22123)/[r(r\u2212 2)] \u223c0 at large distances from the black hole. Similar relation between the gradients at the sonic point will determine the nature of the sonic point. It may be noted that at rc \u2265rc\u2113 (Figures 3a and 3b), such relation between (dv/dr)c and (d\u0398/dr)c is satis\ufb01ed. Regardless \f\u2013 11 \u2013 of accretion/wind or the type, the temperature decreases with increasing r (Figures 4e and 4f). We note that the winds in our solutions are too weak to be the precursor of astrophysical jets, until and unless other accelerating processes like those caused by magnetic \ufb01elds or disc radiation are considered (Chattopadhyay 2005). In fact, we checked that it is not possible to generate the terminal speed much greater than \u223c0.8c for purely thermally driven winds, such as the ones that are considered in this paper. It is also to be noted that our D-type, wind solution is not an example of \u2018breeze\u2019. A breeze is always subsonic, while the wind here is transonic, albeit decelerating. In the previous \ufb01gure, we have compared the solutions with the same \u03be (= 0) but di\ufb00erent rc. In Figure 5, we compare the solutions with the same rc (= 20) but di\ufb00erent \u03be. As shown in Figure 2a, even for the same rc, the speci\ufb01c energy is di\ufb00erent for \ufb02uids of di\ufb00erent \u03be. Furthermore, the polytropic index at the sonic point is di\ufb00erent too (Nc = 1.547 in Figure 5a, Nc = 2.626 in Figure 5b, and Nc = 2.271 in Figure 5c). Therefore, even if we \ufb01x the sonic point (and therefore ac), the \ufb02ow structure and energetics are di\ufb00erent for \ufb02uids with di\ufb00erent \u03be. In these particular solutions, the e\u2212\u2212e+ \ufb02uid is not hot enough to drive an accelerating wind (Figure 5a), while the \ufb02uids with signi\ufb01cant protons can do so. As in the previous case of decelerating wind solution (i.e., Figure 4b), in the present case the e\u2212\u2212e+ \ufb02uid \ufb01rst accelerates and then starts to decelerate at r \u223c9.86. The velocity pro\ufb01le eventually tapers o\ufb00to an asymptotic value at large distances away from the black hole. It has been shown in Figure 2b that at the same rc, adding protons increases the temperature at the sonic point. Larger temperature gradient causes winds of \ufb01nite proton proportion to be accelerated at the sonic point (Figure 5b). It is seen that beyond a critical value, the increase in \u03be increases the inertia which reduces the wind speed, as is vindicated by Figures 5b and 5c. It is to be remembered that, the D type sonic point is a reality for \ufb02uids of any \u03be, provided rc \u2273rc\u2113. Although the wind solutions are noticeably di\ufb00erent depending on \u03be, there seems to be only small di\ufb00erence in the velocity pro\ufb01le of the accretion solutions. Henceforth, we concentrate only on accretion solutions. Such small di\ufb00erence in v in accretion solutions is expected. The accretion is generated mostly by the inward pull of the gravity, which gives the unique inner boundary condition for black holes, i.e., v = 1 at r = 2, regardless of other considerations. The pressure gradient changes the pro\ufb01le of v too. 
Since the composition of \ufb02uids determines the thermal state, it in\ufb02uences the pro\ufb01le of v, but the e\ufb00ect is not the dominant one. In Figure 6, we compare the accretion solutions with the same E (= 1.015) but di\ufb00erent \u03be. As noted below equation (9), E is a constant of motion. For r \u2192\u221e, as ut \u21921, we \f\u2013 12 \u2013 have E \u2192h\u221e, where h\u221e= \u0014e + p \u03c1 \u0015 \u221e = \u0014 f + 2\u0398 2 \u2212\u03be + \u03be/\u03b7 \u0015 \u221e (11) is the speci\ufb01c enthalpy at in\ufb01nity. Equation (11) tells us that at large distances from black holes, for the same E, T is large if \u03be is large. Hence, \ufb02uids with larger \u03be are hotter to start with. Therefore even for \ufb02uids with the same E, the solutions are di\ufb00erent if \u03be is di\ufb00erent. Figure 6a shows the velocity pro\ufb01le as a function of r. Here, the di\ufb00erence in v for \ufb02uids with di\ufb00erent \u03be is evident, albeit not big as pointed above. Figures 6b, 6c and 6d show the mass density, temperature, and polytropic index. To compute the mass density, we need to supply the mass accretion rate, which is given as \u02d9 M = 4\u03c0r2ur\u03c1 (12) from equation (6c). The mass density in Figure 6b was computed for MB = 10M\u2299and \u02d9 M = 0.1 \u02d9 MEdd, where \u02d9 MEdd is the Eddington rate of accretion. The di\ufb00erence in T and N for \ufb02uids with di\ufb00erent \u03be is more pronounced. The e\u2212\u2212e+ \ufb02uid is slowest, densest (for the same \u02d9 M), coldest, and least relativistic. The e\u2212\u2212p+ \ufb02uid is more relativistic than the e\u2212\u2212e+ \ufb02uid. But the most relativistic \ufb02uid is the one with the intermediate value of \u03be. It is interesting to note that except for the e\u2212\u2212e+ \ufb02uid, N is a slowly varying function of r for the other two \ufb02uids. Does this mean it would be su\ufb03cient to adopt the \ufb01xed \u0393 EoS with appropriate values of \u0393? Finally in Figure 7, we compare the accretion solutions with the same temperature at large distances but di\ufb00erent \u03be. All the \ufb02uids start with T = Tout = 1.3 \u00d7 109 K at r = rout = 2000. Again the mass density was computed for MB = 10M\u2299and \u02d9 M = 0.1 \u02d9 MEdd. It is to be noted that the \ufb02uids starting with the same Tout but di\ufb00erent \u03be have di\ufb00erent speci\ufb01c energies. Hence, the velocity at the outer boundary is di\ufb00erent too. As shown in Figure 7a, in these particular solutions, the e\u2212\u2212e+ \ufb02uid starts with a velocity substantially di\ufb00erent from those of the other two \ufb02uids, so the resulting velocity pro\ufb01le is substantially di\ufb00erent. From Figure 7d, it is clear that there are signi\ufb01cant variations in N for all the \ufb02uids. The e\u2212\u2212e+ \ufb02uid starts with the largest N. It is because at the same temperature, the e\u2212\u2212e+ \ufb02uid is thermally most relativistic. The behavior of N can be traced back to Figure 1a. For instance, the variations in N tend to \ufb02atten at T \u22731010 K. In Figure 7c, for the \ufb02uids with \u03be = 0.5 and 1, T \u22721010 K for r \u2273100 and T \u22731010 K for r \u2272100. So signi\ufb01cant variations are expected in N at r\u2273100, while the variations \ufb02atten at r < 100. Similar considerations will explain the variations in N for the e\u2212\u2212e+ \ufb02uid. 
From Figure 7d, it is clear that we need to adopt a relativistically correct EoS [equation (4c) or (4d)], instead of the EoS with a \ufb01xed \u0393, in order to capture the proper thermal properties of \ufb02ows around black holes. \f\u2013 13 \u2013 In this section, we have shown that \ufb02uids with di\ufb00erent composition can result in dramatically di\ufb00erent accretion and wind \ufb02ows, even if they have the same sonic point or the same speci\ufb01c energy, or they start with the same temperature at large distances from black holes. So not just adopting a correct EoS, but incorporating the e\ufb00ects of \ufb02uid composition into the EoS (see equation 5e) should be also important in describing such \ufb02ows. 5. Validity of EoS In section 2, we have made the following assumptions for our EoS (equation 5e); \ufb02uids are in equilibrium, i.e., 1) the distribution of the constituent particles is relativistically Maxwellian and 2) the multi-components are of single temperature. However, it is not clear whether the conditions are satis\ufb01ed. Most astrophysical \ufb02uids, unlike the terrestrial ones, consist of charged particles, which are collisionless, and so held together by magnetic \ufb01elds. The constituent particles, on the other hand, exchange energies, and become relaxed mostly through the Coulomb interaction, which is a slow process in collisionless plasmas. In addition, most of the heating processes, such as viscosity and shock heating, are likely to a\ufb00ect protons. However, it is mainly the electrons which radiate. So the energy exchange between electrons and protons should operate, and eventually govern the thermal properties of \ufb02uids. Let tee be the electron-electron relaxation time scale, tpp be the proton-proton relaxation time scale, and tep be the electron-proton relaxation time scale. And let tprob be the time scale of problem, such as the dynamical time scale, or the heating and/or cooling time scale. Only if tee < tprob and tpp < tprob, electrons and protons will separately attain the Maxwellian distributions. And only if tep < tprob, electrons and protons will relax to single temperature. To verify the assumptions for our EoS, in this section, we compare the relaxation time scales with the dynamical or accretion time scale (tdyn = r/v) for an accretion solution. We consider the temperature range where protons are thermally non-relativistic while electrons are relativistic. In most our solutions in the previous section, the computed temperature favors this range. The relativistic electron-electron interaction time scale was derived by Stepney (1983), tee = 8k2 (mec2)2\u03c3T cln\u039b T 2 ne\u2212. (13a) The time scale for the non-relativistic proton-proton interaction is given in Spitzer (1962), tpp = 4\u221a\u03c0k3/2 ln\u039b(mpc2)3/2\u03c3Tc \u0012mp me \u00132 T 3/2 np+ . (13b) \f\u2013 14 \u2013 The relativistic electron-proton interaction time scale was also derived by Stepney (1983), tep = 2 \u0012mp me \u0013 \u0012 \u03ba mec2 \u0013 1 \u03c3T c T np+ (13c) We present the electron number density, ne\u2212(Figure 8a), the three velocity v (Figure 8b) and the temperature T (Figure 8c) of the accretion solution for the e\u2212\u2212p+ \ufb02uid with E = 1.5247. The electron number density was computed for MB = 10M\u2299and \u02d9 M = 0.1 \u02d9 MEdd. In Figure 8d, various time scales are compared. All the relaxation time scales were calculated for the solution of single-temperature. 
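A sketch evaluating the relaxation times of eqs (13a)-(13c) as printed (the symbol in eq 13c is read as Boltzmann's constant k, and a Coulomb logarithm of 20 is assumed where it appears); the density and temperature values below are illustrative rather than the actual profile of Figure 8.

```python
import numpy as np

# constants (cgs)
k, c, sigT = 1.381e-16, 2.998e10, 6.652e-25
me, mp     = 9.109e-28, 1.673e-24
lnL        = 20.0                       # Coulomb logarithm (typical value, assumed)

def t_ee(T, ne):                        # eq (13a)
    return 8.0*k**2*T**2/((me*c**2)**2*sigT*c*lnL*ne)

def t_pp(T, np_):                       # eq (13b)
    return 4.0*np.sqrt(np.pi)*(k*T)**1.5/(lnL*(mp*c**2)**1.5*sigT*c)*(mp/me)**2/np_

def t_ep(T, np_):                       # eq (13c) as printed (no ln Lambda)
    return 2.0*(mp/me)*(k*T/(me*c**2))/(sigT*c*np_)

# illustrative numbers for a 10 Msun hole (r_g = GM/c^2 ~ 1.5e6 cm); these do not
# reproduce the Figure 8 solution, only its order of magnitude near and far from rs
rg = 1.48e6
for r, v, T, n in [(10.0, 0.3, 1e11, 1e16), (1e3, 0.03, 1e9, 1e13)]:
    tdyn = r*rg/(v*c)
    print("r=%6.0f r_g: t_dyn=%.1e s  t_ee=%.1e  t_pp=%.1e  t_ep=%.1e"
          % (r, tdyn, t_ee(T, n), t_pp(T, n), t_ep(T, n)))
```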
To our surprise, it is clear that the accretion \ufb02ow in the \ufb01gure is \u2018too fast\u2019, such that various relaxation time scales are longer than the accretion time scale at least within few tens of rs. The implication of it is not clear, however. For instance, in relativistic plasmas, the constituent particles can be relaxed through the interactions with magnetic \ufb01elds, too. But the relaxation will depend on the details of \ufb01eld con\ufb01guration, such as the strength and the topology. Since we ignore in this study magnetic \ufb01elds as well as other processes such as non-conservative ones, we leave this issue of the validity of our EoS for future studies. 6. Discussion and Concluding Remarks In this paper, we have investigated the e\ufb00ects of \ufb02uid composition on the solutions of accretion and wind \ufb02ows onto black holes. In order to elucidate the e\ufb00ects, we have considered a very simple model of spherical \ufb02ows onto Schwarzschild black holes, and nonconservative processes and magnetic \ufb01elds have been ignored. First, we have suggested an approximate EoS for multi-component \ufb02uids in equation (5e), and studied the thermal properties of \ufb02uids with the EoS. Three temperature ranges have been categorized; for kT < mec2, any type of \ufb02uids are thermally non-relativistic, for kT > mpc2, any type of \ufb02uids are thermally relativistic, and for mec2 < kT < mpc2, the degree to which \ufb02uids are relativistic is determined by the composition of the \ufb02uids as well as the temperature (Figure 1a). Then we have shown that although at the same temperature the e\u2212\u2212e+ \ufb02uid is most relativistic (Figure 1a), at the same sound speed it is least relativistic (Figure 1c), compared to the \ufb02uids with protons. It is because whether a \ufb02uid is relativistic or not depends on the competition between the thermal energy and the rest mass energy of the \ufb02uid. The thermal properties of \ufb02uids carry to the sonic point properties. The sound speed at the sonic point, ac, explicitly depends only on the sonic point location, rc (it implicitly \f\u2013 15 \u2013 depends on the speci\ufb01c energy, E, and the proton proportion, \u03be, through rc). Therefore, comparing the thermodynamic quantities at the same rc is equivalent to comparing those quantities at the same ac. We have shown that at the same rc, the e\u2212\u2212e+ \ufb02uid is least relativistic, and a \ufb02uid with a \ufb01nite \u03be is most relativistic (Figures 2c and 2d). Then, we have presented the global solutions of accretion and wind \ufb02ows for the same rc but di\ufb00erent \u03be, for the same E but di\ufb00erent \u03be, and for the same T at large distances from black holes but di\ufb00erent \u03be. In all the cases, the \ufb02ows can be dramatically di\ufb00erent, if the composition is di\ufb00erent. This asserts that the e\ufb00ects of \ufb02uid composition are important in the solutions, and hence, incorporating them properly into the solutions through the EoS is important. Lastly, we have noted that the EoS in equation (5e) is based on the assumptions that the distribution of the constituent particles is relativistically Maxwellian and the multicomponents are of single temperature. However, at the same time, we have pointed out that while the Coulomb relaxation times are normally shorter than the dynamical time far away from black holes, close to black holes they can be longer. 
It means that close to black holes, the assumptions for the EoS can be potentially invalidated. The implication of it needs to be understood, and we leave further consideration of this issue for future studies. The work of DR was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2007-341-C00020)." + } + ], + "Renyue Cen": [ + { + "url": "http://arxiv.org/abs/2012.02230v1", + "title": "Physics of Non-Universal Larson's Relation", + "abstract": "From a new perspective, we re-examine self-gravity and turbulence jointly, in\nhopes of understanding the physical basis for one of the most important\nempirical relations governing clouds in the interstellar medium (ISM), the\nLarson's Relation relating velocity dispersion ($\\sigma_R$) to cloud size\n($R$). We report on two key new findings. First, the correct form of the\nLarson's Relation is $\\sigma_R=\\alpha_v^{1/5}\\sigma_{pc}(R/1pc)^{3/5}$, where\n$\\alpha_v$ is the virial parameter of clouds and $\\sigma_{pc}$ is the strength\nof the turbulence, if the turbulence has the Kolmogorov spectrum. Second, the\namplitude of the Larson's Relation, $\\sigma_{pc}$, is not universal, differing\nby a factor of about two between clouds on the Galactic disk and those at the\nGalactic center, evidenced by observational data.", + "authors": "Renyue Cen", + "published": "2020-12-03", + "updated": "2020-12-03", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "INTRODUCTION The interstellar medium (ISM) in galaxies is subject to a myriad of physical processes, including gravitational interactions, in\ufb02ow and ou\ufb02ow, radiative processes, magnetic \ufb01eld and feedback from stellar evolution (e.g., McKee & Ostriker 2007) and thus, perhaps unsurprisingly, bears a chaotic and turbulent appearance (e.g., Elmegreen & Scalo 2004). The role of supersonic turbulence in interacting with the process of gravitational collapse of molecular clouds has long been recognized (e.g., Larson 1981). We inquire and seek solutions as to why ISM clouds appear to follow a number of well de\ufb01ned empirical governing relations, by examining together the two most important physical processes turbulence and self-gravity guided by a new conceptual insight. Our goal is not set out to precisely nail down these relations, but rather to make sense of complex players involved, in a simple fashion, if possible. The results we \ufb01nd are gratifyingly simple and accurate. The turbulence in the ISM is driven at some large scales. In incompressible turbulence, the structure function is derived by Kolmogorov (1941), most notably the expression for the relation between velocity difference between two points and their separation, \u03c3R \u221dR1/3, based on a constant energy transmission rate through the inertia scale range. In highly compressible turbulence, the energy transmission down through the scale is no longer conservative, with kinetic energy also being spent to shock and/or compress the gas. Thus, if the relation remains a scale free powerlaw, the resulting exponent for a compressive turbulent medium is expected to be larger than 1/3. We will show that the right exponent is 3/5 in this case. Opposite to the driving scale, the \u201ccoherence\" scale in dense cores, introduced in Goodman et al. 
(1998), encapsulates the transition from turbulence dominated energy regime to a subsonic regime, 1 Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:2012.02230v1 [astro-ph.GA] 3 Dec 2020 \f2 where the sum of the thermal, magnetic and possibly other forms of energy dominates over turbulent energy. The turbulence is then often thought of cascading down between these two scales. In contrast to this simple cascading (down) of eddies in the gravity-free case, a new conceptual notion that we put forth here is that the dynamic interactions between turbulence and gravity occurring on all scales result in the formation of clouds, within which self-gravitational force becomes important (not necessarily dominant in general), on all scales. While the formation of clouds is originally driven by supersonic turbulence, gravity acts to both solidify them and in some cases detach them from the turbulence, hence provides a feedback loop to the turbulence itself, where the clouds may be visualized as the boundary conditions (on all scales) for the turbulence. As such, we shall call such an additionally constrained turbulence a \u201ccloud bound turbulence chain\" (CBTC), as opposed to a gravity-free turbulence. The singular coherence scale (\u223c0.1pc) above represents the smallest cloud of our CBTC. Based on this conception, we attempt to rederive the (revised) Larson\u2019s Relation, and compare to observations. 2. LARSON\u2019S RELATION: CONFLUENCE OF SUPERSONIC TURBULENCE AND SELF-GRAVITY In the ISM, self-gravity has the tendency to organize and fortify suitable regions into their own entities, playing a countervailing role against supersonic turbulence that would otherwise produce only transient structures. For a powerlaw radial density pro\ufb01le of slope \u2212\u03b2, the self-gravitating potential energy is W = \u22123\u2212\u03b2 5\u22122\u03b2 GM2 R R , where R and M are the radius and mass of the cloud. As we will show later, the density pro\ufb01le of gas clouds in the supersonic regime is expected to have \u03b2 = 4/5, thus we will use W = \u221211 17 GM2 R for all subsequent calculations. For a self-gravitating sphere of the same density pro\ufb01le, the mean velocity dispersion within radius R is related to the 1-d velocity dispersion at separation R by \u00af \u03c32 R = 11 14\u03c32 R. However, it proves more convenient to use \u00af \u03c3R instead of \u03c3R, since the former is a more used observable. Hence, we shall use \u00af \u03c3R for all subsequent expressions; for brevity, we use \u03c3R to represent \u00af \u03c3R hereafter. To reduce cumbersomeness in expressions, we neglect all other forms of energy but only to keep the gravitational energy W and gas kinetic energy K; it is straight-forward to include those neglected, by modifying the expression for virial parameter. We thus de\ufb01ne the virial parameter \u03b1v as \u03b1v = \u22122K W . The self-gravitating tendency may then be formulated as a 3-d region in the four-dimensional parameter space of (R, \u03c3R, \u03c1R, \u03b1v): \u03c32 R = \u03b1v 11 51 GM R = \u03b1v 44\u03c0 153 G\u03c1RR2 = \u03b1v 11\u03c0 51 G\u03a3RR. (1) where G is gravitational constant, and \u03c1R and \u03a3R are the mean volume and surface density within radius R. If \u03b1v and \u03c3R are independent, which we will show is the case, the region would look like a thick plane. Eq (1) is essentially the proposed modi\ufb01cation to Larson\u2019s Relation by Heyer et al. (2009). 
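As an illustrative aside, Eq (1) can be inverted to estimate the virial parameter of an observed cloud, alpha_v = 51 sigma_R^2 R / (11 G M). A minimal Python sketch (not part of the original analysis; the constants and the example cloud values are assumptions chosen only for demonstration) is:

    G = 6.674e-8          # gravitational constant [cgs]
    PC = 3.086e18         # parsec [cm]
    MSUN = 1.989e33       # solar mass [g]
    KMS = 1.0e5           # km/s in cm/s

    def virial_parameter(mass_msun, radius_pc, sigma_kms):
        # Eq (1): sigma_R^2 = alpha_v*(11/51)*G*M/R, with sigma_R the mean 1-d
        # dispersion within R for the rho ~ r^(-4/5) profile adopted in the text.
        sigma2 = (sigma_kms * KMS) ** 2
        return 51.0 * sigma2 * (radius_pc * PC) / (11.0 * G * mass_msun * MSUN)

    # illustrative cloud: M = 1e4 Msun, R = 2 pc, sigma = 1.5 km/s
    print(virial_parameter(1.0e4, 2.0, 1.5))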
More comparisons will be made in \u00a73. Kolmogorov (1941) power spectrum is derived for homogeneous and isotropic three-dimensional subsonic turbulence in incompressible \ufb02ows, valid in the energy conserving inertial range. In contrast, the kinetic energy in the supersonic compressible turbulence in ISM is dissipative on all scales due to shocks and radiative processes in the ISM. It thus, at \ufb01rst instant, might suggest that the Kolmogorov turbulence may provide an inadequate description of the compressible turbulence of the ISM. Fleck (1983) suggests that the relation between a scaled velocity vR and scale R of compressible turbulence be expressed as vR \u2261\u03c11/3 R \u03c3R = AR1/3, (2) \f3 which constitutes a plane in the parameter space of (R, \u03c3R, \u03c1R), generally different from that of selfgravity (Eq 1), where A is a constant. The expression essentially asserts that a constant volumetric energy density transfer rate in compressible \ufb02ow is transmitted down the turbulence cascading scale. Eq (2) reduces to the original Kolmogorov form for incompressible \ufb02ow that is a line in the two-dimensional parameter space of (R, uR). A formal proof of the existence of an inertial range for highly compressible turbulence is given by Aluie (2011, 2013), validating the density-weighted velocity formulation. Importantly, numerical simulations show that the spectrum of vR indeed follows remarkably well the Kolmogorov spectrum for the isothermal ISM (e.g., Kritsuk et al. 2007, 2013). We thus continue to use the nomenclature of Kolmogorov compressible turbulence, despite its oxymoronic sounding, given the spectral slope we adopt and its empirical validity to describe the turbulence of the isothermal ISM. The general physical arguments and quantitative conclusions reached are little altered with relatively small variations of the slope of the turbulence power spectrum. As a related note, in the subsonic compressible turbulence, with gravity also playing an important role, such as in dark cores in molecular clouds, the physical premise for the argument of energy transmission through the inertia scale range ceases to apply with respect to the total velocity. This may be understood in that the turbulence chain driven at some large scales no longer is the primary driver of velocity in the subsonic regime. Rather, the velocity \ufb01eld is driven jointly by turbulence, thermal (and possible other forms of) pressure, and gravity (Myers 1983). Combining Eq (1) and Eq (2) gives \u03c3R = \u03b11/5 v ( 44 153A3G)1/5R3/5. (3) Because A is unknown but a constant, we simply introduce another parameter, \u03c3pc, which denotes the 1-d mean turbulence velocity dispersion within a region of radius 1 parsec, to express the strength of the turbulence. Now Eq (3) is simpli\ufb01ed to \u03c3R = \u03b11/5 v \u03c3pc( R 1pc)3/5. (4) Looking at Eq (4), it may seem puzzling as to why the virial parameter \u03b1v appears in this expression that is supposedly an expression of the strength of the turbulence chain. But it is expected. The appearance of \u03b1v (and the disappearance of gas density \u03c1) in this expression re\ufb02ects the feedback of the boundary condition at the clouds that terminates the turbulence chain at the small scale end, in lieu of gas density. 
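To make the origin of the 3/5 exponent explicit, the following minimal sketch (illustrative only; the cascade constant A is arbitrary, since it is absorbed into sigma_pc in Eq 4) solves Eqs (1) and (2) jointly and confirms that d(log sigma_R)/d(log R) = 3/5 independent of A and alpha_v:

    import numpy as np

    G = 6.674e-8      # cgs
    PC = 3.086e18     # cm

    def sigma_from_cascade(R_cm, A, alpha_v):
        # v_R = rho^(1/3)*sigma_R = A*R^(1/3) (Eq 2), with rho = 153*sigma_R^2/(44*pi*G*alpha_v*R^2)
        # from Eq (1)  ->  sigma_R = [A*(44*pi*G*alpha_v/153)^(1/3)*R]^(3/5)
        return (A * (44.0 * np.pi * G * alpha_v / 153.0) ** (1.0 / 3.0) * R_cm) ** 0.6

    A = 1.0e-3                            # arbitrary illustrative normalization
    R1, R2 = 1.0 * PC, 10.0 * PC
    s1 = sigma_from_cascade(R1, A, 1.0)
    s2 = sigma_from_cascade(R2, A, 1.0)
    print(np.log10(s2 / s1) / np.log10(R2 / R1))   # -> 0.6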
To see that we may express the cloud density in terms of \u03c3pc: npc = \u03b1\u22123/5 v 153 44\u03c0 \u03c32 pc Gmp(1pc)2 = 1.04 \u00d7 104cm\u22123\u03b1\u22123/5 v ( \u03c3pc 1 km/s)2, (5) where npc is the mean density within a cloud of radius 1pc with a virial parameter \u03b1v. Eq (4) is the (revised) Larson\u2019s \ufb01rst relation, relating the velocity dispersion to the size of the cloud. Let us now proceed to compare this relation to observational data. Figure (1) shows the observational data along with best powerlaw \ufb01ts. We \ufb01t the data to a powerlaw of the form \u03c3R = \u03b11/5 v \u03c3pc( R 1pc)\u03b2, (6) \f4 -1.5 -1 -0.5 0 0.5 1 1.5 2 log R (pc) -1.5 -1 -0.5 0 0.5 1 log pc (km/s) Galactic disk clouds obs data best fit: pc=0.46km/s, =0.57 2 lower slope: =0.56 2 upper slope: =0.59 Figure 1. Top panel shows the velocity as a function of its size for the observed molecular clouds on the Galactic disk (open red circles), from Dame et al. (1986), Solomon et al. (1987),Heyer et al. (2001), Heyer & Brunt (2004), Ridge et al. (2006), Narayanan et al. (2008) and Ripple et al. (2013). Bottom panel shows the velocity as a function of its size, for the observed molecular clouds at the Galactic center from the CHIMPS2 survey (Eden et al. 2020) (open red circles) and the SEDIGISM (Duarte-Cabral et al. 2020) (solid black squares). In each panel, we show as red solid line as the best powerlaw \ufb01t using linear regression, along with the 2\u03c3 upper and lower slopes shown as dotted and dashed lines, respectively, obtained with bootstrapping. leaving both the amplitude \u03c3pc and the exponent \u03b2 as two free parameters. Moreover, we perform bootstrap resampling to obtain upper and lower 2\u03c3 limits of the \ufb01tting parameters by \ufb01tting both parameters. \f5 We \ufb01nd the best parameters and the \u00b12\u03c3 limits for the disk clouds to be best \ufb01t : \u03c3pc = 0.46 \u00b1 0.03 km/s and \u03b2 = 0.57 \u00b1 0.02 +2\u03c3 : \u03c3pc = 0.48 and \u03b2 = 0.59 \u22122\u03c3 : \u03c3pc = 0.45 and \u03b2 = 0.56, (7) shown as the solid, dotted and dash lines, respectively, in the top panel of Figure (1). Repeating the calculation for the clouds at the Galactic center yields the best parameters and the \u00b12\u03c3 limits best \ufb01t : \u03c3pc = 1.03 \u00b1 0.01 km/s and \u03b2 = 0.63 \u00b1 0.01 +2\u03c3 : \u03c3pc = 1.02 and \u03b2 = 0.65 \u22122\u03c3 : \u03c3pc = 1.05 and \u03b2 = 0.61, (8) shown as the solid, dotted and dash lines, respectively, in the bottom panel of Figure (1). We note that the errorbars of the best using the linear regression method is not necessarily consistent with and often larger than the 2\u03c3 range obtained using bootstrap, due to the latter\u2019s larger sample size with bootstrapping. The discrepancy is more noticeable for the disk clouds due to the smaller observational data sample size, as compared to that of the Galactic center clouds. Nevertheless, even in the absence of this shift for the best slope, the traditional exponent of the Larson\u2019s Relation of 1/2 is inconsistent with the disk data at 100% level if bootstrap is used and at 3.5\u03c3 if the direct regression is used, whereas a slope of 0.6 is about 1.5\u03c3 away. If considering the clouds at the Galactic center, the contrast is still larger. So far, we have not considered possible (perhaps different) systematics for the observations of the Galactic disk clouds as compared to the Galactic center clouds. 
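The fitting procedure just described can be sketched schematically as follows (this is not the script actually used; the array names and the synthetic demonstration data are assumptions). In log space Eq (6) is linear, log sigma_R - (1/5) log alpha_v = log sigma_pc + beta log R, so a least-squares fit plus bootstrap resampling of the cloud sample yields the quoted ~2-sigma ranges:

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_larson(R_pc, sigma_kms, alpha_v):
        # least-squares fit of log(sigma) - 0.2*log(alpha_v) = log(sigma_pc) + beta*log(R)
        x = np.log10(R_pc)
        y = np.log10(sigma_kms) - 0.2 * np.log10(alpha_v)
        beta, log_sigma_pc = np.polyfit(x, y, 1)
        return 10.0 ** log_sigma_pc, beta            # (sigma_pc in km/s, beta)

    def bootstrap_larson(R_pc, sigma_kms, alpha_v, n_boot=2000):
        # bootstrap-resample clouds; percentiles of the output give the ~2-sigma limits
        n = len(R_pc)
        out = np.empty((n_boot, 2))
        for i in range(n_boot):
            k = rng.integers(0, n, n)
            out[i] = fit_larson(R_pc[k], sigma_kms[k], alpha_v[k])
        return out

    # synthetic demonstration data drawn to follow Eq (4) with sigma_pc = 0.45, beta = 0.6
    R = 10 ** rng.uniform(-1, 2, 200)
    a = 10 ** rng.normal(0.2, 0.2, 200)
    s = 0.45 * a ** 0.2 * R ** 0.6 * 10 ** rng.normal(0, 0.05, 200)
    print(fit_larson(R, s, a))
    print(np.percentile(bootstrap_larson(R, s, a), [2.3, 97.7], axis=0))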
The fact that the best \ufb01tting slope of the disk clouds of 0.57 and that of the Galactic center clouds of 0.63 equidistantly \ufb02ank our proposed slope of 0.60 is intriguing. It may be caused by some additional physics that are not considered in our simpli\ufb01ed treatment but operates to varying degrees of importance in the cases. It may also be caused by data inhomogeneities in the plotted plane, which may already be visible. We shall take the simpler interpretation that both slopes are intrinsically equal to 0.60 and the apparent values are due to some observational systematics, although we are not in a position to justify this assertion. This new Larson\u2019s Relation with the exponent 3/5 is in excellent agreement with observational data. Heyer & Brunt (2004) measure the value of the scaling exponent of 0.59 \u00b1 0.07 in the spatial range of 1 \u221250pc [corresponding to the original range of Larson (1981) and Solomon et al. (1987)], while \ufb01tting the entire spatial range of 0.03 \u221250pc probed they get 0.62 \u00b1 0.09. It is clear now that it is not just the gravity alone that gives rise to the Larson\u2019s Relation, rather it is a combination of gravity and turbulence physics that naturally yields it. Larson 1981 invoked virial equilibrium to explain his relation. What is new here is that the intersection of gravity and turbulence provides a signi\ufb01cantly better \ufb01t for data. Forcing the slope of 0.6 to both data sets, the best \ufb01t \u03c3pc is found to be \u03c3pc = 0.44 \u00b1 0.02 km/s for Galactic disk clouds \u03c3pc = 1.08 \u00b1 0.01 km/s for Galactic center clouds (9) From data in our Galaxy alone, one can thus already conclude that CBTCs vary in different environments within a galaxy. A two-sample KS test between the \u03c3pc distribution of the Galactic disk clouds \f6 and that of the Galactic center clouds gives a p-value p = 5 \u00d7 10\u221220, indicating they are statistically different. It follows then that CBTCs and hence Larson\u2019s Relation may vary across galaxies and in different environments within galaxies. This prediction is supported by recent observations of molecular clouds in other galaxies (e.g., Donovan Meyer et al. 2013; Hughes et al. 2013; Colombo et al. 2014; Krieger et al. 2020). Historically, from Eq (1) we see that, if one insists expressing the Larson\u2019s \ufb01rst relation with the exponent close to 0.5, the original Larson\u2019s \ufb01rst relation would be gas cloud surface density dependent, a point later re-iterated (Heyer et al. 2009). But if the range in \u03a3R is suf\ufb01ciently narrow, one would obtain the original scale of a slope of 1/2, which may be the reason for that result obtained by Larson (1981). Thus, the original Larson\u2019s \ufb01rst relation has a limited scope and is applicable only when the range of surface density is narrow enough. In contrast, the revised Larson\u2019s Relation, Eq (4), is expected to be valid universally, except that the strength parameter, \u03c3pc, is expected to vary across different environments and across galaxies. To illustrate this point better, let us express \u03c3pc in terms of direct observables, involving gas surface density. Combining Eq (1) and Eq (4) gives \u03c3pc = \u03b13/10 v ( \u03a3R 341 M\u2299pc\u22122)1/2( R 1pc)\u22121/10 km/s. 
(10) The large difference between the Larson\u2019s Relation for the disk clouds and the Galactic center clouds strongly indicates an important role played by turbulence and that the CBTCs in the disk and at the Galactic centers are different, since gravity is the same. While one may use Eq (4) or Eq (10) or other variants to drive \u03c3pc empirically with three observables, such a derivation does not address the physical origin of the magnitude of \u03c3pc. A simple top-down illustrative method to derive \u03c3pc is given in \u00a74. We should note that our adoption of the Kolmogorov turbulence spectrum is largely motivated by available simulations. The obtained consistency with observations suggests it may be valid. The agreement with the observed fractal dimension in \u00a73 is consistent with Kolmogorov spectrum as well. Nonetheless, in general, the turbulence spectrum may not adhere strictly to that of Kolmogorov type. A more general form of Eq (4) may be written as \u03c3R \u221d\u03b11/5 v R(3\u03c6+2)/5. (11) For the Kolmogorov turbulence, we have \u03c6 = 1/3, which yields an exponent of 0.6. For Burgers (1948) turbulence, we have \u03c6 = 1/2, corresponding to an exponent 0.7, while the turbulence in a strong magnetic \ufb01eld may have a Iroshnikov-Kraichnan (Iroshnikov 1964; Kraichnan 1965) type with \u03c6 = 1/4, which would yield an exponent of 0.55. If one were to ascribe the difference in the exponent for the Galactic disk and Galactic center clouds to physical differences in the respective turbulence, one possible exit is that the turbulence on the Galactic disk is closer to that of Iroshnikov-Kraichnan type than the turbulence at the Galactic center. This requires further work to clarify that is beyond the scope of this paper. Nonetheless, none of different types of turbulence is expected to yield the conventional exponent for the Larson\u2019s Relation of 0.5. Another point maybe worth noting is that there are clouds with \u03b1v < 1, i.e., over-virialized clouds. Obviously, these clouds seem unlikely to be evolutionary descendants of clouds that had \u03b1 = 1 and subsequently endured some gravitational collapse. If that were the case, it would imply a turbulence dissipation time signi\ufb01cantly less than the free-fall time of the system, inconsistent with simulations (e.g., Stone et al. 1998). Therefore, we suspect that these low \u03b1v systems are a direct product of \f7 turbulence, clouds that have relatively low velocity dispersion for their gravitational strength and are probably transient, due to the randomness of the turbulence. To further clarify the nature of these special clouds, we show in Figure (2) the cloud mass as a function of its virial parameter. To our surprise, clouds with \u03b1v < 1 span the entire mass range. This may be consistent with the randomness of the turbulence suggested above. We note, however, that some of the most massive clouds (\u2265106 M\u2299), i.e., giant molecular clouds, may be a collection of uncommunicative, smaller clouds in an apparent contiguous region, where the measured velocity dispersions re\ufb02ect those of their smaller constituents, while the overall gravitational energy increases with congregation; we note that the velocity dispersion in this case may be signi\ufb01cantly anisotropic. Finally, the ubiquitous existence of gravitationally unbound clouds is simply due to insuf\ufb01cient gravitational force relative to the turbulence velocity \ufb01eld in these clouds. 
A point made here is that gravitationally unbound clouds are not necessarily those that become gravitationally bound \ufb01rst and later become unbound due to internal stellar feedback or cloud-cloud collisions (e.g., Dobbs et al. 2011). In Figure (2) it is seen that the clouds at the Galactic center (black squares) show a noticeable gap in mass, from \u223c3 M\u2299to \u223c30 M\u2299. It is not clear to us what might have caused this. There is a separate ridge (horizontally oriented) of clouds near the bottom of the plot for the Galactic center clouds with masses around one solar mass. These low mass clouds appear to be mostly unbound. While it is not de\ufb01nitive, these clouds may be the counterpart of sub-solar mass clouds on the Galactic disk called \u201cdroplets\" with odd \u201cvirial\" properties (Chen et al. 2019a,b), although we are not sure why their typical mass is about 1 M\u2299instead of \u223c0.4 M\u2299found for the droplets. These small systems have large virial parameters but remain bound by external (thermal and turbulent) pressure. The connection between these systems and the CBTC that we envision here may no longer be direct, and considerations of some additional physics may be required to place these systems also within the general framework outlined here. We defer this to a later work. Another word to further clarify the physical meaning of Eq (4) may be in order, which, let us recall, is a result derived based on the joint action of the statistical order imposed by turbulence of strength \u00af \u03c3pc (with a small dispersion) and the natural selection effect by self-gravity, with (the inverse of) \u03b1v describing the strength of the latter acting against the former. If \u03b1v is much greater than unity, gravitational force would be too feeble to hold the cloud together long enough to dissipate the excess energy to allow for further consistent gravitational contraction in the presence of internal and external disruptive force of turbulence. Thus, the observed clouds with \u03b1v greatly exceeding unity that are products of supersonic turbulence are likely transient in nature. Nonetheless, they may be useful for some physical analysis. They may be considered good candidates for analyses where a statistical equilibrium is a useful assumption. At the other end, when \u03b1v is close to unity, gravitational collapse of a cloud may ensue, detaching it from the parent CBTC. However, as noted in Figure (2), one should exercise caution to treat clouds with an apparent \u03b1v less than unity that may not be genuinely coherent gravitational entities, ready to run away and collapse. We shall not delve into this further but note that these apparently over-virialized clouds may not possess the usual gravitationally induced density strati\ufb01cation and may lack a coherent structure (such as a well de\ufb01ned center). \f8 Figure 2. shows cloud mass as a function of \u03b1v for the Galactic disk clouds (open red circle) and Galactic center clouds (open black squares). The two vertical lines indicate clouds with \u03b1v = 1 and 2, respectively, for reference. 3. FRACTAL DIMENSION OF THE ISM Using Eq (1) and Eq (4), we may express the cloud density-size relation: nR = \u03b1\u22123/5 v 153 44\u03c0 \u03c32 pc Gmp(1pc)2( R 1pc)\u22124/5 = 1.0 \u00d7 104\u03b1\u22123/5 v ( \u03c3pc 1 km/s)2( R 1pc)\u22124/5 cm\u22123, (12) where nR is the mean hydrogen number density within radius R and mp is proton mass. 
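The exponent map of Eq (11) and the density coefficient in Eqs (5) and (12) are simple enough to check numerically; a short sketch (illustrative only) is:

    import numpy as np

    G = 6.674e-8        # cm^3 g^-1 s^-2
    M_P = 1.6726e-24    # g
    PC = 3.086e18       # cm
    KMS = 1.0e5         # cm/s

    def larson_exponent(phi):
        # Eq (11): sigma_R ~ alpha_v^(1/5) * R^((3*phi+2)/5) for a cascade with v_R ~ R^phi
        return (3.0 * phi + 2.0) / 5.0

    def n_R(R_pc, alpha_v=1.0, sigma_pc_kms=1.0):
        # Eq (12): mean hydrogen number density within R, in cm^-3
        coeff = (153.0 / (44.0 * np.pi)) * (sigma_pc_kms * KMS) ** 2 / (G * M_P * PC ** 2)
        return coeff * alpha_v ** (-0.6) * R_pc ** (-0.8)

    # Kolmogorov, Burgers, Iroshnikov-Kraichnan -> 0.60, 0.70, 0.55
    print([larson_exponent(p) for p in (1.0 / 3.0, 1.0 / 2.0, 1.0 / 4.0)])
    # coefficient of Eqs (5) and (12): ~1.0e4 cm^-3 at R = 1 pc, alpha_v = 1, sigma_pc = 1 km/s
    print(n_R(1.0))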
Then, the size-cloud mass relation follows: MR = 2.6 \u00d7 102\u03b1\u22123/5 v ( \u03c3pc 1 km/s)2( R 1pc)11/5 M\u2299. (13) Since \u03b1v and R are uncorrelated, for clouds generated by a same CBTC (a \u03c3pc with dispersion), we see that MR \u221dR11/5. This mass-size relation with a slope of 2.2 is in excellent agreement with observed value of 2.2 \u00b1 0.1 (Heyer et al. 2001), and 2.36 \u00b1 0.04 (Roman-Duval et al. 2010). There are many different techniques used to measure cloud mass and size. We stress that the size-mass relation depends on how clouds are de\ufb01ned or selected. For the same reason that the original Larson\u2019s size-velocity dispersion relation has an exponent of 1/2, the original Larson\u2019s sizemass relation has an exponent of 2. Both are due to a small surface density range of the clouds (e.g., Beaumont et al. 2012). The exponent in Eq (13) expresses the size-mass relation for clouds at a \ufb01xed virial parameter. \f9 In the context of a fractal, self-similar structure, which may approximate the ISM reasonably well, Eq (13) indicates that the fractal dimension of the ISM is D = 2.2 (Mandelbrot 1983) with the implied size function of the form n(L)dL \u221dL\u2212D\u22121dL \u221dL\u221216/5dL. (14) The slope 16/5 in Eq (14) is in excellent agreement with the observed value of 3.2\u00b10.1 for CO detected molecular clouds in the Milky Way spanning the range of \u223c1 \u2212100pc (Heyer et al. 2001). The fractal dimension of the ISM of D = 2.2 corresponds to density power spectrum of Pk \u221dkD\u22123 \u221d k\u22120.8. It is helpful to have an intuitive visualization of this outcome. In the process of energy transmitting downward along the spatial/mass scale via supersonic motion, shocks and radiative cooling, density structure (density \ufb02uctuation spectrum) is generated. In three dimensional space, an ideal, long and uniform \ufb01lament will have a density power spectrum Pk \u221dk\u22121 on scales below the length of the \ufb01lament. Similarly, a uniform sheet corresponds to Pk \u221dk\u22122, whereas a point corresponding to a density power spectrum of Pk = k0. In absence of self-gravity, compressive supersonic turbulence with suf\ufb01cient cooling has the tendency to form \ufb01laments where two planar shocks intersect. In realistic situations with self-gravity, \ufb01laments have varying lengths and the actual density power spectrum is expected to deviate somewhat from this, depending on the nature of driving and energy distribution of the driving, and the power spectrum is in general Pk \u221dk\u2212\u03b2 with \u03b2 < 1. Nevertheless, as long as the energy in the turbulence is dominated on the large scales, \u03b2 is not likely to be much less than unity. Thus, we see that the Kolmogorov compressive turbulence generated, gravitationally signi\ufb01cant structures, in the presence of rapid radiative cooling, have a density structure that is dominated by \ufb01lamentary structures with a small mixture of knots. 4. ESTIMATE \u03c3PC FOR VISCOUSLY DRIVEN TURBULENCE In the normal situation where star formation occurs on a disk, it is reasonable to assume that the radius of the largest turbulence \u201ccloud\", which will be the driving scale of the CBTC, is equal to the scale height of the disk for isotropic turbulence. 
This driving scale, Rd, can be expressed as Rd = CRg\u03c32 d(Rg) v2 c(Rg) , (15) where \u03c3d(Rg) is the velocity dispersion on the driving scale Rd at a galacto-centric radius Rg, which is also the vertical dispersion, vc(Rg) is the circular velocity at radius Rg, and C is a constant of order unity to absorb uncertainty. We shall assume that the energy source is the rotational energy at the location, where the turbulence may be driven by some viscous processes on the disk. With such an assertion, one can relate \u03c3d to Rd by \u03c3d = 2BRd\u2126(Rg), (16) where \u2126(Rg) is the angular velocity at the radius Rg for a Mestel disk that we will adopt as a reasonable approximation, and B is another constant of order unity to absorb uncertainty. For a gas cloud (assumed to be uniform) of radius Rd, we can express the virial parameter by \u03b1d = 3\u03c32 d(Rg) 3 5 GMd Rd = 15\u03c32 d 4\u03c0G\u03c1dR2 d , (17) where \u03c1d is the gas density at the driving scale. With Eq (15,16,17) we can compute \u03c3pc using Eq (4); \u03c3pc = 0.44 km/s( D 2.3)\u22121/5( \u03a3d 5 M\u2299pc\u22122)1/5( vc 220 km/s)3/5( Rg 8kpc)\u22122/5, (18) \f10 where we have de\ufb01ned another constant D \u2261B/C. Eq (18) is expressed such that if the \ufb01ducial values are taken, we obtain \u03c3pc = 0.44 km/s for disk clouds center near the solar radius, as derived earlier (see Eq 9). Aside from the unknown combination of D, all other \ufb01ducial values are well observed, including the gas surface density of 5 M\u2299pc\u22122 (e.g., Sofue 2017). Interestingly, if we use the same D = 2.9 value along with the relevant values for other parameters for the Galactic center, \u03a3d = 30 M\u2299pc\u22122 (Sofue 2017), Rg = 500pc (within which the Galactic center clouds are observed), vc = 250 km/s (Sofue 2017), we obtain \u03c3pc = 2.1 km/s, larger than the value of 1.08 km/s \u00b1 0.01dex, derived for the clouds at the Galactice center (see Eq 9). Although the expectation that \u03c3pc at the Galactic center is larger than that on the Galactic disk is in agreement with the derived values, the numerical discrepancy may be due to a number of causes. It may be in part due to different observational systematics for disk clouds and center clouds. It may be in part due to that the treatment of the central region of the Galaxy as a disk breaks down or that the effective viscosity in the two regions are different. It is notable that our simple calculations do not require participation of some other physical processes that might be relevant, including magnetic \ufb01eld, stellar feedback. While this is not a vigorous proof of the veracity of our assumptions, the found agreement between the predicted \u03c3pc and the directly calculated value for the Galactic center clouds is a validation of our basic assumptions and the resulting outcomes, that is, turbulence and gravity play a dominant role in shaping the interstellar medium and the formation of clouds down to at least the sonic scale. 5." + }, + { + "url": "http://arxiv.org/abs/2001.11083v1", + "title": "Physics of Prodigious Lyman Continuum Leakers", + "abstract": "An analysis of the dynamics of a star formation event is performed. 
It is\nshown that galaxies able to drive leftover gas to sufficient altitudes in a few\nmillion years are characterized by two basic properties: small sizes (<1kpc)\nand high star formation rate surface densities (Sigma_SFR > 10 Msun/yr/kpc2).\nFor the parameter space of relevance, the outflow is primarily driven by\nsupernovae with radiation pressure being significant but subdominant. Our\nanalysis provides the unifying physical origin for a diverse set of observed\nLyC leakers, including the green-peas galaxies, [SII]-weak galaxies,\nLyman-alpha emitters, with these two characteristics as the common denominator.\nAmong verifiable physical properties of LyC leakers, we predict that (1) the\nnewly formed stellar masses are are typically in the range of 1e8-1e10 Msun,\nexcept perhaps ULIRGs, (2) the outflow velocities are typically in the range\ntypically of 100-600km/s, but may exceed 1e3 km/s in ULIRGs, with a strong\npositive correlation between the stellar masses formed and the outflow\nvelocities, (3) the overall escape fraction of galaxies is expected to increase\nwith increasing redshift, given the cosmological trend that galaxies become\ndenser and more compact with increasing redshift. In addition, two interesting\nby-product predictions are also borne out. First, ULIRGs appear to be in a\nparameter region where they should be prodigious LyC leakers, unless there is a\nlarge ram-pressure. Second, Lyman break galaxies (LBGs) are not supposed to be\nprodigious LyC leakers in our model, given their claimed effective radii\nexceeding 1kpc.", + "authors": "Renyue Cen", + "published": "2020-01-29", + "updated": "2020-01-29", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction Understanding how Lyman continuum photons (LyC) escape from galaxies is necessary for understanding the epoch of reionization (EoR), one of the last major frontiers of astrophysics. High resolution cosmological hydrodynamic galaxy formation simulations have widely evidenced that supernova feedback driven blastwaves are the primary facilitator to evacuate or create major pores in the interstellar medium to enable the escape of LyC (e.g., Wise & Cen 2009; Kimm & Cen 2014; Cen & Kimm 2015; Ma et al. 2016; Kimm et al. 2019). Since LyC escape is not directly measurable 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:2001.11083v1 [astro-ph.GA] 29 Jan 2020 \f\u2013 2 \u2013 at EoR due to its limited mean free path, it is imperative to ascertain this unknown by establishing observable proxies for the escape fraction, fesc, when both proxies and fesc are measurable at lower redshift, based upon a satisfactory physical understanding. Observationally, in the low-z (z < 0.4) universe the majority of galaxies with large fesc values turn out to belong to the compact, so-called green-peas galaxies from the SDSS sample, characterized by their low stellar masses, low metallicities, very strong nebular emission-lines (H\u03b2 equivalent widths > 200\u02da A) and very high \ufb02ux ratios of [OIII]5007/[OII]3727 > 5 (e.g., Schaerer et al. 2016; Izotov et al. 2016a,b, 2018a,b, 2019). Interestingly, the green-peas galaxies have star-formation rate surface densities of 10 \u2212100 M\u2299yr\u22121kpc\u22122, which are much higher than typical star-forming galaxies in the local universe but may be similar to those at EoR. 
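Since the discussion that follows is organized around the two observables r_h and SFR, it is convenient to note the star-formation-rate surface density they imply; a trivial helper (illustrative only, adopting the common convention Sigma_SFR = SFR/(pi r_h^2)) is:

    import numpy as np

    def sfr_surface_density(sfr_msun_yr, r_half_kpc):
        # Msun/yr/kpc^2 within the half-light radius
        return sfr_msun_yr / (np.pi * r_half_kpc ** 2)

    # e.g. a green-pea-like burst with SFR = 10 Msun/yr inside r_h = 0.3 kpc (assumed numbers)
    print(sfr_surface_density(10.0, 0.3))   # ~35 Msun/yr/kpc^2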
Another class of low redshift galaxies that have high LyC escape fraction is identi\ufb01ed by their high Ly\u03b1 emission (e.g., Verhamme et al. 2015, 2017), which typically have star-formation rate surface densities of \u223c10 M\u2299yr\u22121kpc\u22122. At z \u223c3 LyC escape is detected in dozens of individual galaxies (e.g., Mostardi et al. 2015; Vanzella et al. 2016; Shapley et al. 2016; Steidel et al. 2018), some of which also show intense [OIII] emission that are consistent with low-z observations and characteristic of galaxies at EoR (e.g., Fletcher et al. 2019). Furthermore, recently, another set of galaxies with relatively weak [SII] nebular emission lines are also observed to show high LyC escape (Wang et al. 2019). The low-redshift green-peas galaxies, z \u223c3 high LyC leakers and the [SII]-weak LyC leaking galaxies are di\ufb00erent in various respects, such as stellar mass, metallicity, dust content and ISM properties. But all appear to share two common characteristics: all four have very high star-formation rate surface densities and relatively compact sizes. This Letter aims to understand if supernova feedback may be the common physical process that underwrites the commonality shared by these di\ufb00erent classes of galaxies observed. We will show that this is indeed the case. This \ufb01nding thus provides a physical basis to help identify galaxies with high LyC leakage at the epoch of reionization by indirect but robust markers that can be established at more accessible redshift and for why dwarf galaxies at EoR are much more capable of enabling high LyC escape fraction than typical low redshift counterparts. 2. Physics of Lyman Continuum Leakers We explore if gas density-bound structures in star forming galaxies may be produced. The following treatment is undoubtedly simpli\ufb01ed but capture the essence of the physics, and is primarily a means to identify likely physical parameter space that is relevant for making galaxies with high LyC escape fractions. A gas cloud of initial mass Mgas,0 with a half light radius rh and a star formation rate SFR gives rise to an outward radial force on the gas cloud itself due to supernova explosion generated radial momentum equal to FSN = SFR \u00d7 pSN \u00d7 M \u22121 SN, (1) where pSN = 3 \u00d7 105 M\u2299km/s is the terminal momentum generated per supernova (e.g., Kimm & Cen 2014), MSN is the amount of stellar mass formed to produce one supernova, which is equal to \f\u2013 3 \u2013 about (50, 75, 100) M\u2299for (Chabrier, Kroupa, Salpeter) initial mass function (IMF), respectively. The exact value of pSN weakly depends on density and metallicity of the ambient gas. For simplicity without loss of validity given the concerned precision of our treatment, we use the above \ufb01ducial value. Another mechanical form of feedback from massive stars is fast stellar winds due to O stars. The total energy from stellar winds is about a factor of ten lower than the total energy from supernovae (e.g., Leitherer et al. 1999). Since stellar winds roughly track core collapse supernovae, we simply omit stellar winds bearing a loss of accuracy at 10% level. 
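A minimal sketch of Eq (1) in physical units (illustrative only; a fuller force-budget sketch combining Eqs 2-5 follows Figure 1 below) reads:

    MSUN = 1.989e33      # g
    KMS = 1.0e5          # cm/s
    YR = 3.156e7         # s
    P_SN = 3.0e5         # terminal momentum per SN [Msun km/s]
    M_SN = {"Chabrier": 50.0, "Kroupa": 75.0, "Salpeter": 100.0}   # Msun formed per SN

    def f_sn(sfr_msun_yr, imf="Kroupa"):
        # Eq (1): supernova momentum-injection force, returned in dynes
        momentum_flux = sfr_msun_yr * P_SN / M_SN[imf]     # Msun km/s per yr
        return momentum_flux * MSUN * KMS / YR

    print(f_sn(10.0))   # e.g. SFR = 10 Msun/yr with a Kroupa IMF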
The second important outward force on the gas is the radiation pressure on dust grains, equal to Frad = SFR \u00d7 \u03b1 \u00d7 c [1 \u2212exp (\u2212\u03a3gas\u03baUV )] (1 + \u03a3gas\u03baFIR), (2) where \u03b1 = 3.6 \u00d7 10\u22124 is an adopted nuclear synthesis energy conversion e\ufb03ciency from rest mass to radiation, c speed of light, \u03baUV = 1800cm2 g\u22121 and \u03baFIR = 20cm2 g\u22121 the opacity at UV (e.g., Draine 2003) and dust processed radiation far infrared (FIR) radiation (e.g., Lenz et al. 2017), respectively, \u03a3gas the surface density of the gas. The exact value of \u03baUV matters little in the regime of interest but variations of the value of \u03baFIR does matter to some extent. To place the two forces in relative terms, we note that at \u03a3gas = 1.3 \u00d7 104 M\u2299pc\u22122, the radiation pressure due to IR photons equals the ram pressure due to supernova blastwaves, with the former and latter dominating at the higher and lower surface densities, respectively. There are two relevant inward forces. The mean gravitational force, when averaged over an isothermal sphere, which is assumed, is Fg = ln(rmax/rmin)GMgas,0Mgas(t) 4r2 h , (3) where rmin to rmax are minimum and maximum radii of the gas cloud being expelled. For our calculations below we adopt rmin = 100pc and rmax = rh; the results depend weakly on the particular choices of these two radii. We note that Mgas(t) is the remaining mass of the gas cloud when it starts to be lifted at time tL by the combined force of supernova driven momentum \ufb02ux and radiation momentum \ufb02ux against inward forces, with Mgas,0 \u2212Mgas(tL) having formed into stars. Another inward force is that due to ram pressure, which we parameterize in terms of gas infall rate in units of star formation rate: Frp = \u02d9 Minfvinf = \u03b7 \u00d7 SFR \u00d7 \u0012GMgas,0 rh \u00131/2 , (4) where \u03b7 is the ratio of mass infall rate to SFR. The relevant physical regime in hand is how to drive the gas by the combined force of supernovae and radiation against the combined force of gravitational force and ram-pressure. A key physical requirement, we propose, is that the feedback process needs to promptly lift the entire remaining gas cloud to a su\ufb03cient height such that it piles itself into a (thin) shell that subsequently fragments while continuing moving out, in order to make a copious LyC leaker. It seems appropriate to de\ufb01ne \u201ca su\ufb03cient height\u201d as a height on the order of rh, which we simplify to be just rh. The above \f\u2013 4 \u2013 de\ufb01nition may be expressed as (FSN + Frad \u2212Fg \u2212Frp)(th \u2212tL) = Mgas(tL)vh and 2rh = (th \u2212tL)vh, (5) where vh is the shell velocity when reaching rh at time th, and Mgas(tL) is the gas cloud mass at tL when it begins its ascent. 0 1 2 3 log t (Myr) -5 -4 -3 -2 log NccSN (yr-1) per SFR=1Msun/yr from t=0 Fig. 1.\u2014 shows the supernova rate for a star formation event at a star formation rate of 1 M\u2299/yr starting at time t = 0 as a function of time. This plot is produced using Eq (A.2) of Zapartas et al. (2017) of the core-collapse supernova rate including both single stars and binary mergers. We note that at t \u2265200Myr the saturation rate corresponds to one supernova per 78 M\u2299of stars formed, approximately in agreement with what a Kroupa IMF gives. 
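The force budget of Eqs (1)-(4) and the lift condition of Eq (5) can be combined into a schematic calculation as follows. This sketch is not the integration actually used to produce the contours discussed next: it holds the supernova rate at its saturated value (one SN per 75 Msun rather than the time-dependent Zapartas et al. 2017 rate), treats all forces as constant during the lift, does not distinguish M_gas,0 from M_gas(t_L), and assumes Sigma_gas = M_gas/(pi r_h^2); the function and variable names are its own.

    import numpy as np

    G = 6.674e-8; C_LIGHT = 2.998e10                     # cgs
    MSUN = 1.989e33; PC = 3.086e18; KMS = 1.0e5
    YR = 3.156e7; MYR = 1.0e6 * YR

    P_SN = 3.0e5 * MSUN * KMS      # terminal momentum per SN [g cm/s]
    M_SN = 75.0 * MSUN             # stellar mass formed per SN (saturated Kroupa value)
    ALPHA = 3.6e-4                 # rest-mass-to-radiation efficiency
    K_UV, K_FIR = 1800.0, 20.0     # opacities [cm^2/g]

    def lift_time(m_gas_msun, r_h_pc, sfr_msun_yr, eta=0.0):
        # returns (time in Myr to push the remaining gas to r_h via Eq 5, shell speed in km/s);
        # valid for r_h > 100 pc, the r_min adopted in Eq (3)
        m_gas = m_gas_msun * MSUN
        r_h = r_h_pc * PC
        sfr = sfr_msun_yr * MSUN / YR
        sigma_gas = m_gas / (np.pi * r_h ** 2)                       # assumed gas surface density
        f_sn = sfr * P_SN / M_SN                                     # Eq (1)
        f_rad = sfr * ALPHA * C_LIGHT * (1.0 - np.exp(-sigma_gas * K_UV)) \
                * (1.0 + sigma_gas * K_FIR)                          # Eq (2)
        f_grav = np.log(r_h / (100.0 * PC)) * G * m_gas ** 2 / (4.0 * r_h ** 2)   # Eq (3)
        f_ram = eta * sfr * np.sqrt(G * m_gas / r_h)                 # Eq (4)
        f_net = f_sn + f_rad - f_grav - f_ram
        if f_net <= 0.0:
            return np.inf, 0.0                    # gravity plus ram pressure wins; no lift
        dt = np.sqrt(2.0 * r_h * m_gas / f_net)   # Eq (5): f_net*dt = m*v_h and 2*r_h = v_h*dt
        return dt / MYR, (2.0 * r_h / dt) / KMS

    # illustrative compact starburst: 3e8 Msun of gas inside r_h = 0.3 kpc forming stars at 30 Msun/yr
    print(lift_time(3.0e8, 300.0, 30.0, eta=0.0))   # lift in ~1 Myr at ~500 km/s

In this simplified form the supernova term dominates the radiation term by a factor of several, consistent with the statement that radiation pressure is significant but subdominant over most of the relevant parameter space.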
We may relate the initial gas mass Mgas,0 to SFR that is observable by using an empirically found relation: SFR = c\u2217Mgas,0/tdyn, (6) where G is the gravitational constant, tdyn = q 3\u03c0 32G\u03c1t is the dynamical time of the system with \u03c1t being the total density, the sum of gas and stars within rh, star formation e\ufb03ciency per dynamical time is found to be c\u2217= 0.01 (Krumholz et al. 2012). Note that the SFR above is the SFR up to the time tL, when it is shut down upon the uplift of the gas cloud. \f\u2013 5 \u2013 We compute the rate of supernova explosion more precisely. This is needed because as soon as the combined outward force of supernova feedback and radiation pressure is stronger than inward forces at time tL, we need to stop star formation then. It is possible in some cases that the star formation has not lasted long enough to reach the saturation supernova rate. We use a recent, comprehensive analysis of Zapartas et al. (2017) that takes into account both single and binary stellar populations, including supernovae due to binary mergers. We convolve the \ufb01tting formula (A.2) in Zapartas et al. (2017) that is composed of three separate temporal segments, 3 \u221225Myr and 25 \u221248Myr due to massive single stars and 448 \u2212200Myr due to binary merger produced corecollapse supernovae, with a constant star formation rate SFR (Eq 6) starting at time t = 0. Figure (1) shows the resulting instantaneous supernova rate as a function of time for a star formation event at a constant SFR = 1 M\u2299/yr. Then, MSN(t) = 1 M\u2299/NccSN (where NccSN is the y-axis shown in Figure 1) as a function of time since the start of the starburst, in lieu of a constant value of MSN that is the saturation value at t \u2265200Mpr, in Eq (1), where appropriate. We note that at t \u2265200Myr the saturation rate corresponds to one supernova per 78 M\u2299of stars formed, approximately corresponding to a Kroupa IMF. Figure (2) shows the results by integrating Eq (5). The solid red contours labelled in units of Myr shows the time, th \u2212tL, which it takes to drive the gas to an altitude of rh. Earlier we have mentioned the need of \u201cpromptly\u201d driving the gas away, which we now elaborate. For any starburst event, massive O stars formed that dominate the LyC radiation die in about 5Myr. Therefore, the time elapsed since the end of the starburst of the observed prodigious LyC leakers should not be longer than that time scale, i.e., th \u2212tL \u22645Myr. Comparing the th \u2212tL = 5Myr contour with the black solid triangles indicates that all the observed LyC leakers lie in the parameter region with th \u2212tL \u22645Myr, except J0921 with rh = 0.78kpc, SFR = 7.68 M\u2299yr\u22121 and M\u2217= 6.3 \u00d7 1010 M\u2299 and J0926 with rh = 0.69kpc, SFR = 3.47 M\u2299yr\u22121 and M\u2217= 1.3 \u00d7 109 M\u2299(Alexandro\ufb00et al. 2015). In the entire region of possible prodigious LyC leakers we point out that the outward force is dominated by supernova driven momentum, although in a thin top-left wedge region the radiation pressure alone is also able to counter the gravity. It is very clear that all LyC leakers live in a parameter space generally denoted as Lyman alpha emitters, as indicated by the large, cyan-shaded region (e.g., Gawiser et al. 2007; Bond et al. 2009). However, it is also clear that not all LAEs are LyC leakers, as noted by the magenta dots that are observed to be LyC non-leakers. 
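The construction behind Figure 1, convolving a core-collapse delay-time distribution with a constant star-formation history, can be sketched as below. The actual curve uses the fitting formula (A.2) of Zapartas et al. (2017), whose coefficients are not reproduced here; the placeholder distribution below is schematic (uniform between 3 and 200 Myr, normalized to one SN per 78 Msun formed) and serves only to show the convolution step.

    import numpy as np

    def dtd_placeholder(t_myr):
        # schematic core-collapse delay-time distribution [SNe per Msun per Myr]:
        # no events before 3 Myr, events spread uniformly to 200 Myr,
        # integrating to 1 SN per 78 Msun of stars formed
        t = np.asarray(t_myr, dtype=float)
        return np.where((t >= 3.0) & (t <= 200.0), 1.0 / (78.0 * 197.0), 0.0)

    def sn_rate_constant_sfr(t_myr, sfr_msun_yr, dtd=dtd_placeholder):
        # instantaneous SN rate [yr^-1] at time t after the burst begins:
        # R(t) = SFR * integral_0^t DTD(t') dt'
        tau = np.linspace(0.0, t_myr, 2000)
        dt = tau[1] - tau[0]
        return sfr_msun_yr * np.sum(dtd(tau)) * dt

    print(sn_rate_constant_sfr(10.0, 1.0))    # early in the burst
    print(sn_rate_constant_sfr(300.0, 1.0))   # saturated at 1/78 per yr for SFR = 1 Msun/yr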
We interpret this as that the gas being lifted by the supernova driven momentum is fragmentary such that obscuration or transparency of the LyC sources are sightline dependent even when the gas cloud as a whole is expelled to a high altitude. We note that, if one includes binary evolution e\ufb00ects, such as merger produced blue stragglers or stripped hot helium stars, additional O stars like stars will emerge with some delay of order 10Myr. Each of these two delayed components may mount to about 10% of LyC photons produced by initial starburst (Eldridge et al. 2017). This may be a signi\ufb01cant addition of LyC sources. Nevertheless, given the closely spaced red contours in Figure (2), we see that none of our conclusions will be signi\ufb01cantly altered, if we use th \u2212tL = 10Myr instead of th \u2212tL = 5Myr. In the right-side, large gulf region occupying about one half of the plot area, gravity dominates over the combined outward force of supernova explosion driven momentum \ufb02ux and radiation pressure. In this region, no complete lift-up of gas to rh is possible regardless of the duration of \f\u2013 6 \u2013 Fig. 2.\u2014 shows the time that it takes to evacuate the gas to an altitude of rh, th\u2212tL, as the solid red contours labelled in units of Myr, with labels \u201c1\u201d, \u201c3\u201d, \u201c5\u201d, \u201c10\u201d, \u201c30\u201d and \u201c100\u201d. Shown as dotted black contours are the log of the stellar mass in units of M\u2299formed from this episode, with labels \u201c8\u201d, \u201c9\u201d, \u201c10\u201d and \u201c11\u201d. The dashed blue contours depict the radial velocity of gas being lifted in units of km/s, with labels \u201c10\u201d, \u201c30\u201d, \u201c100\u201d, \u201c300\u201d and \u201c600\u201d. The shaded light blue, light green, light red and dark blue regions indicate approximately regions normally referred to Lyman alpha emitters (LAEs), Lyman break galaxies (LBGs) at high redshift, ultra luminous infrared galaxies (ULIRGs), and z \u223c1 star-forming but non-LyC leaking dwarf galaxies, respectively. The LAE region is obtained by using a radius range of 0.1 \u22121.4kpc and a range of SFR of 1 \u2212100 M\u2299yr\u22121 (e.g., Gawiser et al. 2007; Bond et al. 2009). The LBG region is obtained by using a radius range of 1.2 \u22122.5kpc and a range of SFR of 5 \u2212100 M\u2299yr\u22121 (Giavalisco 2002). The ULIRG region is approximately delineated by a radius range of 0.1\u22121.5kpc and a range of SFR of 120\u22121200 M\u2299yr\u22121 (e.g., Spence et al. 2018). The location of the Milky Way galaxy is indicated by a black star near the lower-right corner. The sample of star-forming but non-LyC leaking dwarf galaxies at z \u223c1 with SFR < 10 M\u2299yr\u22121 (Rutkowski et al. 2016) is the blue shaded region labelled as \u201cz \u223c1 SFR < 10 M\u2299/yr\u201d. Finally, the observed galaxies with large LyC escape fractions are shown as black downward-pointing triangles from various sources (e.g., Alexandro\ufb00et al. 2015; Izotov et al. 2016a,b, 2018a,b; Wang et al. 2019), where some galaxies known as LAEs but with little LyC escape are shown as solid magenta dots (Alexandro\ufb00et al. 2015). In all cases for M\u2217, rh and SFR of observed LyC leakers and non-leakers we use updated values from Wang et al. (2019). the star formation episode. 
This region contain the blue shaded region labelled as \u201cz \u223c1 SFR \f\u2013 7 \u2013 < 10 M\u2299/yr\u201d, which is a sample of star-forming dwarf galaxies at z \u223c1 with SFR < 10 M\u2299yr\u22121 that do not show signi\ufb01cant LyC leakage (Rutkowski et al. 2016). The fact that this region lies in the region of the parameter space that is part of the LAE region and indeed is expected not to have large LyC escape is quite remarkable, because the author was not aware of this data set until was brought attention to it by the referee. Also in the large gulf region are the LBGs, as indicated by the green-shaded region (Giavalisco 2002), suggesting that LBGs are not likely to be copious LyC leakers. However, recent observations (Steidel et al. 2018) indicate a mean fesc = 0.09 \u00b1 0.01 for a subsample of LBGs. This directly contradicts our conclusions. One possible way of reconciliation is that the observed e\ufb00ective radii of LBGs in UV may be over-estimates of the e\ufb00ective radii of the star-forming regions; if the actual radii of star-forming regions are in the range of 300\u2212500pc, LBGs would be located in the region of LyC leakers. Alternatively, star-forming regions of LBGs may be composed of much more compact sub-regions. While not direct proof, it is intriguing to note that Overzier et al. (2009) \ufb01nd that the three brightest of their sample of thirty galaxies low-redshift analogs of LBGs at z = 0.10.3 that they examine in detail indeed have very compact sizes, with e\ufb00ective radii no larger than 70 \u2212160 pc. Thus, it would be signi\ufb01cant to carry out high resolution FIR observations, such as by ALMA, of LBGs to verify if the total star-forming regions are in fact more compact. As another example, the star near the bottom-right is the location of the Milky Way, which is also inside the LyC non-leaker region. So our Galaxy is unlikely to be a very good LyC leaker for an extragalactic observer. On the other hand, a class of very luminous galaxies ULIRGs occupies a region that may straddle the LyC leaker and non-leaker region. ULIRGs are in a special region of the parameter space. It is known that ULIRGs are copious FIR emitters, not known to be LyC leakers. We suggest that ULIRGs may belong to a class of its own, where ram-pressure due to gas infall may have helped con\ufb01ne the gas to (1) make them LyC non-leakers and (2) allow for star formation to proceed over a much longer period than indicated by the red contours, despite the strong outward momentum \ufb02ux driven by ongoing star formation. One way to test this scenario is to search for redshifted 21 cm absorption lines in ULIRGs, if suitable background radio quasars/galaxies or intrinsic central radio quasars/galaxies or possible other bright radio sources. Nevertheless, we would like to point out that ULIRGs should vary as well. Imagine a merger or other signi\ufb01cant event drives an episode of cold gas in\ufb02ow. The episode spans a period and the starburst triggered goes from the initial phase of buildup when the star formation rate is extremely subdominant to the in\ufb02ow gas rate. An estimate of possible gas in\ufb02ow rates is in order to illustrate the physical plausibility of this scenario. Let us assume that a merger of two galaxies each of halo mass of 1012 M\u2299and gas mass 1.6 \u00d7 1011 M\u2299triggers a ULIRG event and that 10% of the total gas mass falling onto the central region of size 1kpc at a velocity of 300 km/s. 
Then we obtain a gas infall rate of \u02d9 Min = 1.0\u00d7104 M\u2299yr\u22121, which would correspond yield \u03b7 = (100, 10) for SFR equal to (100, 1000) M\u2299yr\u22121, respectively. In Figure (3) we see that, once the infall rate drops below about 30 times the SFR, gas in ULIRGs would be lifted up by supernovae. This leads to a maximum SFR in ULIRGs that is estimated as follows using this speci\ufb01c merger example. During the buildup phase of the ULIRG, since the gas infall rate exceeds greatly the SFR, one can equate the gas mass to the total dynamical mass. Thus, we have SFR = c\u2217Mgas[r/(GMgas/r)1/2]\u22121. Equating \u03b7SFR (with \u03b7 = 30) to \u02d9 Min, we \ufb01nd the amount of gas accumulated at the maximum gas mass is \f\u2013 8 \u2013 Mgas,max = 6.3 \u00d7 1010 M\u2299, corresponding to a maximum SFR SFRmax = 330 M\u2299yr\u22121 in this case. Thus, our analysis indicates that the physical reason for an apparent maximum SFR in ULIRGs and SMGs may be due to a competition between the maximum ram-pressure con\ufb01nement of gas and internal supernovae blastwave and radiation pressure. This contrasts with and calls into question the conventional view of radiation-pressure alone induced limit on maximum SFR (e.g., Thompson et al. 2005). We deferred a more detailed analysis on this subject to a separate paper. At a later point in time it may be transitioned su\ufb03ciently rapidly, at least for a subset of ULIRGs, to a phase that is ubiquitous in out\ufb02ows. Some ULIRGs at this later phase may become signi\ufb01cant LyC leakers, if and when the gas in\ufb02ow rate drops below about 10 times SFR, as shown in Figure (3) by varying the ram-pressure (the \u03b7 parameter, see Eq 4). This new prediction is in fact consistent with some observational evidence that shows signi\ufb01cant Ly\u03b1 and possibly LyC escape fractions in the advanced stages of ULIRGs (e.g., Martin et al. 2015). These ULIRGs also seem to show blueshifted out\ufb02ow. It ought to be noted that their measured fesc is relative to the observed FUV luminosity (i.e., the unobscured region) but not relative to the total SFR, which is di\ufb03cult to measure. Thus, the escape of LyC in these late stage ULIRGs is a relative statement compared to ULIRGs that are ram-pressure con\ufb01ned and are not LyC leakers in the sense that, although in the former the stellar feedback processes may be able to lift gas up, likely still substantial in\ufb02ow gas may be able to continue to provide a large amount of obscuring material, albeit less than at earlier phase with a stronger ram-pressure con\ufb01nement and heavier obscuration. Let us now turn to the black dotted contours showing the log of the stellar mass in units of M\u2299formed from this episode presumably triggered by a gas accretion event. Two points are worth noting here. First, in the region where LyC leakers are observed, the expected stellar mass formed in a single star formation episode is in the range of 108 \u22121010 M\u2299. The observed green-peas galaxies (e.g., Schaerer et al. 2016; Izotov et al. 2016a,b, 2018a,b, 2019) have stellar masses indeed falling in this range. This suggests that a large fraction or all of the stars in green-peas galaxies may be formed in this most recent star formation episode. However, some of the [SII]-weak selected galaxies have stellar masses signi\ufb01cantly exceeding 1010 M\u2299(Wang et al. 2019). 
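The maximum-SFR estimate just outlined is compact enough to reproduce directly; a minimal sketch (illustrative; it also re-derives the inflow rate of the worked merger example) is:

    import numpy as np

    G = 6.674e-8; MSUN = 1.989e33; PC = 3.086e18; YR = 3.156e7; KMS = 1.0e5
    KPC = 1.0e3 * PC

    def max_sfr(mdot_in_msun_yr, r_kpc, eta=30.0, c_star=0.01):
        # solve eta*SFR = Mdot_in with SFR = c_star*Mgas/t_dyn and t_dyn = r/sqrt(G*Mgas/r)
        # (gas dominating the dynamical mass during the buildup phase)
        r = r_kpc * KPC
        mdot_in = mdot_in_msun_yr * MSUN / YR
        m_gas = (mdot_in * r ** 1.5 / (eta * c_star * np.sqrt(G))) ** (2.0 / 3.0)
        t_dyn = r / np.sqrt(G * m_gas / r)
        sfr = c_star * m_gas / t_dyn
        return m_gas / MSUN, sfr * YR / MSUN

    # inflow rate of the worked example: 10% of 3.2e11 Msun of gas falling 1 kpc at 300 km/s
    mdot_in = 0.1 * 3.2e11 / (KPC / (300.0 * KMS) / YR)
    print(mdot_in)              # ~1e4 Msun/yr
    print(max_sfr(1.0e4, 1.0))  # -> (~6e10 Msun, ~330 Msun/yr)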
We suggest that in those cases a large fraction of the stars are formed in previous star formation episodes and spatially more extended than the most recent episode. In both cases green-peas galaxies and [SII]-weak galaxies given the central concentration of this most recent star formation episode, it is likely triggered by a low-angular momentum gas in\ufb02ow event. It would be rewarding to searches for signs of such a triggering event, such as nearby companions or post merger features. Second, there are discontinuities of the contour lines going from the gravity-dominated lower-right region to the outward force dominated upper-left region. This is because, while the gas forms to stars unimpeded in the former, a portion of the gas is blown away in the latter. Finally, let us turn our attention to the velocity of the gas moving out, as indicated by the blue dashed contours. We see that the outward velocity is in the range of 100 \u2212600 km/s. This is a prediction that can be veri\ufb01ed by observations when a reasonably large set of data becomes available. Worth noting is that LyC leakers do not necessarily possess outsized out\ufb02ow velocities. At the present time, the sample of LyC leakers is still relative small but the approximate range of wind speeds in the range of 150\u2212420 km/s if one uses directly the separation of Ly\u03b1 peaks as a proxy \f\u2013 9 \u2013 Fig. 3.\u2014 Top panel is similar to Figure (2) with one change: \u03b7 = 10 is used here instead of \u03b7 = 0 (see Eq 4) in Figure (2). Bottom panel is similar to the top panel with \u03b7 = 30. \f\u2013 10 \u2013 Fig. 4.\u2014 shows the median velocity as a function of stellar mass formed in the episode, along with lower and upper quartiles shown as the errorbars, for two cases with \u03b7 = 0 and \u03b7 = 30. (Izotov et al. 2016b,a). We note that given the scattering e\ufb00ects of Ly\u03b1 photons the separation of Ly\u03b1 peaks generally may only represent an upper limit on the velocity dispersion, which in turn may be on the same order of the out\ufb02ow velocity. For a general comparison to young star-forming galaxies without considering LyC escape, Bradshaw et al. (2013) \ufb01nd out\ufb02ow velocities typically in the range of 0 \u2212650 km/s for young star-forming galaxies with stellar mass of \u223c109.5 M\u2299, which is consistent with predicted velocity range. Finally, Chisholm et al. (2017) \ufb01nd that LyC leakers (with fesc \u22655%) spans an out\ufb02ow velocity range of 50\u2212500 km/s (probed by Si II), consistent with our model. Henry et al. (2015) show out\ufb02ow velocities probed by a variety of ions from Si II to Si IV of a range of 50 \u2212550 km/s for green pea galaxies, consistent with our model once again. Because the velocity contours are more parallel than perpendicular to the stellar mass contours, a related prediction is that the out\ufb02ow velocity is expected to be positively correlated with the newly formed stellar mass. Figure (4) shows the median velocity as a function of stellar mass formed in the episode, along with lower and upper quartiles shown as the errorbars, for two cases with \u03b7 = 0 and \u03b7 = 30. We see clearly a positive correlation between the out\ufb02ow velocity of LyC leakers and the amount of stars formed in the episode, with median velocity going from \u223c100 km/s at 108 M\u2299 to 600 \u2212700 km/s at 1010 M\u2299. 
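The velocities quoted above follow from the geometry of Eq (5), v_h = 2 r_h/(t_h - t_L); a two-line helper (illustrative only) makes the scale explicit:

    PC = 3.086e18; KMS = 1.0e5; MYR = 3.156e13

    def shell_velocity_kms(r_h_pc, lift_time_myr):
        # v_h = 2*r_h/(t_h - t_L), Eq (5)
        return 2.0 * r_h_pc * PC / (lift_time_myr * MYR) / KMS

    print(shell_velocity_kms(300.0, 5.0))   # ~120 km/s for r_h = 0.3 kpc lifted in 5 Myr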
For the very high end of the stellar mass of 10^11 M⊙ formed in the episode, the outflow velocities are expected to exceed 10^3 km/s. With more data this unique prediction should be testable. 3. Comparisons to Some Previous Works Heckman (2001) are among the first to attempt to infer the physical conditions of LyC escape in starburst galaxies, combining observational evidence with basic physical considerations in the context of a superbubble driven by supernova explosions. They propose that strong starbursts clear channels through the neutral ISM to facilitate LyC escape. They ultimately reach the conclusion that the empirical evidence does not demonstrate that galactic winds inevitably produce large values of LyC escape fraction in local starbursts. In other words, galactic outflows appear to be a necessary but not sufficient condition for creating an ISM porous to ionizing radiation. This idea is advanced here in a quantitative fashion. We show that only very compact, high surface density starbursting regions are capable of evacuating the embedding gas sufficiently promptly to allow for an environment where a significant amount of LyC escape becomes possible. We argue that this may apply both to a compact starburst at the center of a galaxy and to a high density patch of a spatially extended starburst, because the dynamics are the same in both cases. Nevertheless, we agree with Heckman (2001) that even in this case the condition created by compact strong starbursts may be a necessary one, due to variations of obscuration along lines of sight, because in most cases gas is only lifted to a limited altitude, forming a gas shell that is presumably prone to fragmentation. In a semi-analytic treatment of escape fraction as a function of star formation surface density, applied to the Eagle simulation, Sharma et al. (2016) adopt a threshold star formation surface density Σ_SFR = 0.1 M⊙ yr^-1 kpc^-2 on a scale of ~1 kpc, motivated by an apparent threshold for driving galactic winds. Our analysis shows that on a 1 kpc scale such a star formation surface density falls short, by a factor of 1000, of what is needed to allow for a high LyC escape fraction (see Figure 2). However, when one moves to a smaller size of 0.5 kpc, this threshold star formation surface density lands in the region where gas may be driven away, but on a time scale much longer than 5 Myr. In fact, at Σ_SFR = 0.1 M⊙ yr^-1 kpc^-2 there is no parameter space for a high LyC escape fraction regardless of size. For a star formation surface density Σ_SFR = 1 M⊙ yr^-1 kpc^-2, a region of size ~0.1 kpc can now possess the necessary conditions for a high LyC escape fraction. Thus, the overall LyC escape fraction in the Eagle simulation they analyze may have been over-estimated. On the other hand, limited numerical resolution may have caused an underestimation of the star formation surface density in the simulated galaxies there. Thus, the overall net effect is unclear until all galaxies are resolved and a correct threshold star formation surface density is applied.
What is likely is that their assessment of the relative contributions of large and small galaxies may have been significantly biased toward large ones due to the lenient condition. Based on an empirical model introduced in Tacchella et al. (2018), which stipulates the SFR to be dependent on the halo accretion rate with a redshift-independent star formation efficiency calibrated by N-body simulations, Naidu et al. (2019) analyze how observations of the electron scattering optical depth and IGM ionization states may be used to constrain cosmological reionization. Their main assumption is that the LyC escape fraction is constant for all galaxies. Their main conclusion is that bright galaxies (MUV < -16) are primarily responsible for producing most of the ionizing photons, in order to produce a rapid reionization process consistent with observations. Our analysis indicates that the assumption of a constant LyC escape fraction for all galaxies may be far from correct. However, if the bright galaxies are dominated by strong compact central starbursts with high star formation surface densities, an assumed constant LyC escape fraction for all galaxies may lead to the conclusion they reach, that faint galaxies make a minor contribution to reionization; this conclusion itself may ultimately not be incorrect, though. It is also worth noting that the galaxy luminosity function in their model is substantially shallower than observations at the faint end (MUV > -18). This discrepancy may have, in part, contributed to the more diminished role of faint galaxies in their modeling. These coupled effects suggest that an improved, more detailed analysis may be desirable to better capture the intricate physics. The dynamics analyzed here for a central starburst is in principle applicable to a compact starbursting subregion within a more extended starbursting disk. The complication in the latter case is that neighboring regions on the disk would unavoidably elevate some gas to varying altitudes, resulting in an environment for the compact starbursting region in question that is subject to more obscuring gas along lines of sight that deviate from the polar direction. Nevertheless, we do expect that the LyC escape is, on average, an increasing function of the star formation surface density within an extended starburst, unless ram pressure becomes a dominant confining process, as is likely the case for most ULIRGs with respect to the star formation rate surface density. 4. Discussion and" + }, + { + "url": "http://arxiv.org/abs/1912.04372v1", + "title": "On Post-Starburst Galaxies Dominating Tidal Disruption Events", + "abstract": "A starburst induced by a galaxy merger may create a relatively thin central\nstellar disk at radius $\\le 100$pc. We calculate the rate of tidal disruption\nevents (TDEs) by the inspiraling secondary supermassive black hole (SMBH) through\nthe disk. With a small enough stellar velocity dispersion ($\\sigma/v_c \\le\n0.1$) in the disk, it is shown that $10^5-10^6$ TDEs of solar-type main\nsequence stars per post-starburst galaxy (PSB) can be produced to explain their\ndominance in producing observed TDEs.
Although the time it takes to bring the\nsecondary SMBH to the disk apparently varies in the range of $\\sim 0.1-1$Gyr\nsince the starburst, depending on its landing location and subsequently due to\ndynamical friction with stars exterior to the central stellar disk in question,\nthe vast majority of TDEs by the secondary SMBH in any individual PSB occurs\nwithin a span of time shorter than $\\sim 30$Myr. Five unique testable\npredictions of this model are suggested.", + "authors": "Renyue Cen", + "published": "2019-12-09", + "updated": "2019-12-09", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE", + "astro-ph.GA" + ], + "main_content": "Introduction When a star happens to plunge inside the tidal radius of a supermassive black hole, it will be torn apart, producing a tidal disruption event (TDE) that provides a useful tool to probe gas and stellar dynamics around SMBHs, and potentially the galaxy formation process. For a solar mass main sequence star, the tidal radius is greater than the Schwarzschild radius for a SMBH less massive than 10^8 M⊙. For sub-giant or giant stars, still more massive SMBHs are also able to produce TDEs, although their observable time scales become impractically long. Post-starburst galaxies, sometimes called E+A or K+A's, are characterized by spectra that are consistent with a starburst 0.1-1 Gyr ago followed by dormancy. They constitute a fraction of 0.2-2%, depending on the observational definition, of all galaxies of comparable stellar masses at low redshift (e.g., Pattarakijwanich et al. 2016). Yet, current observations indicate that an overwhelming fraction of tidal disruption events (TDEs), presumably normal main sequence stars tidally torn apart by SMBHs at the centers of galaxies, appear to occur in PSBs. For example, all six TDEs observed by the ASASSN survey appear to occur in galaxies with the spectral characteristics of PSBs (French et al. 2016; Law-Smith et al. 2017; Graur et al. 2018). This suggests that stars in PSBs are a factor of about 100 more likely to produce TDEs. For a recent survey of models, see an excellent review by Stone et al. (2018). In this Letter a solution to this puzzle is sought and found. We show that an inspiraling SMBH plowing through the stellar disk that is part of the starburst can produce a sufficient number of TDEs to explain the observations. We also suggest several tests for the model. 2. Inspiral of Secondary SMBH Through a Nuclear Stellar Disk The physical setting of the problem at hand is as follows. Two gas-rich galaxies, each with a SMBH at their respective centers, merge. A starburst occurs in the process, peaking at the time of the coalescence of the two galaxies, followed by a rapid decline in the star formation rate (e.g., Hopkins et al. 2006). The merger of the two SMBHs may be delayed in time, relative to the starburst peak, as simulations have shown. The typical time delay is in the range of 0.1-1 Gyr, not including an additional possible barrier at about the parsec scale. For the TDE rates derived in the present model, the parsec barrier has no effect. We adopt a flat rotation curve throughout. High resolution cosmological zoom-in simulations covering galactic and central regions with a resolution as high as 0.1 pc (Hopkins & Quataert 2010, 2011) support this assumption.
For the present purpose there is little to be gained by attempting to treat the situation with more nuance than this. While the stars dominate the gravity in this radial range, exterior to r_out we assume the dark matter conspires to guarantee a continuous flat rotation curve for simplicity, and we do not treat the region interior to r_in. We assume that the stellar subsystem is composed of a geometrically flat stellar disk with a mass fraction η and a spherical component with a mass fraction 1 - η. In the radial range [r_in, r_out], the stellar volume mass density in the disk can be expressed as $\rho_*(r) = \frac{\eta v_c^2}{4\pi G r^2}\,\frac{v_c}{\sigma}$ (1), where v_c and σ are the rotation velocity of and the velocity dispersion (assumed to be isotropic) in the disk, respectively, at the cylindrical radius r. The Mestel stellar disk's mass surface density is $\Sigma_*(r) = \frac{\eta v_c^2}{2\pi G r}$ (2). Let us for simplicity assume a single population of solar mass stars to yield the stellar number density in the disk $n_*(r) = \frac{\eta v_c^3}{4\pi G r^2 \sigma M_\odot}$ (3). Given this physical backdrop, the process that we are interested in is the inspiral of the secondary SMBH through the flat stellar disk. We denote the inspiraling SMBH as "the secondary", of mass M2, as opposed to the central SMBH of mass M1. To present a concrete set of quantitative results we shall choose a fiducial case of two merging galaxies, each with a SMBH of mass M1 = M2 = 10^7 M⊙ and v_c = 159 km/s, to denominate relevant terms, following the relation between galaxy stellar mass and SMBH mass. The merged galaxy is assumed to slide along the Tremaine et al. (2002) relation so as to have a rotation velocity of v_c = 2^{1/4} × 159 km/s = 189 km/s, which is assumed to have been achieved after the merger of the galaxies but prior to the inspiral of the secondary through the central stellar disk. The total stellar mass interior to r (including both the disk and bulge) is $M(<r) = 10^7 \left(\frac{r}{1.2\,\mathrm{pc}}\right)\left(\frac{v_c}{189\,\mathrm{km/s}}\right)^2 M_\odot$. (4)
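For readers who want to evaluate these profiles numerically, here is a minimal sketch of Eqs (1)-(4), together with the stalling radius r_in = GM2/v_c^2 used below (Eq 5); the function and variable names are ours, and the cgs constants are rounded:

```python
import numpy as np

G, Msun, pc, kms = 6.674e-8, 1.989e33, 3.086e18, 1e5  # cgs

def disk_profiles(r, eta=0.9, vc=189*kms, sigma_over_vc=0.1, m_star=Msun):
    """Mestel-disk quantities of Eqs (1)-(3) at cylindrical radius r [cm]."""
    sigma = sigma_over_vc * vc
    rho = eta * vc**2 / (4*np.pi*G*r**2) * (vc/sigma)   # Eq (1), g/cm^3
    Sigma = eta * vc**2 / (2*np.pi*G*r)                 # Eq (2), g/cm^2
    nstar = rho / m_star                                # Eq (3), stars/cm^3
    return rho, Sigma, nstar

def enclosed_mass(r, vc=189*kms):
    """Total stellar mass interior to r for a flat rotation curve, Eq (4)."""
    return vc**2 * r / G

def r_in(M2=1e7*Msun, vc=189*kms):
    """Eq (5) below: radius where the enclosed stellar mass equals M2."""
    return G * M2 / vc**2

print(enclosed_mass(1.2*pc) / Msun)   # ~1e7 Msun, consistent with Eq (4)
print(r_in() / pc)                    # ~1.2 pc, consistent with Eq (5)
```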
This lag, direction-wise, may be understood in another intuitive way. Stars with nonzero velocity dispersion, i.e., not strictly on circular orbits, have non-zero eccentricities. In any non-Keplerian orbit, which is the case here for a \ufb02at rotation curve, the epicyclic frequency is larger than the azimuthal frequency, causing perigalacticon to precess backwards relative to zero eccentricity orbits. Because of \ufb01nite v2, the secondary experiences a dynamical friction force. This is important because it means that the secondary in a circular orbit in a disk in the absence of any bulge component can still experience a dynamical friction and move inward. In a two-dimensional con\ufb01guration the primary dynamical e\ufb00ect is due to close encounters between the secondary and stars (Rybicki 1972), as opposed to the usual three-dimensional con\ufb01guration where distant encounters dominate (Chandrasekhar 1943). If the in\ufb02uence radius of the secondary, de\ufb01ned as r2 \u2261GM2/\u03c32, is greater than the half-thickness of the stellar disk, h, then the situation is considered to be two-dimensional. We have r2 h = 1.2(10\u03c3 vc )\u22123( M2 107 M\u2299 )( vc 189 km/s)\u22122( r 1kpc)\u22121. (7) Thus, for the \ufb01ducial case considered of M2 = 107 and vc = 189 km/s, in the regime of interest here with \u03c3/vc \u22640.1, the two-dimensional condition is satis\ufb01ed for radius r \u22641kpc. The two-dimensional dynamical friction force is (Quinn & Goodman 1986, Eq III.3) Fd f = \u22122\u03c0G\u03a3\u2217M2 (\u221a 2\u03c0 4 v \u03c3 exp (\u2212v2 4\u03c32) \u00d7 \u0014 I0( v2 4\u03c32) + I1( v2 4\u03c32) \u0015) , (8) \f\u2013 4 \u2013 where v is the velocity of the secondary relative to the stars, I0 and I1 are modi\ufb01ed Bessel functions of the \ufb01rst kind (Abramowitz & Stegun 1972). In the limit v \u226a\u03c3, Fd f = \u2212 r \u03c03 2 G\u03a3\u2217M2v \u03c3 . (9) In this limit, for our case, a rotating disk, we may follow the procedure of Chandrasekhar (1943) by elementarily integrating the spatial range on the disk from \u2212h to +h over which the shear velocity is subdominant to the velocity dispersion (along with the integrations over the distribution over the angle between velocity vectors and the Maxwellian velocity distribution) to derive the frictional force: F \u2032 d f = \u22123 \u221a 2\u03c0h\u03a3\u2217\u03c3v = \u22123 \u221a 2\u03c0G\u03a3\u2217M2v \u03c3 . (10) It is seen that Eq (10) and Eq (9) di\ufb00er only by a factor of unity (\u03c0/6), re\ufb02ecting again close encounters being largely responsible for dynamical frictional force in the two-dimensional case. For simplicity, without introducing a large error, and given the ambiguity in choosing the radial extent of integration used to derive Eq (10), we just use Eq (9) for all subsequent calculations. The dynamical friction time for the two-dimensional component is then t2d \u2261(d ln r dt )\u22121 = M2vc Fd f v v2 = rv \u03b7vcv2 (\u221a 2\u03c0 4 v \u03c3 exp (\u2212v2 4\u03c32) \u00d7 \u0014 I0( v2 4\u03c32) + I1( v2 4\u03c32) \u0015)\u22121 , (11) where the v2/v factor is the tangential fraction of the dynamical friction force, and v is the total velocity of the secondary relative to local stars, v = q v2 2 + v2 r with vr = r td f (12) being the radial drift velocity of the secondary and v2 the asymmetric drift velocity (Eq 6). 
In addition, the dynamical friction time due to the three-dimensional component is t3d = 2v3r2[erf(X) \u22122X exp (\u2212X2)/\u221a\u03c0]\u22121 3v2 c ln \u039bGM2(1 \u2212\u03b7) (13) (Binney & Tremaine 1987), where X = v/ \u221a 2\u03c3 and we adopt a Coulomb logarithm ln \u039b equal to three. Then, the overall dynamical friction time is td f = (t\u22121 2d + t\u22121 3d )\u22121, (14) which will be used throughout our subsequent calculations. Figure 1 shows the dynamical friction time (tdf, Eq 14) for two cases: \u03b7 = 0.5 with \u03c3/vc = 0.1 (solid thin black curve) and \u03b7 = 0.9 with \u03c3/vc = 0.1 (solid thick black curve), both with vc = 189 km/s and M2 = 107 M\u2299, along with the breakdowns due to the two-dimensional and three-dimensional components. We see that for \u03b7 < 0.9 and \u03c3/vc = 0.1 the overall dynamical friction induced inspiral is due to the three-dimensional component at r \u226410pc; in fact, for any applicable cases (see \ufb01gures below), in the inner region \f\u2013 5 \u2013 0 1 2 3 4 log r (pc) -1 0 1 2 3 4 log tdf (Myr) t2d =0.5, /vc=0.1 t3d =0.5 tdf =0.5, /vc=0.1 t2d =0.9, /vc=0.1 t3d =0.9 tdf =0.9, /vc=0.1 Fig. 1.\u2014 shows the dynamical friction time (tdf, Eq 14) for two cases: \u03b7 = 0.5 with \u03c3/vc = 0.1 (solid thin black curve) and \u03b7 = 0.9 with \u03c3/vc = 0.1 (solid thick black curve), both with vc = 189 km/s and M2 = 107 M\u2299. Also shown as the dotted blue and dashed red curves are their respective two-dimensional (t2d, Eq 11) and three-dimensional dynamic friction time (t3d, Eq 13). In the two-dimensional case, the \ufb02at regime at the small radius end is due to the dynamical friction time that is constrained by the limited radial range due to the rotational velocity shear (tdf,2d, Eq 11), whereas the ascending portion at the large radius end is determined by td f,2d (Eq 11). of r = 1 \u221210pc the three-dimensional dynamical friction dominates and sets the time scale of the inspiral in that radial range. It is important to note, however, that the TDE rate is mainly due to the interaction of the inspiraling secondary and stars in the disk, as we show below, thanks to its high volume density of stars. Let us now examine the TDE rate by the secondary during its inspiral. Because the stars are essentially collisionless, they can accrete onto the secondary only at a rate about equal to the cross section of the secondary times the mass \ufb02ux, v\u03c1\u2217(Eddington 1926), as opposed to a higher, Bondi rate for collisional matter. The e\ufb00ective cross section of the secondary may be identi\ufb01ed with the tidal capture cross section, which is larger than but on the same order as the tidal disruption cross section, although what happens to the stars once captured is complex. We estimate the TDE rate based on stars that directly plunge into the radius twice the tidal radius. We now derive a general expression of TDE events for both non-zero relative bulk velocity of the secondary to stars and non-zero velocity dispersion of stars with the latter being assumed to already have a relaxed Maxwellian distribution. 
If the secondary moves through a static \f\u2013 6 \u2013 sea of stars of density n\u2217at a velocity v, the rate of stars entering the loss cone would be R(v|\u03c3 = 0) = \u03c0r2 t n\u2217v(1 + 2GM2 v2rt ), (15) where rt is the tidal radius of the loss cone surface: rt = r\u2217(M2 m\u2217 )1/3 = 1.5 \u00d7 1013( M2 107 M\u2299 )1/3( r\u2217 R\u2299 )( m\u2217 M\u2299 )\u22121/3 cm = 1.09rsch(M2)( M2 108 M\u2299 )\u22122/3( r\u2217 R\u2299 )( m\u2217 M\u2299 )\u22121/3, (16) with m\u2217and r\u2217being the stellar mass and radius, respectively, and rsch(M2) the Schwarzschild radius of the secondary. For the secondary moving through stars with a Maxwellian velocity distribution of dispersion \u03c3 at a mean relative velocity v, one may convolve R(v|\u03c3 = 0) in Eq (15) with the velocity distribution to obtain the overall rate. Choosing the direction of v in plus x-direction, we have R(v, \u03c3) = Z +\u221e \u2212\u221e Z +\u221e 0 \u03c0r2 t n\u2217 q (v \u2212vx)2 + v2 t \u001a 1 + 2GM2 [(v \u2212vx)2 + v2 t ]rt \u001b \u00d7 1 \u221a 2\u03c0\u03c33 exp \u0002 \u2212(v2 x + v2 t )/2\u03c32\u0003 vtdvxdvt, (17) where v2 t = v2 y + v2 z, and the outer and inner integrals are for vx and vt, respectively. With a bit manipulation one \ufb01nds R(v, \u03c3) = \u03c0r2 t n\u2217\u03c3 (r 2 \u03c0 exp (\u2212v2/2\u03c32) + v \u03c3 erf ( v \u221a 2\u03c3) + 2\u03c3 v \u0014 exp (\u2212v2/2\u03c32) \u22121 + erf ( v \u221a 2\u03c3) \u0015) + \u03c0n\u2217j2 lc v \u0014 erf ( v \u221a 2\u03c3) + 1 \u2212exp (\u2212v2/2\u03c32) \u0015 , (18) where the \ufb01rst and second terms correspond to their counterparts in Eq (15) with the latter due to gravitational focusing, and jlc is the angular momentum at loss cone surface about the secondary on a circular orbit: jlc = q GM 4/3 2 m\u22121/3 \u2217 r\u2217= 1.4 \u00d7 1023( M2 107 M\u2299 )2/3( m\u2217 M\u2299 )\u22121/6( r\u2217 R\u2299 )1/2cm2/s. (19) For extreme events like TDEs the orbital velocity at tidal radius rt is much larger than typical velocity of stars at in\ufb01nity relative to the secondary, so the second term in Eq (18) dominates. Thus, for the sake of conciseness we shall neglect the \ufb01rst term with negligible loss of accuracy in our case. The radial distribution of TDEs may be expressed as dNtde d ln r = R(v, \u03c3)td f = \u03b7v3 cj2 lctd f 4Gm\u2217\u03c3vr2 \u0014 erf ( v \u221a 2\u03c3) + 1 \u2212exp (\u2212v2/2\u03c32) \u0015 . (20) \f\u2013 7 \u2013 A key notable point in terms of the time scale is that the vast majority of TDEs in a PSB in our model likely occur within a time scale that is signi\ufb01cantly less than the age since starburst of PSBs of 0.1 \u22121Gyr. Thus, if our model were to explain the observed TDEs in PSBs, which show an apparent delay, relative to the starburst event itself, of up to \u223c1 Gyr, this indicates that it is the time that it takes to bring the secondary into the central stellar disk region and to be co-planar that determines the observed temporal distribution relative to the starburst, before the secondary interacts with the central stellar disk that subsequently dominates the TDE events. Such an expectation is quite plausible in the context of galaxy mergers, as evidenced by galaxy merger simulations. A systematic simulation survey of black hole mergers in the context of galaxy mergers is not available, due to the computational cost, daunting physical complexity and a large parameter space. Nonetheless, valuable information from existing simulations may be extracted. 
Our survey of literature is by no means exhaustive but hoped to be representative. In the merger simulations of Hopkins et al. (2006) it is seen in their Figure 13 that the \ufb01nal starburst occurs at 1.5Gyr since the beginning of the merger for a black hole pair of mass 3 \u00d7 107 M\u2299each. We can not \ufb01nd information about the black hole separation at this time. But from their visualization plots it seems that by this time the galaxies are largely merged, with separations likely less than a few kpc at most. In Johansson et al. (2009, Figure 14) one sees that by the time the starburst ends at simulation time t \u223c1.8Gyr, the separation of the binary BHs is \u223c1kpc. Using the three-dimensional dynamical friction time formula (Chandrasekhar 1943), we \ufb01nd tDF = 1.7Gyr and 0.43Gyr for a 1 \u00d7 107 M\u2299 black hole at 1kpc and 0.5kpc, respectively, in a spherical system with a circular velocity of 189 km/s. In the 1:4 merger simulations Callegari et al. (2009) \ufb01nd that once the separation of the galaxy pair (and BH pair) reaches 10kpc, it takes about 0.5Gyr to reach \u223c0.1kpc. This suggests that so long as the starburst does not end before the BH reaches 10kpc separation, the BH merger would occur in the time frame of 0.1 \u22121Gyr. In the most comprehensive study so far Tamburello et al. (2017) \ufb01nd that the black hole pair reaches a separation of \u223c100pc in the range of 0.11 \u22120.79Gyr from a sample of about two dozen merger simulations (see Table 2 in their paper), although there is a small fraction of cases where mergers never occur. Observationally, French et al. (2017) infer a post-starburst age in the range 0.06\u22121Gyr from eight TDE cases; when 1\u03c3 error bars are included, the range of post-starburst ages extends to 0.05 \u22121.2Gyr. This range of PSB age of \u22641Gyr seems accommodatable by the galaxy merger dynamics to bring the secondary close to the central region from extant simulations. As it is clear now that it is the total number of TDEs per galaxy that is predicted for a given physical con\ufb01guration of the system, including \u03c3/vc, \u03b7, vc and M2. If, for some reason, the secondary black hole reaches the central disk in a shorter span of time since the starburst for some subset of starburst galaxies, then their apparent rate will be inversely proportional to time interval between the starburst to the arrival at the central disk. Perhaps the apparently higher rate of TDEs in ULIRGs (Tadhunter et al. 2017) is due to this reason. One important requirement concerns bringing the secondary to be co-planar with the central stellar disk. Mergers of two galaxies possess some axisymmetry dictated by the orbital angular momentum of the merger and formation of a disky component due to gas dissipational processes. Thus, it is likely that the orientation of the orbit of the secondary may be largely co-planar initially. Tamburello et al. (2017) show that a \ufb02at disk is formed in the central region due to gas in\ufb02ow, \f\u2013 8 \u2013 although the exact scale height is likely limited by their \ufb01nite resolution. Without rigorous proof one has to contend with the possibility that the secondary is not exactly co-planar with the central stellar disk, when it is still at some large radius. Even in this case, the orbital plane of the secondary will be re-aligned with the central stellar disk during the inward migration via dynamical friction. 
Binney (1977) shows that in an oblate system with anisotropic velocity distribution, the dynamical friction drag tends to align the inspiraling object with the disk plane, so long as not on a polar orbit initially. The timescale on which this occurs is precisely the timescale for the action of dynamical friction. The basic analytic framework of Binney (1977) is shown to provide a much better agreement with simulations for inclination dependent dynamical friction time scale than the classic formulation of Chandrasekhar (1943) for \ufb02attened systems. More importantly, the decay rate of the orbital inclination that is not observed using the classic approach is quantitatively reproduced in simulations (Pe\u02dc narrubia et al. 2004) when anisotropic dynamical friction formulae (Binney 1977) are used. In the simulations (Pe\u02dc narrubia et al. 2002, 2004) a relatively modest amount of anisotropy (q = 0.6) is employed for the dark matter halo to show the e\ufb03cacy of the inclination decay of satellite orbits in a \ufb02attened host system. In the inner regions of interest here, baryons dominate dynamically and starburst is presumably triggered by a strong gas in\ufb02ow due to galaxy merger, and turbulent dissipation and gas cooling are likely strong to yield \ufb02attened systems. This is of course fully in accord and self-consistent with the presumed existence of a thin \ufb02at central stellar disk that is the foundation of our working hypothesis. The inclination decay of the secondary, if initially exists, can be due to dynamical friction with the stars on a larger spatial scale with an overall anisotropic velocity distribution, i.e., larger than the central stellar disk of size of \u223c100pc, that operate on a time scale likely in the range of 0.1 \u22121Gyr. Note for example the dynamical friction time is we \ufb01nd tDF = 1Gyr and 0.1Gyr for a 1 \u00d7 107 M\u2299black hole at 0.77kpc and 0.24kpc, respectively, in a spherical system with a circular velocity of 189 km/s. Thus, the co-planar condition for the orbital plane of the secondary and the central stellar disk is physically plausible, when it reaches the outer edge of the central disk. Even if the central disk and the orbit of the secondary is misaligned when the latter reaches the outer edge of the former, dynamical friction from that point onward will subsequently align it with the disk on the dynamical friction time scale, i.e., order of an e-folding in radius. Since most of the TDEs occur in the innermost region, one or two e-folding in radius can be spent to re-align the secondary with the central stellar disk with little e\ufb00ect on the overall TDE rate (and repeating time scale). Another issue worth clarifying is the orbital eccentricity of the inspiraling black hole, since we have implicitly assumed zero eccentricity in the derivation of Eq (8,9,10). However, this assumption serves only as a su\ufb03cient but not necessary condition for dynamical friction to operate. That it, even in a zero eccentricity orbit, the secondary still experiences dynamical friction force due to the non-zero asymmetric drift velocity v2 = \u03c32/vc. Any signi\ufb01cant eccentricity would render the relative velocity of the inspiraling black hole to the embedding stars possibly signi\ufb01cantly above \u223c\u03c32/vc, which would increase an additional dynamic friction force in the radial direction, leaving the tangential dynamic friction force unchanged. 
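The dynamical friction times quoted in the two preceding paragraphs can be checked with the familiar circular-orbit estimate for a singular isothermal sphere, t_df ≈ (1.17/ln Λ) r^2 v_c/(G M), with ln Λ = 3 as adopted earlier; this is a back-of-the-envelope stand-in of ours, not the full Eq (11)-(14) machinery:

```python
G, Msun, kpc, kms, Gyr = 6.674e-8, 1.989e33, 3.086e21, 1e5, 3.156e16  # cgs

def t_df_isothermal(r, M, vc=189*kms, lnLambda=3.0):
    """Circular-orbit dynamical friction time in a singular isothermal sphere
    (standard Binney & Tremaine-style estimate)."""
    return 1.17 / lnLambda * r**2 * vc / (G * M)

M2 = 1e7 * Msun
for r_kpc in (1.0, 0.5, 0.77, 0.24):
    print(r_kpc, t_df_isothermal(r_kpc*kpc, M2) / Gyr)
# ~1.7, 0.42, 1.0, 0.098 Gyr, matching the estimates quoted in the text
```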
Nonetheless, one notes that if the secondary were in a radial orbit, then the \u201ccruise\u201d radial velocity due to the balance between gravity and the dynamical friction force due to the three-dimensional component (that dominates at small radii) can be shown to be equal to vc. In this case, we \ufb01nd that the total number of TDEs per PSB is \f\u2013 9 \u2013 in the range of \u2264103 for the \ufb01ducial case of M2 = 107 and vc = 189 km/s. Such a case would be much lower and hence inconsistent with the observationally inferred TDE rate of 105 \u2212106 per PSB (French et al. 2016). Therefore, one needs to make sure that increasing radialization of the orbit of the secondary is avoided, if the initial eccentricity is not identically zero. We now check two approximately bracketing cases to settle the issue. First, let us continue to consider the case of an isothermal sphere density pro\ufb01le. The apsides in a gravitational potential \u03c6(r) with speci\ufb01c energy E and speci\ufb01c angular momentum J are the two roots of the following equation (Eq 3-13 in Binney & Tremaine 1987): De\ufb01ning the orbital eccentricity e as ra/rp = (1 + e)/(1 \u2212e) with ra = r0(1 + e) [and rp = r0(1 \u2212e)], where rp and ra are the perigalacticon and apogalacticon distance, respectively, it can then be shown that, to the lowest order in e, the speci\ufb01c total energy and speci\ufb01c angular momentum are E = (1 2 + 7e2 6 )v2 c and J = r 1 \u22125e2 3 vcr0, (21) where we have de\ufb01ned the normalization of the logarithmic gravitational potential energy for an isothernal density pro\ufb01le such that \u03c6(r) = v2 c ln r r0 without loss of generality. Note that additional, higher order terms in e would be needed when e \u2192 p 3/5 as Eq (21) shows and it is also possible that orbits become unstable when e becomes too large. We consider here that e is not too large initially. From Eq (21) it is seen that E is a function of and decreases with decreasing eccentricity e. This indicates that in the presence of any energy dissipation, the orbit tends to zero eccentricity. It is also seen that the rate of decrease of eccentricity is de/dE \u221d1/e hence the time scale of circularization takes place on the similar time scale as the energy dissipation time scale (i.e, the dynamical friction time scale) when e \u226b0 but accelerates when e \u21920. Thus, the circularization time scale is about equal to dynamical friction time scale, if the orbit starts with a signi\ufb01cant eccentricity but may take a much shorter time scale for an initially nearly circular orbit. We stress that this outcome of circularization is derived based on a logarithmic potential corresponding to a \ufb02at rotation curve. While it is a good assumption, as simulations have shown, it is still prudent to stress that circularization is not necessarily the only outcome in general, as we show now. Consider next the following simpli\ufb01ed problem: the black hole moving in an eccentric orbit about a dominant point mass is subject to a frictional force that is a function of both the distance to the center and velocity. We assume that the gravitational e\ufb00ect due to the frictional matter is negligible. A further simplication is made for the convenience of calculation: the dynamical e\ufb00ect due to the frictional force is small enough so that a Keplerian (closed) orbit remains a good approximation for each full orbit. 
We adopt the units such that the speci\ufb01c total energy of the orbiting black hole is E = \u22121 2, and the speci\ufb01c angular momentum is J = \u221a 1 \u2212e2. With the familiar expressions for the distance to the focus r, the tangential velocity v\u03c6 and the magnitude of the total velocity v: r = (1 \u2212e2) (1 + e cos \u03b8), v\u03b8 \u2261rd\u03b8 dt = (1 + e cos \u03b8) (1 \u2212e2)1/2 , v = r 2 r \u22121, (22) where \u03b8 is the true anomaly, being zero at perigalacticon. Since e2 = 1 + 2EJ2, utilizing various expressions above, we have \u2206e = (1 \u2212e2)1/2e\u22121 \u0002 (1 \u2212e2)1/2\u2206E \u2212\u2206J \u0003 \u2261(1 \u2212e2)1/2e\u22121I, (23) \f\u2013 10 \u2013 where we shall de\ufb01ne \u2206e, \u2206E and \u2206J as the change of eccentricity, speci\ufb01c total energy and speci\ufb01c angular momentum, respectively, per full radial orbit. We now examine the term I de\ufb01ned by the last de\ufb01nition equality in Eq (23). To be tractable, let the acceleration due to frictional force have the following powerlaw velocity and radial dependencies: \u20d7 a = \u2212Ar\u03b1v\u03b2\u20d7 v, (24) where A is a positive constant, \u03b2 a constant, \u03b1 a constant slope, and \u20d7 v and v the velocity vector and its magnitude, noting that the radial dependence is inherited from the density\u2019s radial pro\ufb01le, \u03c1(r) \u221dr\u03b1. While \u03b1 may be non-positive in most physical contexts, our derivation does not impose any constraint. Gathering, we express I \u2261(1 \u2212e2)1/2\u2206E \u2212\u2206J = \u22122(1 \u2212e2)1/2A Z P/2 0 r\u03b1v\u03b2(\u20d7 v \u00b7 \u20d7 v)dt + 2A Z P/2 0 r\u03b1v\u03b2|\u20d7 r \u00d7 \u20d7 v|dt = 2A Z \u0398 0 r\u03b1\u2212\u03b2/2+1(2 \u2212r)\u03b2/2(r \u22121)d\u03b8, (25) where P is the period of a full radial orbit with the integration going from perigalacticon to apogalacticon, and \u0398 the azimuthal advance per half radial period, equal to \u03c0 in this case of closed orbits. To proceed, we change the integration element from d\u03b8 to the length element along the ellipse dl = rd\u03b8. Now the last equality in Eq (25) becomes I = 2A Z C/2 0 r\u03b1\u2212\u03b2/2(2 \u2212r)\u03b2/2(r \u22121)dl, (26) where C/2 is the half circumference of the orbit with the integration going from perigalacticon to apogalacticon. With the integration variable now changed to l that is invariant of the vantage point, one is free to move the center from one focus to the other by switching the radius from r to 2 \u2212r to obtain an identity I = \u22122A Z C/2 0 r\u03b2/2(2 \u2212r)\u03b1\u2212\u03b2/2(r \u22121)dl. (27) Taking the arithmetic average of Eq (26) and Eq (27), one obtains I = Z C/2 0 A(r \u22121)[r\u03b1\u2212\u03b2/2(2 \u2212r)\u03b2/2 \u2212r\u03b2/2(2 \u2212r)\u03b1\u2212\u03b2/2]dl. (28) One sees that for \u03b1 = \u03b2, I in Eq (28) is identically zero, meaning \u2206e = 0 in Eq (23) for any initial e. This thus indicates that the orbital eccentricity of a slowly inspiraling black hole under a frictional force of the form \u2212A(rv)\u03b1\u20d7 v (where A is a positive constant) is non-changing. For \u03b1 > \u03b2, \u2206e will be greater than zero, meaning that the orbit will be increasingly radialized during the inspiral, whereas for \u03b1 < \u03b2 the orbit will be increasingly circularized. 
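The sign behavior just derived can be verified numerically by evaluating Eq (25) directly for a Kepler orbit in the adopted units (E = -1/2, J = sqrt(1 - e^2)), with the dl = r dθ element used in the text; a minimal sketch:

```python
import numpy as np

def delta_I(alpha, beta, e=0.5, A=1.0, n=200001):
    """Evaluate I of Eq (25) over half a radial period of a Kepler orbit,
    using dl = r*dtheta as in the text (simple trapezoidal quadrature)."""
    theta = np.linspace(0.0, np.pi, n)
    r = (1.0 - e**2) / (1.0 + e * np.cos(theta))
    f = r**(alpha - beta/2.0 + 1.0) * (2.0 - r)**(beta/2.0) * (r - 1.0)
    return 2.0 * A * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))

for a, b in [(-1.0, -1.0), (-2.0, -3.0), (-3.0, -1.0), (-1.0, 0.0)]:
    print(a, b, delta_I(a, b))
# alpha = beta -> ~0 (eccentricity unchanged); alpha > beta -> > 0 (radialization);
# alpha < beta -> < 0 (circularization), as stated above.
```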
Physically, this can be understood as a result of relatively higher loss of angular momentum per unit loss of energy hence gain of eccentricity at the perigalacticon as compared to a lower gain of angular momentum per unit gain of energy hence gain \f\u2013 11 \u2013 of eccentricity at the apogalacticon, for \u03b1 > \u03b2, thus leading to a net radialization over a complete orbit. For \u03b1 < \u03b2, the opposite holds. Let us consider two relevant applications of this result. First, in the standard three-dimensional dynamical friction case (Chandrasekhar 1943), \u03b2 = \u22123. Thus, unless the density slope is steeper than \u22123, the eccentricity is to increase under such frictional force, thus leading to radialization of the orbit spiraling inward. Second, in the standard two-dimensional dynamical friction case (Eq 8), we \u03b2 = \u22121 for v/\u03c3 \u226b1 and \u03b2 = 0 for v/\u03c3 \u226a1. Therefore, in this case, for any density pro\ufb01le that increases with decreasing radius, the orbit tends to circularize with time. In a more detailed calculation using dynamical friction formula that includes e\ufb00ects due to stars moving faster than the inspiraling black hole, Dosopoulou & Antonini (2017) conclude that for \u03b1 < \u223c\u22122, the orbit of the inspiraling black hole tends to radialization, overlapping with the radialization range of \u03b1 \u2264\u22123 found here. Overall, considerations of two bracketing examples suggest that, on the one hand, orbital circularization is likely achieved if the density pro\ufb01le is close isothermal regardless whether the medium for dynamical friction is also gravitationally dominant. On the other hand, at the other end of the spectrum where the central mass gravitationally dominates, the dynamical friction may lead to radialization if the velocity distribution of the medium is largely three-dimensional, whereas it leads to circularization if the velocity distribution of the medium is largely two-dimensional. Since in an oblate velocity distribution, dynamical friction leads to inspiraling black hole becoming co-planar, circularization should also ensue in this case so long as enough dynamical friction takes place after becoming co-planar. Thus, in the physical con\ufb01guration of an overall oblate stellar distribution along with a thin stellar disk in the central region that we propose here, the only likely situation where circularization does not occur is when the secondary black hole directly lands at a radius to which the interior stellar mass is not signi\ufb01cantly greater than the mass of the inspiraling black hole. Such a situation is not expected to happen in practice. Anyway, since we have already assumed that the inner rin (Eq 5) is where the interior stellar mass of the disk is equal to the mass of the inspiraling black hole, such a situation is moot. 3. Predictions 3.1. TDE Repeaters To illustrate, in the limit v \u226b\u03c3, which is the case when the secondary has migrated into the inner region of the disk, the second term of Eq (18) gives R(v, \u03c3) = \u03b7v3 cj2 lc 2Gm\u2217\u03c3vr2 = 0.034\u03b7( vc 189 km/s)( vc 10\u03c3)(vc v )( jlc 1.2 \u00d7 1023cm2/s)2( r 1pc)\u22122 yr\u22121, (29) which is indicative that TDEs may re-occur in the same PSBs within an accessible time scale. To gain a more quantitative assessment, we have performed a simple analysis with the following steps. 
(1) We use (the inverse of) Eq (18) to obtain the mean expectation value of the time interval between two successive TDEs, ¯∆t, when one has just occurred at a radius r. (2) With the expectation value ¯∆t we use the normalized Poisson distribution to obtain the probability distribution function of the time interval (∆t) between the TDE that just occurred at r and the next one, P(r, ∆t). (3) We convolve P(r, ∆t) with Eq (20) to obtain the overall mean probability distribution function of the time interval, P(∆t). Generally, P(∆t) is a function of three variables, η, σ/vc and M2 (if M1 can be related to M2 or expressed by vc). The total number of TDEs per PSB, Ntde (Eq 20), is a function of η, σ/vc and M2 as well. Therefore, if observations can provide constraints on Ntde, only two degrees of freedom are left. Fig. 2.— shows contours of probability P(∆t) in percent for repeating TDEs within a time interval of ∆t = 1 yr (blue dotted contours) and ∆t = 5 yr (red dot-dashed contours), respectively, on the two-dimensional parameter plane of (η, σ/vc), where σ is the velocity dispersion of stars in the disk and η is the fraction of stellar mass in the disk. The black contours are log Ntde per PSB. Also shown as a horizontal magenta dot-dashed line is an indicative case where the vertical velocity dispersion is equal to the sound speed of atomic cooling gas of temperature 10^4 K, out of which stars in the disk may have formed. The fiducial values used are M2 = 10^7 M⊙ and rin = 1.2 pc. Note that in computing the cross section of TDEs, we remove the area inside the event horizon of the secondary, assuming a Schwarzschild black hole, for all calculations. In Figure 2 we place contours of P(∆t) for ∆t = 1 yr (blue dotted contours) and ∆t = 5 yr (red dot-dashed contours), respectively, on the two-dimensional parameter plane of (η, σ/vc). The two black contours are the current observational constraint of Ntde = 10^5-10^6 per PSB, corresponding to 10^-4-10^-3 yr^-1 per PSB with a time span of 1 Gyr (French et al. 2016) [also see Law-Smith et al. (2017); Graur et al. (2018)]. Also shown as a horizontal magenta dot-dashed line is an indicative case where the vertical velocity dispersion is equal to the sound speed of atomic cooling gas of temperature 10^4 K, out of which stars in the disk may have formed. Several points are noted. First, as expected, the total number of TDEs per PSB tends to increase towards the lower-right corner of high η and low σ/vc, due primarily to the increase of the number density of stars in the disk.
Second, if disk thickness is not less than 10 km/s, due to either fragmentation of gas disk at atomic cooling temperature and/or possible additional heating subsequent to formation of the stellar disk including heating by the secondary itself during its inspiral, then, an observational constraint of Ntde > 105 per PSB would require \u03b7 \u22650.87 (where the purple line intersects that black contour curve), i.e., the disk component is dominant in the inner region. Third, an observational constraint of Ntde > 105 per PSB also indicates that the thickness of the disk cannot exceed \u03c3/vc \u223c0.1, a limiting case when \u03b7 = 1. Finally, for Ntde = 105 per PSB, we see that there are regions where a repeater could occur with 2 \u221210% probability within a year per PSB in this particular case. Within \ufb01ve years, there is parameter space where 12 \u221228% probability is seen in this particular case. While rin may be low-bounded by Eq (5), it is possible that star formation may be truncated or \ufb02attned at a larger radius. Thus, we check how results depend on this. In the top-left panel of Figure 3 show a case that is the same as that shown in Figure 2 except one di\ufb00erence: rin = 3.6pc instead of 1.2pc. It is seen that the available parameter space for producing Ntde = 105 \u2212106 per PSB is compressed towards lower \u03c3/vc and higher \u03b7. But there is parameter space still available for explaining the observed abundance of TDEs even in this case. A large change is in TDE repeat frequencies: it is seen that there is no parameter space where a TDE may repeat at a probability greater than 0.1% within one year. There is a limited region in the parameter space where 0.1% probability exists for a TDE to repeat within 5 yr. It is possible to argue both ways as to which physical con\ufb01guration of the two cases shown is more \ufb01ne-tuned. Absence of some introduced scale, it seems more natural to suppose that the stellar disk could extend to some small radii of no particular choice, with rin imposed only because of the dynamical reason for the secondary to inspiral, as in Eq (5). Thus, we suggest that rin = 1.2pc in this case is a less \ufb01ne-tuned outcome. Recall that the maximum black hole mass for disrupting a main sequence star is about 108 M\u2299 for a Schwarzschild black hole (see Eq 5). The top-right panel of Figure 3 displays the case for M2 = 5 \u00d7 107 M\u2299with an appropriate rin according to Eq (5). A comparison between it and Figure 2 indicates that a more massive black hole tends to only slightly enhance both the overall rate of TDEs per galaxy and the probability of repeaters on relevant times scales. However, the range of \u03b7 for achieving the same Ntde = 105 is enlarged, when constraining \u03c3 \u223c10 km/s, to \u03b7 \u22650.5. But the overall rate and repeater probability contours do not change dramatically. The reason for this week dependence on M2 is due to a larger, removed cross section inside the event horizon that almost compensates the increased tidal radius for a larger black hole, among other factors. Next, we consider a case of merger of two lower mass galaxies, with M1 = M2 = 4 \u00d7 106 M\u2299, and rin determined according to Eq (5). The bottom-left panel of Figure 3 shows the result, for which we note three points. First, the model can no longer accommodate the observed > 105 TDEs per PSB, except in a very small parameter space at \u03b7 > 0.98 and \u03c3/vc \u223c0.07 \u22120.08. 
Second, in the available parameter space, the repeating rate is, however, comparable to the fiducial case shown in Figure 2. Combining results for the models, we conclude that, while the overall abundance of TDEs increases with the black hole mass, the repeating rate per PSB depends weakly on the SMBH mass for a given Ntde, as long as the inner radius of the central stellar disk is not cut off. Finally, the bottom-right panel of Figure 3 shows the result for a case where we let the mass fraction of the disk component increase inward from 100 pc to rin as η(r/100 pc)^-0.1, capped of course at unity. We see that the available parameter space is significantly enlarged compared to the fiducial case, with the shape of contours seen to flatten out horizontally, while the repeater probability at a given Ntde remains roughly in the same range. Fig. 3.— Top-left: the physical parameters of this model are identical to those used for Figure 2 except one difference: rin = 3.6 pc instead of 1.2 pc. Top-right: the physical parameters of this model are identical to those used for Figure 2 except one difference: M2 = 5 × 10^7 M⊙ with an appropriate rin according to Eq (5). Bottom-left: the physical parameters of this model are identical to those used for Figure 2 except one difference: M2 = 4 × 10^6 M⊙ with an appropriate rin according to Eq (5). Bottom-right: the physical parameters of this model are identical to those used for Figure 2 except one difference: the mass fraction in the disk is allowed to increase slowly inward, equal to the lesser of η(r/100 pc)^-0.1 and unity. To summarize, in our model the overall rate of TDEs per PSB, averaged over time, is set by the long dynamical friction process for the secondary to inspiral following the galaxy merger. A unique characteristic of our model is that, once the secondary has reached and aligned with the central stellar disk, the overall migration time interval over which the bulk of the TDEs occur is much shorter than the typical lifetime of PSBs of ~1 Gyr. Consequently, one important prediction of this model is that TDEs may repeat on a reasonable time scale.
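The three-step repeat-probability procedure described before Figure 2 can be sketched as follows; the radial weighting and the Eq (29) rate scaling used in the demo are illustrative placeholders of ours (the paper's actual weighting is Eq 20), so the printed numbers are not the contour values of Figures 2-3:

```python
import numpy as np

def repeat_probability(dt, rates, weights):
    """Steps (2)-(3): Poisson probability that another TDE follows within dt [yr],
    averaged over the radial TDE distribution (weights ~ dN_tde/dlnr, Eq 20)."""
    p_r = 1.0 - np.exp(-rates * dt)                 # step (2): P(r, dt)
    return np.sum(weights * p_r) / np.sum(weights)  # step (3): convolution

# Illustrative inputs only: Eq (29) scaling for R(r) at fiducial parameters,
# and a flat-in-ln(r) placeholder weighting (NOT the Eq 20 distribution).
r = np.logspace(np.log10(1.2), 2, 200)    # pc
rates = 0.034 * (r / 1.0) ** -2           # per yr, eta = 1
weights = np.ones_like(r)
for dt in (1.0, 5.0):
    print(dt, repeat_probability(dt, rates, weights))
```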
While a precise repeating rate is di\ufb03cult to nail down, because we are not certain about the parameter space of (\u03b7, \u03c3/vc) that nature picks, we see that within (1,5) years the repeating probability falls in the range of (0.1 \u221210%, 3 \u221230%) if Ntde = 105, under the condition that M2 = 4 \u00d7 106 \u22125 \u00d7 107 M\u2299and no inner cutto\ufb00of stellar disk (i.e., rin is determined by Eq 5). Thus, assuming Ntde = 105 and with a sample of 1000 TDEs, it appears that at least one repeat may be detected within one year; alternatively, with a sample of 30 TDEs, at least one repeat may be detected within \ufb01ve years. If the current observationally inferred Ntde range of 105 \u2212106 indeed holds up, the above estimated range of repeating probability would be an underestimate. If observations do \ufb01nd such repeaters, they would provide strong support for this model. With enough statistics and time baseline, it may then be possible to tease out useful information on the physical con\ufb01guration of the central disk in terms of parameter space of (\u03b7, \u03c3/vc, M2). A statistical comparison between the number of PSBs with TDEs and those without may additionally shed luminous light on the temporal distribution of TDEs in PSBs and the distribution of the time for the secondary to land on the central stellar disk, which may be ultimately linked to galaxy formation process. As a reference, in a model with a delay time distribution (DTD) of t\u22120.5 (Stone et al. 2018), generously extending to 1Myr at low end and normalized to Ntde = 106 TDEs over 1Gyr in a PSB, the probability of repeaters within \ufb01ve years is practically zero (2.1 \u00d7 10\u221232). It is appropriate to prudently ask the following question: is the condition that required to accommodate the observed TDE rates in PSBs physically plausible? In particular, is \u03c3/vc \u223c0.1 viable? Let us examine what this means with respect to the column density, volumetric density and temperature of the gas disk forming the disk stars. Adopting vc = 189 km/s for a Mestel disk, we \ufb01nd that surface density \u03a3(r) = 276(r/1pc)\u22121g cm\u22122 and a volumetric density nH = 2.7 \u00d7 108(v/10\u03c3)(r/1pc)\u22122 cm\u22123. The mid-plane pressure due to gravitational mass above is p = \u03c0G\u03a32(r)/2 = 8.0 \u00d7 10\u22124(r/1pc)\u22122dyn cm\u22122. This means that, if the downward gravity is balanced by thermal pressure, the gas temperature would have to be 2.2 \u00d7 104(10\u03c3/vc) K, where a molecular weight of unity is used for simplicity. We see that \u03c3/vc = 0.1 and 0.05 would imply a gas temperature of 2.2 \u00d7 104 K and 1.1 \u00d7 104 K, respectively. This is in the exact regime where gas has been cooled rapidly by atomic cooling processes after infall shock but has yet to be cooled further down by molecular cooling (and low temperature metal cooling) processes. At a density of 5.4 \u00d7 108 cm\u22123 and T = 1.1 \u00d7 104K for \u03c3/vc = 0.05 at r = 1pc, the Jeans mass is 7.3 \u00d7 102 M\u2299. It indicates that the gas disk at r \u223c1pc would fragment at T \u223c104K, which may subsequently form stars directly from atomic cooling gas or may go through the molecular phase \ufb01rst and then form stars. 
In either case, it appears quite plausible that a disk with a height-to-radius ratio of 0.05-0.1 can form for vc = 189 km/s at r ~ 1 pc and larger radii (note the weaker increase of the Jeans mass than of the mass on the Mestel disk with increasing radius at a given gas temperature). It is in fact quite remarkable that this completely independent assessment of the likely σ/vc, from a physical point of view of gas cooling and fragmentation, is almost exactly what is required for producing the observed abundance of TDEs in PSBs. 3.2. TDEs Spatially Offset from Center and Complexities of Debris Dynamics [Figure: log Ntde(>r) (solid curves) and log t (dashed curves, label truncated) versus log r (pc), for the black-dot and red-square cases of Figure 2.] 0.1L∗ (red, blue) galaxies cold (T < 10^5 K) gas is the primary component in the inner regions, with its mass comprising 50% of all gas within r = (30, 150) kpc. At r > (30, 200) kpc for (red, blue) galaxies, the hot (T > 10^7 K) component becomes the majority component. The warm (T = 10^5-10^7 K) component is, on average, a perpetual minority in both red and blue galaxies, with its contribution peaking at ~30% at r = 100-300 kpc in blue galaxies and never exceeding 5% in red galaxies. These findings are in agreement with recent observations in many aspects, in particular with respect to the amount of warm gas in star-forming galaxies and the amount of cold gas in early-type galaxies at low redshift, both of which are physically intriguing and at first sight less than intuitive. In light of a new observational development with respect to the NV to OVI absorption line ratio, and in particular the apparent need of seemingly complicated, perhaps contrived, models to explain the data, we here perform a detailed analysis of our high resolution cosmological hydrodynamic simulations to assess whether ab initio cosmological simulations are capable of accounting for this particular observation, in the larger context of a model that succeeds in matching the overall composition of halo gas, among other properties. It is particularly relevant to note that the good agreement between our simulations and observations with respect to OVI λλ1032, 1038 absorption lines, presented earlier in Cen (2012a), suggests that the statistical description of the properties of the warm component in the simulations (mass, spatial distribution, density, temperature, metallicity, and their environmental dependences) has now been firmly validated and provides a critical anchor point for our model. Consequently, this additional, independent analysis with respect to the NV/OVI ratio and other ratios becomes very powerful for further strengthening or falsifying our model or our simulations. Our findings here are both encouraging and intriguing. If one uses a fixed, solar N/O ratio regardless of the O/H ratio, our model is acceptable, with all four KS (Kolmogorov-Smirnov) test p-values greater than 0.28 for either the Haardt-Madau (Haardt & Madau 2012, HM hereafter) or the HM+local radiation field, where the local radiation field is due to hot gas in the host galaxy. If one allows for a dependence of the N/O ratio on the O/H ratio, both measured by independent observations and motivated by theoretical considerations of two different sources of N, then our model is able to account for the observations highly successfully, with all KS test p-values exceeding 0.9.
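For reference, two-sample KS comparisons of this kind can be reproduced schematically with scipy; the samples below are hypothetical stand-ins for the simulated and observed log(NV/OVI) ratios, and the sketch ignores the censored data (upper and lower limits) that the text notes dominate the observations:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical log10(N_NV / N_OVI) samples; real inputs would be the simulated
# sightline ratios and the observed detections (limits need special treatment).
sim_ratio = rng.normal(-0.7, 0.3, size=500)
obs_ratio = rng.normal(-0.65, 0.35, size=40)

stat, pval = ks_2samp(sim_ratio, obs_ratio)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```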
We additionally examine the following absorption line column density ratios, where comparisons to observations may be made in a reasonable statistical fashion: SiIV/OVI, NII/OVI and NIII/OVI, and find that the ratios from our simulations are fully consistent with observations. We also investigate the model where UV radiation from local shock-heated gas in the concerned galaxies is added to the HM background, which is found to also agree with observations, with comparable p-values for all line ratios examined. However, these good agreements come about because the observational data points are dominated by upper and lower limits instead of actual detections. We discuss how some moderate improvement in observational sensitivity may provide much stronger tests of models. 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the AMR Eulerian hydro code, Enzo (Bryan et al. 2014). We use the following cosmological parameters that are consistent with the WMAP7-normalized (Komatsu et al. 2011) LCDM model: ΩM = 0.28, Ωb = 0.046, ΩΛ = 0.72, σ8 = 0.82, H0 = 100h km s^-1 Mpc^-1 = 70 km s^-1 Mpc^-1 and n = 0.96. These parameters are also consistent with the latest Planck results (Planck Collaboration et al. 2014), if one adopts a Hubble constant that is the average between the Planck value and those derived based on SNe Ia and the HST key program (Riess et al. 2011; Freedman et al. 2012). We compute the power spectrum transfer functions for cold dark matter particles and baryons using fitting formulae from Eisenstein & Hu (1999). We use the Enzo inits program to generate initial conditions. First we ran a low resolution simulation with a periodic box of 120 h^-1 Mpc on a side. We identified two regions separately, one centered on a cluster of mass ~2 × 10^14 M⊙ and the other centered on a void region at z = 0. We then re-simulate each of the two regions separately with high resolution, but embedded in the outer 120 h^-1 Mpc box, to properly take into account the large-scale tidal field and appropriate boundary conditions at the surface of the refined region. We name the simulation centered on the cluster the "C" run and the one centered on the void the "V" run. The refined region for the "C" run has a size of 21 × 24 × 20 h^-3 Mpc^3 and that for the "V" run is 31 × 31 × 35 h^-3 Mpc^3. At their respective volumes, they represent 1.8σ and -1.0σ fluctuations. The root grid has a size of 128^3 with 128^3 dark matter particles. The initial static grids in the two refined boxes correspond to a 1024^3 grid on the outer box. The initial number of dark matter particles in the two refined boxes corresponds to 1024^3 particles on the outer box. This translates into initial conditions in the refined region having a mean inter-particle separation of 117 h^-1 kpc comoving and a dark matter particle mass of 1.07 × 10^8 h^-1 M⊙. The refined region is surrounded by two layers (each of ~1 h^-1 Mpc) of buffer zones with particle masses successively larger by a factor of 8 for each layer, which then connect with the outer root grid that has a dark matter particle mass 8^3 times that in the refined region. The initial density fluctuations are included up to the Nyquist frequency in the refined region.
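A quick consistency check of the quoted refined-region numbers, assuming the particle mass follows from the mean dark matter density and the effective 1024^3 sampling of the 120 h^-1 Mpc box:

```python
# Consistency check of the refined-region sampling quoted above.
h = 0.70
Omega_M, Omega_b = 0.28, 0.046
rho_crit = 2.775e11 * h**2            # Msun / Mpc^3
L = 120.0 / h                          # box size, Mpc
N = 1024                               # effective 1D particle count

dx = L / N                             # mean inter-particle separation, Mpc
m_dm = (Omega_M - Omega_b) * rho_crit * dx**3   # Msun

print(f"separation = {dx * 1e3 * h:.0f} h^-1 kpc")   # ~117 h^-1 kpc
print(f"m_dm = {m_dm * h:.2e} h^-1 Msun")            # ~1.0e8 h^-1 Msun
```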
The surrounding volume outside the re\ufb01ned region is also followed hydrodynamically, which is important in order to properly capture matter and energy exchanges at the boundaries of the re\ufb01ned region. Because we still can not run a very large volume simulation with adequate resolution and physics, we choose these two runs of moderate volumes to represent two opposite environments that possibly bracket the universal average. We choose a varying mesh re\ufb01nement criterion scheme such that the resolution is always \f\u2013 4 \u2013 better than 460/h proper parsecs within the re\ufb01ned region, corresponding to a maximum mesh re\ufb01nement level of 9 above z = 3, of 10 at z = 1 \u22123 and 11 at z = 0 \u22121. The simulations include a metagalactic UV background (Haardt & Madau 2012), and a model for shielding of UV radiation (Cen et al. 2005). The simulations also include metallicity-dependent radiative cooling and heating (Cen et al. 1995). The Enzo version used includes metallicity-dependent radiative cooling extended down to 10K, molecular formation on dust grains, photoelectric heating and other features that are different from or not in the public version of Enzo code. We clarify that our group has included metal cooling and metal heating (due to photoionization of metals) in all our studies since Cen et al. (1995) for the avoidance of doubt (e.g., Wiersma et al. 2009; Tepper-Garc\u00eda et al. 2011). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c105\u22126 M\u2299. Supernova feedback from star formation is modeled following Cen et al. (2005). Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the speci\ufb01c volume of each cell (i.e., weighting is equal to the inverse of density), which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). We allow the whole feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating, as in nature. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported into right directions in a physically sound (albeit still approximate at the current resolution) way, at least in a statistical sense. In our simulations metals are followed hydrodynamically by solving the metal density continuity equation with sources (from star formation feedback) and sinks (due to subsequent star formation). Thus, metal mixing and diffusion through advection, turbulence and other hydrodynamic processes are properly treated in our simulations. The primary advantages of this supernova energy based feedback mechanism are threefold. First, nature does drive winds in this way and energy input is realistic. Second, it has only one free parameter eSN, namely, the fraction of the rest mass energy of stars formed that is deposited as thermal energy on the cell scale at the location of supernovae. 
Third, the processes are treated physically, obeying their respective conservation laws (where they apply), allowing transport of metals, mass, energy and momentum to be treated self-consistently and taking into account relevant heating/cooling processes at all times. We use eSN = 1 \u00d7 10\u22125 in these simulations. The total amount of explosion kinetic energy from Type II supernovae with a Chabrier IMF translates to eSN = 6.6 \u00d7 10\u22126. Observations of local starburst galaxies indicate that nearly all of the star formation produced kinetic energy (due to Type II supernovae) is used to power galactic superwinds (e.g., Heckman 2001). Given the uncertainties on the evolution of IMF with redshift (i.e., possibly more top heavy at higher redshift) and the fact that newly discovered prompt Type I supernovae contribute a comparable amount of energy compared to Type II supernovae, it seems that our adopted value for eSN is consistent with observations \f\u2013 5 \u2013 and physically realistic. The validity of this thermal energy-based feedback approach comes empirically. In Cen (2012b) the metal distribution in and around galaxies over a wide range of redshift (z = 0 \u22125) is shown to be in excellent agreement with respect to the properties of observed damped Ly\u03b1 systems (Rafelski et al. 2012), whereas in Cen (2012a) we further show that the properties of OVI absorption lines at low redshift, including their abundance, Doppler-column density distribution, temperature range, metallicity and coincidence between OVII and OVI lines, are all in good agreement with observations (Danforth & Shull 2008; Tripp et al. 2008; Yao et al. 2009). This is non-trivial by any means, because they require that the transport of metals and energy from galaxies to star formation sites to megaparsec scale be correctly modeled as a function of distance over the entire cosmic timeline, at least in a statistical sense. 2.2. Analysis Method We identify galaxies at each redshift in the simulations using the HOP algorithm (Eisenstein & Hut 1998) operating on the stellar particles, which is tested to be robust and insensitive to speci\ufb01c choices of concerned parameters within reasonable ranges. Satellites within a galaxy down to mass of \u223c109 M\u2299are clearly identi\ufb01ed separately in most cases. The luminosity of each stellar particle in each of the Sloan Digital Sky Survey (SDSS) \ufb01ve bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. Collecting luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, SFR, luminosities in \ufb01ve SDSS bands (and various colors) and others. We show, among others, that the simulated luminosity functions of galaxies at z = 0 are reasonably matched to observations (Cen & Chisari 2011). In the analysis presented here we choose randomly ten galaxies from our simulation that have properties that are similar to observed galaxies in the COS-HALO program (Werk et al. 2013, 2014) with respect to the star formation rate (SFR) and stellar mass. Some relevant properties of the ten simulated galaxies are tabulated in Table 1. A central galaxy is de\ufb01ned to be one that is not within the virial radius of a larger (halo-mass-wise) galaxy. 
To assist post-simulation analysis of the galaxies, we construct lookup tables of the abundances of various ions of the elements nitrogen, oxygen and silicon as a function of the logarithm of temperature (log T) and the logarithm of ionization parameter (log U), for solar metallicity, using the photoionization code CLOUDY (v13.03; Ferland et al. 2013). For each selected simulated galaxy, we construct a cube of size 320 kpc centered on the galaxy with a resolution of 625 pc. We make the simplifying but reasonable assumption that the relevant absorbers are optically thin. Our calculations are performed for two cases of the ionizing radiation field that the CGM in each galaxy is assumed to be exposed to.

Table 1: Properties of the 10 simulated galaxies used in this study.
  Stellar M [10^10 M_sun] | Halo M [10^11 M_sun] | SFR [M_sun/yr] | [10^4 K] | Z(gas) [Z_sun] | central galaxy
  1.37 | 1.37 | 1.0 | 27.95 | 0.23 | yes
  1.56 | 1.17 | 1.5 | 13.77 | 0.31 | yes
  2.25 | 10.4 | 1.6 | 36.76 | 0.20 | no
  3.00 | 4.36 | 1.7 | 16.89 | 0.27 | no
  2.91 | 2.05 | 1.0 | 10.10 | 0.23 | yes
  3.03 | 2.45 | 1.6 | 14.74 | 0.30 | yes
  3.63 | 3.89 | 2.2 | 93.48 | 0.27 | yes
  3.24 | 4.92 | 1.8 | 25.05 | 0.16 | yes
  3.68 | 6.22 | 2.6 | 15.44 | 0.18 | yes
  3.97 | 3.19 | 1.6 | 20.79 | 0.13 | yes

In the first case, we only use the HM background UV radiation field at z = 0. In the second case, we compute the ionizing UV radiation due to local, shock-heated gas within each concerned galaxy and use the sum of that and the HM background. The local UV ionizing radiation is computed as follows. We compute the emissivity ($e_\nu$) [erg s^-1 cm^3 Hz^-1 sr^-1] for each cell, given its temperature and metallicity, at the relevant energies: E = 47.3 eV for SiIV, E = 97.88 eV for NV, E = 47.4 eV for NIII, E = 29.6 eV for NII and E = 138.1 eV for the OVI ion. This is done by integrating the diffuse spectrum from CLOUDY between 97.88 eV and 1.2 x 97.88 eV for NV, as an example (similarly for other ions). The diffuse emission includes all gas processes: free-free emission, radiative free-bound recombination, two-photon emission, and electron scattering, among others, for all elements in the calculation. For each cell the total luminosity is computed as $n^2 \times \Delta V \times e_\nu$, where n is the density and $\Delta V$ its volume. The sum of the local ionizing UV radiation luminosity at a relevant wavelength is $L_\lambda$. To approximately account for the spatial distribution of local UV radiation sources without the expense of detailed radiative transfer, we compute the half-luminosity radius ($R_e$) of a galaxy, within which half of the local radiation luminosity in that galaxy originates. Then, we assign the local flux to each cell at distance r from the center of the galaxy, approximately, as

$F_{\lambda,r} = \frac{L_\lambda}{4\pi r^2}\left[1 + 2e^{-r/(2R_e)}\right]$.   (1)

The new ionizing radiation "background" at each cell, with the inclusion of the local emission, is computed as $F_{\rm new} = F_{\rm HM,\lambda} + F_{r,\lambda}$, where $F_{\rm HM,\lambda}$ is the flux of the HM background radiation at the relevant wavelength $\lambda$. Since the local radiation is mostly dominated by dense hot gaseous regions that tend to be spatially centrally concentrated, our neglect of its possible attenuation likely makes the second case of radiation field (HM+local) an upper limit. Thus, the two choices of radiation field likely bracket all possible cases.
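A minimal sketch of the HM+local flux assignment of Eq (1) is given below, assuming cgs units and kpc inputs; the function names and unit conventions are illustrative, and the Haardt-Madau flux F_HM and the summed local luminosity L_lambda are taken as given inputs.

```python
import numpy as np

KPC_CM = 3.086e21

def local_flux(L_lambda, r_kpc, Re_kpc):
    """Eq (1): approximate local ionizing flux at galactocentric radius r, given the
    summed local luminosity L_lambda and the half-luminosity radius Re (cgs assumed)."""
    r = r_kpc * KPC_CM
    return L_lambda / (4.0 * np.pi * r**2) * (1.0 + 2.0 * np.exp(-r / (2.0 * Re_kpc * KPC_CM)))

def total_flux(F_HM, L_lambda, r_kpc, Re_kpc):
    """HM+local case: uniform Haardt-Madau background plus the local term of Eq (1)."""
    return F_HM + local_flux(L_lambda, r_kpc, Re_kpc)

# e.g. 30 kpc from a galaxy with Re = 5 kpc, for arbitrary illustrative L_lambda and F_HM
print(f"{total_flux(F_HM=1e-21, L_lambda=1e40, r_kpc=30.0, Re_kpc=5.0):.3e}")
```

As the text explains, the $2e^{-r/(2R_e)}$ term simply enhances the flux interior to the half-luminosity radius, standing in for the spatially extended source distribution without full radiative transfer.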
Each cell has a size of 625 pc within a cube of 320 kpc centered on each selected sim\f\u2013 7 \u2013 ulated galaxy, we convert the density nH and the radiation F at the cell to the ionization parameter U = F/cnH at the radiation energy in question, where c is speed of light. Using U and temperature of the cell, we \ufb01nd the abundances of various ions for each cell using the pre-computed CLOUDY lookup tables, which is then multiplied by the metallicity of the cell in solar units. We use the updated solar abundances of these elements from Asplund et al. (2009), in the notation of log \u03f5x = log(Nx/NH) + 12 listed in Table 2. Table 2 lists the UV lines analyzed in this paper, where each doublet is listed using two rows. The information for each line is listed, including wavelength (column 2), oscillator strength (column 3), lower and upper energy levels of the transition (columns 4,5) and abundance of the element (column 6). In column 7, we list the lower column density threshold in constructing covering fractions of the lines (see Figure 4). Each of the lower column density thresholds is chosen to be the minimum of the upper limits for each respective ion. For computing the frequency of the line ratios, essentially some line to the OVI line in all cases, we choose the cut for the OVI column density at log NOVI > 14, which approximately corresponds to the lowest column density of detected OVI absorbers. Note that all of the lines studied here are resonant lines. Ion wavelength[\u00c5] oscillator strength lower level upper level log \u03f5 log Ncut NV 1238.8 1.56e-1 2S1/2 2P1/2 7.83 13.42 NV 1242.8 7.8e-2 2S1/2 2P3/2 7.83 13.42 NIII 989.7 1.23e-1 2P1/2 2D3/2 7.83 13.50 NII 1083.9 1.11e-1 3P0 3D1 7.83 13.46 OVI 1031.9 1.33e-1 2S1/2 2P3/2 8.69 13.27 OVI 1037.6 6.6e-2 2S1/2 2P1/2 8.69 13.27 SiIV 1393.7 5.13e-1 2S1/2 2P3/2 7.51 12.38 SiIV 1402.7 2.55e-1 2S1/2 2P1/2 7.51 12.38 Table 2: Properties of UV lines analyzed in this study. Unlike oxygen and silicon, nitrogen stems from both primary and secondary producers and consequently nitrogen abundance is theoretically expected to be a function of overall metallicity, for which oxygen abundance is a good proxy. This theoretical expectation is con\ufb01rmed by observations. We use the \ufb01tting formula of Moll\u00e1 et al. (2006), which is normalized at solar value, to express the N/O ratio as a function of O/H ratio: log(N/O) = \u22121149.31 + 1115.23x \u2212438.87x2 + 90.05x3 \u221210.20x4 + 0.61x5 \u22120.015x6, (2) where x = 12 + log(O/H). In subsequent analysis, where nitrogen is concerned, we perform the analysis twice, one assuming N/O to be independent of O/H and another using Eq (2). To give the magnitude of the effect, we note that at (0.03, 0.1, 0.3) Z\u2299for oxygen abundance, N/O value is (0.27, 0.28, 0.4) in solar units. \f\u2013 8 \u2013 3. Results Total gas Temp Metallicity SiIV NV OVI 19 20 21 22 4 5 6 \u22122 \u22121 0 10 11 12 13 14 15 16 9 10 11 12 13 14 15 10 11 12 13 14 15 16 Fig. 1.\u2014 shows projection plots along one of the axes of the 320 kpc cube for one of the galaxies listed in Table 1. From top-left in clockwise direction are logarithm of total hydrogen column density (top-left), logarithm of the density-weighted gas temperature (top-middle), logarithm of the density-weighted gas metallicity in solar units (top-right), logarithm of OVI column density (bottom-right), logarithm of NV column density (bottom-middle) and logarithm of SiIV column density (bottom-left). 
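The per-cell ion densities therefore follow from the ionization parameter U = F/(c n_H) and an interpolation in the pre-computed (log T, log U) CLOUDY tables, scaled by the cell metallicity and the solar abundance of the element. A sketch under those assumptions is shown below; the grids, table values and illustrative numbers are placeholders, not actual CLOUDY output.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

C_CGS = 2.998e10   # cm s^-1

def ionization_parameter(F_photon, n_H):
    """U = F / (c n_H), with F the photon flux at the relevant energy."""
    return F_photon / (C_CGS * n_H)

def make_ion_lookup(logT_grid, logU_grid, ion_frac_table):
    """Wrap a pre-computed CLOUDY table of ion fractions (solar metallicity) on a
    (logT, logU) grid; the grids and table used below are dummies."""
    return RegularGridInterpolator((logT_grid, logU_grid), ion_frac_table,
                                   bounds_error=False, fill_value=None)

def ion_density(lookup, logT, logU, n_H, Z_solar, abundance_solar):
    """Per-cell ion number density: table ion fraction, scaled by the cell
    metallicity (solar units) and the solar abundance of the element."""
    frac = lookup(np.column_stack([logT, logU]))
    return frac * n_H * Z_solar * abundance_solar

# toy demonstration with a dummy 2x2 table
logT, logU = np.array([4.0, 6.0]), np.array([-4.0, -1.0])
lut = make_ion_lookup(logT, logU, np.array([[0.01, 0.10], [0.20, 0.02]]))
u = ionization_parameter(F_photon=1e4, n_H=1e-4)       # illustrative numbers
print(ion_density(lut, np.array([5.0]), np.array([np.log10(u)]),
                  n_H=1e-4, Z_solar=0.3, abundance_solar=10**(8.69 - 12.0)))
```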
Before presenting quantitative results, we show visually some basic quantities for a few galaxies in Figures1, 2, 3. Some features are easily visible just from these three random examples. First, large variations from galaxy to galaxy are evident, for each of the displayed variables. Physically, this stems from density and thermodynamic structures of each galaxy being subject to its unique exterior and interior forces, including halo mass, gas in\ufb02ow and associated dynamic and thermodynamic effects, feedback from star formation and related dynamic and thermodynamic effects, As an illustrative example, in Figure1, we see the temperature at the lower-left triangle mostly in the range of 105.5 \u2212106K, compared to the temperature at the upper-right triangle mostly in the range of 104.5 \u2212105K. We do not investigate here further into the dynamic causes of such temperature patterns with possible physical processes including merger shocks and stellar feedback (i.e., supernova) shocks. Second, the temperature distribution of the CGM is far from uniform. Indeed, the CGM is of multi-phase in nature, typically spanning the range of 104 \u2212106K within the \u223c150 kpc radial range for the galax\f\u2013 9 \u2013 Total gas Temp Metallicity SiIV NV OVI 18 19 20 21 22 4 5 6 \u22123 \u22122 \u22121 0 10 11 12 13 14 15 16 8 9 10 11 12 13 14 9 10 11 12 13 14 15 Fig. 2.\u2014 shows the same as in Figure 1 but for another galaxy. Total gas Temp Metallicity SiIV NV OVI 18 19 20 21 22 4 5 6 \u22122 \u22121 0 9 10 11 12 13 14 15 9 10 11 12 13 14 15 10 11 12 13 14 15 16 Fig. 3.\u2014 shows the same as in Figure 1 for yet another galaxy. \f\u2013 10 \u2013 ies examined. This property is of critical importance to the line ratios that we obtain in the simulations. Third, the metallicity distribution in the CGM is highly inhomogeneous, typically spanning 10\u22122 \u2212100 Z\u2299. Fourth, although the number of galaxies looked at is small, we \ufb01nd that star-forming galaxies, as those selected in this investigation, tend to be involved in signi\ufb01cant mergers. This in turn suggests that signi\ufb01cant mergers may be a necessary ingredient in driving signi\ufb01cant star formation activities in galaxies at low redshift. We now turn to quantitative results. The top-left panel of Figure 4 shows the column density distributions of the \ufb01ve lines. Due to our projection method, the number of weaker lines are underestimated due to blending. The turndown of the number of the OVI lines below column density 1013cm\u22122 is probably due to that. This is unlikely to signi\ufb01cantly affect our results below, since our coincidence analysis is focused on OVI absorbers with column density higher than 1014cm\u22122. The covering fractions shown in Figure 4 may be somewhat overestimated, since the column density cutoff for OVI is 1013.27cm\u22122; a comparison between the cutoff column densities listed in Table 2 and the behavior of the column density histograms shown in the top-left panel of Figure 4 for the other four lines (SiIV, NV, NIII and NII) suggests that the covering fractions for these four lines are unlikely affected signi\ufb01cantly due to blending. The remaining three panels of Figure 4 show covering fraction of OVI and SiIV (top-right panel), NV, NIII, NII lines with constant N/O ratio (bottom-left panel) and with N/O as a function of O/H (Eq 2, bottom-right panel). Several interesting properties may be noted. 
First, there is a signi\ufb01cant drop of covering fraction, by a factor of 2 \u221210, from the central regions (a few kpc) to \u223c150 kpc. This is likely due primarily to a combination of the general trend of decreasing gas density and decreasing metallicity of the CGM with increasing galacto-centric radius. In spite of this covering fraction decrease with radius, most of the absorbers are located at large impact parameters, since the area increases with radius at a higher rate, for example, by a factor of 64 from 20 kpc to 160 kpc. Second, the OVI covering fraction (top panel) is large and largest among the examined lines, ranging from 80 \u221290% at \u226410 kpc to \u223c50% at \u2264150 kpc, given the chosen column density thresholds listed in Table 2. This is in good agreement with observations (e.g., Chen & Mulchaey 2009; Prochaska et al. 2011). Third, it is particularly noteworthy that there is essentially no difference between HM and HM+local cases for the OVI covering fraction. This indicates that photoionization plays a negligible role in the abundance of OVI. In other words, OVI is produced by collisional processes, powered by feedback and gravitational shocks, which will be veri\ufb01ed subsequently [see Cen (2013) for a detailed discussion on the varying contributions of stellar feedback versus gravitational shocks in different types of galaxies]. Fourth, a stronger radiation \ufb01eld tend to increase the abundance of NII and NIII but the opposite is true for NV. But the difference between HM and HM+local cases for both NII and NIII are fairly minor, indicating that collisional processes are the primary powering source for producing NII and NIII. This is not the case for SiIV and NV, where the differences between HM and HM+local cases are substantial and the differences are larger toward small impact parameters. This suggests that a higher HM+local radiation is able to produce NV for high density gas in the inner regions of star-forming galaxies, while \f\u2013 11 \u2013 11 12 13 14 15 16 logN[cm\u22122] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 PDF HM OVI SiIV NV NIII NII 0 20 40 60 80 100 120 140 160 ImpactParameter[kpc] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 CoveringFraction OVI[HM] OVI[HM+Local] SiIV[HM] SiIV[HM+Local] 0 20 40 60 80 100 120 140 160 ImpactParameter[kpc] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 CoveringFraction constant[N/O] NV[HM] NV[HM+Local] NIII[HM] NIII[HM+Local] NII[HM] NII[HM+Local] 0 20 40 60 80 100 120 140 160 ImpactParameter[kpc] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 CoveringFraction varying[N/O] NV[HM] NV[HM+Local] NIII[HM] NIII[HM+Local] NII[HM] NII[HM+Local] Fig. 4.\u2014 Top-left panel: shows the column density distributions for all \ufb01ve lines in the case with HM, and for the three nitrogen lines with varying N/O (Eq 2). Top-right panel: shows the covering fraction as a function of the galacto-centric impact parameter for OVI absorbers with column density above 1013.27cm\u22122 with HM (solid blue curve) and with HM+local (dotdashed blue curve), for SiIV absorbers with column density above 1012.38cm\u22122 with HM (solid green curve) and with HM+local (dot-dashed green curve). Bottom-left panel: shows the same as in the top panel but for the three nitrogen absorption lines with column density above 1013.43cm\u22122 for NV (blue curves), 1013.50cm\u22122 for NIII (red curves) and 1013.46cm\u22122 for NII (green curves), under the assumption that N/O ratio is independent of metallicity. 
The solid curves correspond to the case with only HM radiation background, whereas the dot-dashed curves are for the case with HM+local radiation. Bottom-right panel: shows the same as for the bottom-left panel, except that we use Eq (2) for nitrogen abundance as a function of oxygen abundance. the outer regions are mainly dominated by collisional processes, consistent with a trend of increasing temperature with increasing galacto-centric radius found in Cen (2013). Finally, comparing the bottom-left and bottom-right panels, we see signi\ufb01cant differences between constant N/O case (left) and varying N/O case (Eq 2, right). A closer look reveals that the \f\u2013 12 \u2013 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 logN(NV)/N(OVI) 0.0 0.5 1.0 1.5 2.0 PDF [HM, constantN/O] [HM+local, constantN/O] \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 logN(NV)/N(OVI) 0.0 0.5 1.0 1.5 2.0 PDF [HM, varyingN/O] [HM+local, varyingN/O] \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 1.0 logN(NIII)/N(OVI) 0.0 0.2 0.4 0.6 0.8 1.0 PDF [HM, constantN/O] [HM+local, constantN/O] \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 1.0 logN(NIII)/N(OVI) 0.0 0.2 0.4 0.6 0.8 1.0 PDF [HM, varyingN/O] [HM+local, varyingN/O] \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 1.0 logN(NII)/N(OVI) 0.0 0.2 0.4 0.6 0.8 1.0 PDF [HM, constantN/O] [HM+local, constantN/O] \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 logN(NII)/N(OVI) 0.0 0.2 0.4 0.6 0.8 1.0 PDF [HM, varyingN/O] [HM+local, varyingN/O] Fig. 5.\u2014 Top row: shows the probability distribution function (PDF) of the ratio of N(NV)/N(OVI) for all OVI absorbers with N(OVI) > 1014cm\u22122 with constant N/O ratio (left) and varying N/O as a function of O/H (Eq 2). Middle row: the same for NIII/OVI. Bottom row: the same for NII/OVI. (Blue, red) histograms are for (HM, HM+local) radiation \ufb01eld. The observational data are shown for three separate types: black dots are those with both lines detected, left green arrows are those where the numerator line is an upper limit and the denominator line a detection, and brown left arrows are those where the numerator line is an upper limit and the denominator line a lower limit. The y coordinates of the points are arbitrary. \f\u2013 13 \u2013 difference increases with increasing impact parameter, re\ufb02ecting the trend of decreasing gas metallicity (O/H) with increasing impact parameter. Also revealed is that the decreases in covering fraction from constant N/O to varying N/O case for different lines differ signi\ufb01cantly, re\ufb02ecting the complex multi-phase medium with inhomogeneous, temperature-and-densitydependent metallicity distribution; while the decrease for NIII is relatively small (a factor of less than two for all impact parameters), the decreases for NII and NV are quite large, a factor of larger than two at < 150 kpc. As we will quantify using KS tests subsequently, the signi\ufb01cant reduction in nitrogen abundance with the varying N/O case results in noticeably better KS test p-values with respect to NII/OVI, NIII/OVI, NV/OVII column density ratios. We now make direct comparisons to observations with respect to the ratio of column densities for four absorption line pairs. Figure 5 shows the probability distribution functions of N(NV)/N(OVI) (top row), N(NIII)/N(OVI) (middle row) and N(NII)/N(OVI) (bottom row), with each row further separated for constant N/O (left) and varying N/O cases (right). 
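The PDFs of Figure 5 are, in practice, histograms of the log column density ratio over all projected sightlines whose OVI column exceeds 10^14 cm^-2. A minimal sketch, with illustrative array inputs rather than the paper's actual sightline catalog, is:

```python
import numpy as np

def ratio_pdf(logN_num, logN_OVI, logN_cut=14.0, bins=np.arange(-2.5, 0.55, 0.1)):
    """PDF of log10(N_num/N_OVI) over sightlines with N(OVI) above the cut.
    logN_num, logN_OVI: log10 column densities per projected sightline."""
    sel = logN_OVI > logN_cut
    ratio = logN_num[sel] - logN_OVI[sel]
    pdf, edges = np.histogram(ratio, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), pdf

# toy usage with made-up columns
rng = np.random.default_rng(2)
logN_OVI = rng.normal(14.2, 0.5, 2000)
logN_NV = logN_OVI - rng.normal(1.2, 0.4, 2000)
centers, pdf = ratio_pdf(logN_NV, logN_OVI)
```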
Due to the fact that the vast majority of NII, NIII and NV absorbers have metallicities in the range of [O/H] = \u22121 to \u22120.5 (see Figure 7 below), the horizontal shifts of the peaks of the PDFs for the ratios of all three nitrogen lines to OVI are substantial, of order 0.5 dex. The shifts have noticeable effects on the KS tests between simulations and observations below given in Table 3. An examination by eye between the simulation results and observational data comes with the visual impression that all cases agree reasonably well with observations, which will be veri\ufb01ed quantitatively. Figure 6 shows the probability distribution functions of N(SiIV)/N(OVI) column density ratios. Once again, visual examination suggests agreement with simulations and observations. Line ratio HM HM+Local NV/OVI[constant N/O] 0.33 0.37 NII/OVI[constant N/O] 0.93 0.99 NIII/OVI[constant N/O] 0.28 0.49 NV/OVI[varying N/O] 0.99 0.99 NII/OVI[varying N/O] 0.99 0.99 NIII/OVI[varying N/O] 0.91 0.96 SiIV/OVI 0.9 0.98 Table 3: Two-sample KS test p-values for column density ratio distributions of four absorption line pairs, including cases with constant and varying N/O ratios and HM versus HM+local radiation \ufb01eld To gain a more quantitative statistical test between simulations and observations, we perform two-sample KS tests between simulated and observed column density ratios for four pairs of lines, NV/OVI, NII/NV, NIII/OVI and SiIV/OVI. Since most of the observational data are upper and lower limits, instead of actual detections, our analysis is performed as follows. For the case where the numerator line is an upper limit and the denominator line is a detection, we allow the ratio to be drawn from the simulation distribution with value upper-bounded by \f\u2013 14 \u2013 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 logN(SiIV)/N(OVI) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 PDF [HM] [HM+local] Fig. 6.\u2014 shows the PDF of the ratio of N(SiIV)/N(OVI) for all OVI absorbers with N(OVI) > 1014cm\u22122. The points are observational data divided into three separate types: the black dots are those with both lines in the ratio detected, the left green arrows are those where SiIV line is an upper limit and OVI line is a detection, and the brown left arrows are those where SiIV line is an upper limit and OVI line is a lower limit. The blue histograms are for HM radiation \ufb01eld, whereas the red histograms are for HM+local radiation \ufb01eld. The y coordinates of the points are arbitrary. the upper limit. The same is done for the case where the numerator line is an upper limit and the denominator line is a lower limit. Then, in conjunction with detections, where both lines are detected, we perform a two-sample KS test for each of the four line pairs, NV/OVI, NII/NV, NIII/OVI and SiIV/OVI, between simulations and observations. Needless to say, our presently adopted procedure to treat the upper and lower limits cases favors agreement with observations and simulations. Nevertheless, the procedure is consistent with the current data. The results are tabulated in Table 3. Clearly, no major disagreements can be claimed as to reject the simulation results in all four cases, (constant N/O, varying N/O) times (HM, HM+local). However, there are hints, at face value, that the constant N/O cases are less preferred than the varying N/O cases. Nonetheless, it is premature to make any \ufb01rm statistical conclusion on that at this juncture. 
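One plausible reading of the censored-data treatment described above is sketched below: detections enter as-is, each upper limit is replaced by a draw from the simulated ratio distribution truncated at that limit, and a standard two-sample KS test is run against the full simulated distribution. Repeating the draws and taking the median p-value is our own choice here, not necessarily the exact procedure used.

```python
import numpy as np
from scipy.stats import ks_2samp

def realize_observed_ratios(det_ratios, upper_limits, sim_ratios, rng):
    """Detections enter as-is; each upper limit is replaced by a draw from the
    simulated ratio distribution truncated at that limit."""
    realized = list(det_ratios)
    for ul in upper_limits:
        allowed = sim_ratios[sim_ratios <= ul]
        realized.append(rng.choice(allowed) if allowed.size else ul)
    return np.asarray(realized)

def censored_ks_pvalue(det_ratios, upper_limits, sim_ratios, n_mc=200, seed=1):
    """Median two-sample KS p-value over Monte Carlo realizations of the limits."""
    rng = np.random.default_rng(seed)
    pvals = [ks_2samp(realize_observed_ratios(det_ratios, upper_limits,
                                              sim_ratios, rng),
                      sim_ratios).pvalue
             for _ in range(n_mc)]
    return float(np.median(pvals))

# toy usage with made-up log N(NV)/N(OVI) values
rng0 = np.random.default_rng(0)
sim = rng0.normal(-1.2, 0.4, size=5000)          # simulated ratio distribution
detections = np.array([-1.0, -1.3, -0.8])
limits = np.array([-0.9, -1.1, -1.5, -0.7])      # upper limits
print(censored_ks_pvalue(detections, limits, sim))
```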
Thus, the only robust conclusion we can reach at this time is that our simulation predictions are fully consistent with extant observational data with respect to the four line ratios, NV/OVI, NII/NV, NIII/OVI and SiIV/OVI. What would exponentially increase the statistical power of testing the models is to turn these current upper limits into real detections. We have performed the following exercise to demonstrate this point. Let us assume that all current upper limits of the column density ratios become detections and the detection values are lower than current upper limits uniformly by a factor of \u2206dex. We \ufb01nd that, if \u2206= (0.18, 0.16), the KS p-values for the NV/OVI line ratio become (0.05, 0.05) for the (HM, HM+local) cases with constant N/O; with \u2206= (0.21, 0.20), the KS p-values for the NV/OVI line ratio become (0.01, 0.01) for the (HM, HM+local) cases with \f\u2013 15 \u2013 constant N/O. For the varying N/O cases, the situation is non-monotonic in the following sense: the KS p-values for the NV/OVI line ratio are (0, 0) for the (HM, HM+local) with \u2206= (0, 0), increasing to a maximum of (0.5, 0.8) with \u2206= (0.70, 0.63), then downturning to (0.01, 0.01) with \u2206= (0.83, 0.78). Obviously, a uniform shift is an oversimpli\ufb01cation. Nevertheless, this shows clearly an urgent need to increase observational sensitivity in order to place signi\ufb01cantly stronger constraints on models than currently possible. When all line pairs are deployed, the statistical power will be still, likely much, greater. 11 12 13 14 15 16 logNOVI[cm\u22122] \u22123.5 \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 logZ[Z\u2299] [HM] 11 12 13 14 15 16 logNNV[cm\u22122] \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 logZ[Z\u2299] [HM] 11 12 13 14 15 16 logNSiIV[cm\u22122] \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 logZ[Z\u2299] [HM] 11 12 13 14 15 16 logNNIII[cm\u22122] \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 0.0 0.5 logZ[Z\u2299] [HM] Fig. 7.\u2014 shows the number density of absorption lines in the column density-metallicity plane for OVI (top-left panel), NV (top-right panel), NIII (bottom-right panel) and SiIV (bottomleft panel). The contour levels are evenly spaced in log-scale spanning the range of 0.5 and half of maximum density in each panel. Only the HM case with varying N/H case is shown for all lines, because the difference between HM and HM+local cases is found to be relatively small. Finally, we turn to a closer examination of the physical conditions that give rise to the various absorption lines in our simulations shown above. In Figure 7 we show the number density of lines in the column density-metallicity plane for OVI (top-left panel), NV (top-right panel), NIII \f\u2013 16 \u2013 11 12 13 14 15 16 logNOVI[cm\u22122] 4.5 5.0 5.5 6.0 6.5 logT[K] [HM] 11 12 13 14 15 16 logNNV[cm\u22122] 4.0 4.5 5.0 5.5 6.0 logT[K] [HM] 11 12 13 14 15 16 logNSiIV[cm\u22122] 4.0 4.2 4.4 4.6 4.8 5.0 5.2 5.4 5.6 logT[K] [HM] 11 12 13 14 15 16 logNNIII[cm\u22122] 4.0 4.2 4.4 4.6 4.8 5.0 5.2 5.4 logT[K] [HM] Fig. 8.\u2014 shows the number density of absorption lines in the column density-temperature plane for OVI (top-left panel), NV (top-right panel), NIII (bottom-right panel) and SiIV (bottomleft panel). The contour levels are evenly spaced in log-scale spanning the range of 0.5 and half of maximum density in each panel. 
Only the HM case with varying N/H case is shown for all lines, because the difference between HM and HM+local cases is found to be relatively small. (bottom-right panel) and SiIV (bottom-left panel). Overall, while there is a signi\ufb01cant span in metallicity, with as low as [Z/H] = \u22123.5 at the low column density end for OVI, the vast majority of absorbers have metallicities falling into the range [O/H] = \u22122 to \u22120.5 for OVI, [O/H] = \u22122 to \u22120.5 for NV, [O/H] = \u22122 to \u22120.5 for NV and [O/H] = \u22121 to \u22120.5 for SiIV. At the high column density end, we see [O/H] \u223c\u22120.5 to 0 for OVI, [O/H] \u223c\u22120.5 to 0.5 for NV, [O/H] \u223c0 to 0.5 for NIII and [O/H] \u223c0 to 0.5 for SiIV. These trends and signi\ufb01cant disparities between different lines are a results of complex multi-phase CGM with a very inhomogeneous metallicity distribution. Simplistic collisional excitation/ionization models are unlikely to be able to capture all of the key elements of the physical processes involved and may lead to conclusions that are not necessarily conformal to direct analyses of the simulations. \f\u2013 17 \u2013 Figure 8 shows the number density of lines in the column density-temperature plane for OVI (top-left panel), NV (top-right panel), NIII (bottom-right panel) and SiIV (bottom-left panel). To set the context of collisionally dominated ionization processes, we note that, under the assumption of collisional ionization equilibrium, as in CLOUDY, the peak temperature for the element in question with Half-Width-Half-Maximum is approximately (3.0\u00b10.5)\u00d7105K for OVI, (2.0\u00b10.5)\u00d7105K for NV, 7.5+5.0 \u22123.0\u00d7104K for NIII and 7.3+1.8 \u22122.2\u00d7104K for SiIV (e.g., Gnat & Sternberg 2007). A one-to-one comparison between each of these four peak temperature (and its width) and the contour levels indicates that for OVI and NV lines the collisional ionization dominates the process for creating OVI at NOVI \u22651014cm\u22122 and NVI at NNV \u22651013cm\u22122, respectively, manifested in the horizontal extension of the contours pointing to the right at the temperature (with an appropriate width) in question. The same can be said about the SiIV line at the high column end NSiIV \u22651015cm\u22122; however, at lower NSiIV values (\u22641015cm\u22122), the contours are no longer aligned horizontally, indicative of enhanced contribution of photoionization due to lower ionization potential of SiIII (33.49eV) versus say OV (77.41eV). Similar statements about NIII lines to those for SiIV can be made due to similar reasons. Overall, the similarity between OVI and NV lines suggests that collisional ionization processes are dominant and results with respect to these two lines are relatively immune to uncertainties in the radiation \ufb01eld used. However, the apparent insensitivity of results on the radiation \ufb01eld with detailed calculation we have performed, in the way of comparing HM and HM+local results, indicates that the net effect due to an increase of radiation \ufb01eld is relatively small due to the large ranges of density and metallicity of gas involved, although the actual situation appears to be more intertwined because of nonlinear relationships between density, metallicity, ionization parameter and column density. 
As an example, as shown earlier in Figure 4, a stronger radiation \ufb01eld tend to increase the abundance of NIII, although the difference between HM and HM+local cases for is apparently minor, seeming to suggest con\ufb02ictingly that collisional processes are the primary powering source for producing NIII. A more thorough theoretical study and a more detailed comparison to observations will be desirable, when a larger observational sample with more sensitive column density detection limits becomes available. As we have demonstrated, a fraction of a dex increase in sensitivity may warrant a revisit to a detailed comparison. 4." + }, + { + "url": "http://arxiv.org/abs/1606.05930v2", + "title": "Constraint on Matter Power Spectrum on $10^6-10^9M_\\odot$ Scales from ${\\large\u03c4_e}$", + "abstract": "An analysis of the physics-rich endgame of reionization at $z=5.7$ is\nperformed, utilizing jointly the observations of the Ly$\\alpha$ forest, the\nmean free path of ionizing photons, the luminosity function of galaxies and new\nphysical insight. We find that an upper limit on ${\\rm \\tau_e}$ provides a\nconstraint on the minimum mean free path (of ionizing photons) that is\nprimarily due to dwarf galaxies, which in turn yields a new and yet the\nstrongest constraint on the matter power spectrum on $10^6-10^9M_\\odot$ scales.\nWith the latest Planck measurements of ${\\rm \\tau_e = 0.055 \\pm 0.009}$, we can\nplace an upper limit of $(8.9\\times 10^6, 3.8\\times 10^7, 4.2\\times\n10^8)M_\\odot$ on the lower cutoff mass of the halo mass function, or equivalent\na lower limit on warm dark matter particle mass ${\\rm m_x \\ge (15.1, 9.8,\n4.6)keV}$ or on sterile neutrino mass ${\\rm m_s \\ge (161, 90, 33)keV}$, at $(1,\n1.4, 2.2)\\sigma$ confidence level, respectively.", + "authors": "Renyue Cen", + "published": "2016-06-20", + "updated": "2016-09-14", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction The Gunn & Peterson (1965) optical depth of Ly\u03b1 photons provides the strongest and most sensitive constraint on the neutral hydrogen fraction of the intergalactic medium (IGM). The integrated electron scattering optical depth of the universe provides a complementary constraint on the ionized fraction of the IGM, but is insensitive to the neutral hydrogen fraction as long as the IGM is mostly ionized. Recent measurements of the electron scattering optical depths of the IGM by the cosmic microwave background radiation experiments (e.g., Hinshaw et al. 2013; Planck Collaboration et al. 2015) suggest that it may be signi\ufb01cantly below redshift z = 12 before the universe becomes half reionized. The observations of the high redshift (z > 6) quasar absorption spectra from the Sloan Digital Sky Survey (SDSS) and others (e.g., Fan et al. 2006) and arguments based on the slowly and continuously evolving IGM opacity (e.g., Becker et al. 2007) suggest that only at z = 5.7 the universe is suf\ufb01ciently ionized to allow for detectable transmission of Ly\u03b1 photons hence de\ufb01nitive measurements of (low enough) Ly\u03b1 (and higher order Lyman series) optical depth. It is generally accepted that stars are primarily responsible for producing most of the ionizing photons for cosmological reionization. While it seems relatively secure to further suggest that the reionization process has begun at z \u226510 based on analysis of expected emergence of \ufb01rst galaxies in the standard cold dark matter model (e.g., Trac et al. 
2015), the combination of these independent observational indications now paints a reionization picture that is rapidly arXiv:1606.05930v2 [astro-ph.CO] 14 Sep 2016 \f\u2013 2 \u2013 evolving at z = 6\u221210. Two important implications are that the so-called \ufb01rst galaxies that form out of primordial gas may be closer to us than thought before and that Popolation III (Pop III) stars formed with metal-free gas may extend to more accessible redshifts. In this contribution we perform a detailed analysis of the endgame of the cosmological reionization at z = 5.7. We examine joint constraints on the IGM from considerations of both global and local ionization balances observationally and, for the \ufb01rst time, self-consistently in the context of the standard cold dark matter model. We \ufb01nd reasonable concordance between Ly\u03b1 optical depth, Lyman continuum (LyC) mean free path (mfp) \u03bbmfp and global recombination rate of hydrogen observationally and theoretically. We solve the global reionization equation, given the emissivity evolution in the context of the standard cold dark matter model normalized to the boundary conditions of required emissivity at z = 5.7 and reionization completing at z = 5.7. We provide a detailed analysis of the attainable solutions of reionization histories to shed light on the overall topological evolution of the HII regions, the evolution of the Ly\u03b1 emitters, the neutral fraction of the IGM, and a new and powerful constraint on the matter power spectrum on small scales hence dark matter particle properties. Our focus here is on placing a yet the strongest constraint on the scale-scale power in the cosmological model and, speci\ufb01cally, the strongest lower bound on the mass of warm dak matter particles. The physical insight on this particular point is new and may be described brie\ufb02y as follows. The state of the IGM at z = 5.7 is well \ufb01xed by the Gunn & Peterson (1965) optical depth of Ly\u03b1 photons, which in turn provides a tight constraint on the photoionization rate \u0393 at z = 5.7 in the post-reionization epoch. Since \u0393 at z = 5.7 is equal to \u02d9 Nion,IGM\u03bbmfp\u00af \u03c3ion, where \u02d9 Nion,IGM is the global mean of effective ionization photon emissivity at z = 5.7, \u03bbmfp is the mean free path of ionizing photons at z = 5.7 and \u00af \u03c3ion is the spectrum-weighted mean photoionization cross section, a constant. Thus, a tight constraint on \u0393 at z = 5.7 is equivalent to an equally tight constraint on the product \u02d9 Nion,IGM\u03bbmfp at z = 5.7. Note that \u02d9 Nion,IGM already takes into account the escape fraction of ionizing photon from ionization sources (e.g., galaxies and others). The degeneracy between \u02d9 Nion,IGM and \u03bbmfp can be broken, if one considers, jointly, a separate constraint placed by an upper limit on the integrated electron scattering optical depth of the universe \u03c4e from the latest cosmic microwave background radiation experiments (e.g., Planck Collaboration et al. 2016). This is where our new physical insight comes in. We point out that, when the product \u02d9 Nion,IGM\u03bbmfp is \ufb01xed, a higher \u03bbmfp would require a lower \u02d9 Nion,IGM, which in turn would cause the reionization process to shift to lower redshift hence give rise to a lower \u03c4e. In other words, there is a negative correlation between \u03bbmfp and \u03c4e. 
Since more small-scale power results in a lower $\lambda_{\rm mfp}$, there is then a negative correlation between the amount of small-scale power and $\tau_e$: more small-scale power leads to a lower $\tau_e$. As a result, an upper bound on $\tau_e$ placed by the latest CMB observations translates to a lower bound on the amount of small-scale power, hence a lower bound on the particle mass in the context of the warm dark matter model. This is the scientific focus of this paper.

2. On Sinks and Sources of Lyman Continuum at z = 5.7

2.1. Global Balance of Emission and Recombination

The hydrogen recombination rate per unit comoving volume at redshift z is

$\dot N_{\rm rec} = C_{\rm HII}\,\alpha_B(T)\,[1 + Y_p/4(1-Y_p)]\,n_{\rm H,0}^2\,(1+z)^3$   (1)

and the corresponding helium I recombination rate is

$\dot N_{\rm HeI,rec} = C_{\rm HII}\,\alpha_B({\rm HeI},T)\,[1 + Y_p/4(1-Y_p)]\,[Y_p/4(1-Y_p)]\,n_{\rm H,0}^2\,(1+z)^3$,   (2)

where $n_{\rm H,0} = 2.0\times10^{-7}(\Omega_B/0.048)\,{\rm cm^{-3}}$ is the mean hydrogen number density at z = 0, $Y_p = 0.24$ the primordial helium mass fraction, and $C_{\rm HII}$ is the clumping factor of the recombining medium. The case B recombination coefficient is $\alpha_B(T) = (2.59, 2.52)\times10^{-13}\,{\rm cm^3\,s^{-1}}$ at $T = (10^4, 2\times10^4)$ K (Osterbrock 1989). The case B He I recombination coefficient is $\alpha_B({\rm HeI},T) = (2.73, 1.55)\times10^{-13}\,{\rm cm^3\,s^{-1}}$ at $T = (10^4, 2\times10^4)$ K (Osterbrock 1989). To prevent the already ionized IGM from recombining, the rate at which ionizing photons enter the IGM has to be at least equal to the total recombination rate, resulting in the well known minimum requirement on the ionizing photon production rate (e.g., Madau et al. 1999):

$\dot N_{\rm ion,global} \ge \dot N_{\rm rec} + \dot N_{\rm HeI,rec} = 3.4\times10^{50}\,(C_{\rm HII}/3.2)(\Omega_b/0.048)^2((1+z)/6.7)^3\,{\rm cMpc^{-3}\,s^{-1}}$ for $T = 10^4$ K
$\phantom{\dot N_{\rm ion,global} \ge \dot N_{\rm rec} + \dot N_{\rm HeI,rec}} = 3.2\times10^{50}\,(C_{\rm HII}/3.2)(\Omega_b/0.048)^2((1+z)/6.7)^3\,{\rm cMpc^{-3}\,s^{-1}}$ for $T = 2\times10^4$ K,   (3)

assuming that helium II is not ionized. We shall call the constraint expressed in Eq 3 the "global constraint". For clarity we adopt the convention of using cMpc and pMpc to denote comoving and proper Mpc, respectively. Early hydrodynamical simulations suggest $C_{\rm HII} \sim 10-40$ at z < 8 (e.g., Gnedin & Ostriker 1997). More recent simulations that separate out the dense interstellar medium (ISM) from the IGM indicate a lower $C_{\rm HII} \sim 1-6$ at $z \sim 6$ (e.g., Sokasian et al. 2003; Iliev et al. 2006; Pawlik et al. 2009; Shull et al. 2012; Finlator et al. 2012). Pawlik et al. (2009) give

$C_{\rm HII} = 3.2$ for $z \le 10$
$C_{\rm HII} = 1 + \exp(-0.28z + 3.59)$ for $z > 10$,   (4)

which we will use in the calculations below. As we demonstrate later, the value $C_{\rm HII} = 3.2$ at z = 5.7 is concordant between considerations of global and local ionization balances.

2.2. Local Balance of Ionization and Recombination

A second, independent determination of the ionizing photon production rate can be obtained from the Ly$\alpha$ optical depth around the cosmic mean density, $\tau_{\rm Ly\alpha}$, i.e., the Gunn & Peterson (1965) optical depth, at z = 5.7, where observational measurements are available. Because of the large cross section of neutral hydrogen for Ly$\alpha$ scattering, $\tau_{\rm Ly\alpha}$ is the most sensitive probe of the neutral medium in the low neutral-fraction regime. From the SDSS observations of high-redshift quasar absorption spectra, $\tau_{\rm Ly\alpha}$ is directly measured (Fan et al. 2002; Fan et al. 2006).
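As a quick numerical cross-check of the global constraint above (Eqs 1-4): with the quoted recombination coefficients, clumping factor and mean hydrogen density, the minimum comoving emissivity at z = 5.7 should come out near 3.4 x 10^50 photons s^-1 cMpc^-3. The constant names in the sketch below are ours.

```python
import numpy as np

ALPHA_B = {1e4: 2.59e-13, 2e4: 2.52e-13}      # case B H recombination, cm^3 s^-1
ALPHA_B_HEI = {1e4: 2.73e-13, 2e4: 1.55e-13}  # case B HeI recombination, cm^3 s^-1
N_H0 = 2.0e-7                                  # mean H density at z = 0, cm^-3 (Omega_b = 0.048)
YP = 0.24
CM_PER_MPC = 3.086e24

def clumping_factor(z):
    """Eq (4), from Pawlik et al. (2009)."""
    return 3.2 if z <= 10 else 1.0 + np.exp(-0.28 * z + 3.59)

def ndot_ion_global(z, T=1e4):
    """Eq (3): minimum ionizing emissivity (photons s^-1 per comoving Mpc^3)
    needed to balance H and HeI recombinations in the already ionized IGM."""
    helium = YP / (4.0 * (1.0 - YP))
    n_H = N_H0 * (1.0 + z)**3                  # proper mean H density
    rate_per_cm3 = (clumping_factor(z) * (1.0 + helium) * n_H**2
                    * (ALPHA_B[T] + helium * ALPHA_B_HEI[T]))
    # convert from proper cm^-3 to comoving Mpc^-3
    return rate_per_cm3 * CM_PER_MPC**3 / (1.0 + z)**3

print(f"{ndot_ion_global(5.7):.2e}")   # ~3.4e50, cf. Eq (3)
```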
When these measurements are analyzed in conjunction with density distributions of the IGM from hydrodynamic simulations, one can infer both the volume-weighted neutral fraction and the ionization rate $\Gamma$, expressed in units of $10^{-12}\,{\rm s^{-1}}$ as $\Gamma_{-12}$. Because the mean density regions that determine the volume-weighted neutral fraction are well resolved in simulations (i.e., the simulation resolution is much finer than the Jeans scale of the photoionized IGM), the uncertainty on the determined volume-weighted neutral fraction is small and does not depend sensitively on cosmological parameters, either. The analysis performed by Cen & McDonald (2002) uses a smaller sample of SDSS quasars coupled with the simulations of Cen et al. (1994). The analysis performed by Fan et al. (2006) utilizes a larger quasar sample and the density distribution function of Miralda-Escudé et al. (2000). Both studies derive, independently, $\Gamma_{-12} \sim 0.20$. For the subsequent calculations, we will use

$\Gamma_{-12} = 0.20^{+0.11}_{-0.06}$   (5)

at z = 5.7 from Fan et al. (2006). Under the assumption that the spatial scales of fluctuations (or clustering scales) for both sources and sinks are substantially smaller than the mean free path $\lambda_{\rm mfp}$ of LyC photons, the (approximately uniform) ionizing flux at any spatial point is

$F_{\rm ion} = \int_0^\infty \frac{\dot N_{\rm ion,IGM}}{4\pi r^2}\,e^{-r/\lambda_{\rm mfp}}\,4\pi r^2\,dr = \dot N_{\rm ion,IGM}\,\lambda_{\rm mfp}$,   (6)

where $\dot N_{\rm ion,IGM}$ is the mean emissivity of ionizing photons entering the IGM. We note that the 2-point correlation length of galaxies at z = 5.7 is 4-5 cMpc (e.g., Ouchi et al. 2010), much smaller than $\lambda_{\rm mfp} \sim 30-60$ cMpc, which we will discuss later. Therefore, the above assumption is a good one, so long as stellar sources are the main driver of cosmological reionization. We expect radiation flux fluctuations to be of the order of the ratio of the two length scales above, i.e., ~10%. As we will show later, in the context of the $\Lambda$CDM model, $\lambda_{\rm mfp}$ depends on $\Gamma$ approximately as $\lambda_{\rm mfp} \propto \Gamma^{-0.28}$. Thus, we expect that the uniform radiation assumption is accurate statistically for computing the mean $\lambda_{\rm mfp}$ at the 1-3% level, with negligible systematic biases. The hydrogen ionization rate is

$\Gamma = F_{\rm ion}\,\bar\sigma_{\rm ion} = \dot N_{\rm ion,IGM}\,\lambda_{\rm mfp}\,\bar\sigma_{\rm ion}$,   (7)

where $\bar\sigma_{\rm ion}$ is the spectrum-weighted mean photoionization cross section,

$\bar\sigma_{\rm ion} \equiv \frac{\int_{13.6\,{\rm eV}}^{\infty} (f_\nu/h\nu)\,\sigma_H(\nu)\,d\nu}{\int_{13.6\,{\rm eV}}^{\infty} (f_\nu/h\nu)\,d\nu}$,   (8)

where $\sigma_H(\nu)$ is the photon energy-dependent hydrogen ionization cross section and $f_\nu$ is the ionizing photon spectrum. We will use $f_\nu$ for Pop II stars of metallicity $Z = 0.05\,Z_\odot$ from Tumlinson et al. (2001), which may be approximated as

$f_\nu \propto \nu^{0}$ for $\nu$ = 13.6-24.6 eV
$f_\nu \propto \nu^{-1}$ for $\nu$ = 24.6-46 eV
$f_\nu \propto \nu^{-\infty}$ for $\nu$ > 46 eV,

which results in the fiducial value that we will use in our calculations at z = 5.7,

$\bar\sigma_{\rm ion} = 3.16\times10^{-18}\,{\rm cm^2}$.   (9)

Combining Eqs (5, 7, 9) gives the constraint on the comoving emissivity at z = 5.7 from the Gunn-Peterson optical depth, named the "local constraint",

$\dot N_{\rm ion,local} = 2.7\times10^{50}\left(\frac{\Gamma_{-12}}{0.2}\right)\left(\frac{\bar\sigma_{\rm ion}}{3.16\times10^{-18}\,{\rm cm^2}}\right)^{-1}\left(\frac{\lambda_{\rm mfp}}{7.6\,{\rm pMpc}}\right)^{-1}{\rm cMpc^{-3}\,s^{-1}}$.   (10)
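Eq (7) can be checked numerically: inserting the fiducial numbers of Eq (10), a comoving emissivity of 2.7 x 10^50 cMpc^-3 s^-1 with lambda_mfp = 7.6 pMpc should return Gamma_-12 ~ 0.2. A minimal sketch (unit handling and names are ours, only the quoted fiducial numbers come from the text):

```python
CM_PER_MPC = 3.086e24
SIGMA_ION = 3.16e-18   # spectrum-weighted mean photoionization cross section, cm^2 (Eq 9)

def gamma_from_emissivity(ndot_comoving, lam_mfp_pMpc, z=5.7):
    """Eq (7): Gamma = Ndot_ion,IGM * lambda_mfp * sigma_ion, with the comoving
    emissivity (photons s^-1 cMpc^-3) converted to a proper photon production rate density."""
    ndot_proper = ndot_comoving * (1.0 + z)**3 / CM_PER_MPC**3   # cm^-3 s^-1
    return ndot_proper * lam_mfp_pMpc * CM_PER_MPC * SIGMA_ION   # s^-1

# inverting Eq (10): Ndot = 2.7e50 cMpc^-3 s^-1 with lambda_mfp = 7.6 pMpc
print(f"{gamma_from_emissivity(2.7e50, 7.6):.2e}")   # ~2.0e-13 s^-1, i.e. Gamma_-12 ~ 0.20
```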
In Eq (10) it is seen that there is a significant, linearly inverse dependence of $\dot N_{\rm ion,local}$ on $\lambda_{\rm mfp}$, which we now discuss at length, observationally here and theoretically in the next subsection. Traditionally, $\lambda_{\rm mfp}$ is determined by counting the incidence frequency of Lyman limit systems (LLSs) (e.g., Storrie-Lombardi et al. 1994; Stengler-Larrea et al. 1995; Songaila & Cowie 2010; Ribaudo et al. 2011; O'Meara et al. 2013) and is generally found to be in the range $\lambda_{\rm mfp}$ = 5-10 pMpc at z = 5.7, when extrapolated from lower redshift trends. This method of determining $\lambda_{\rm mfp}$ contains some ambiguity as to the dependence of the incidence frequency on the exact choice of column density threshold for LLSs, as well as uncertainties related to absorption system identifications (such as line blending) and collective absorption due to clustering of absorbers. A more direct approach to determining $\lambda_{\rm mfp}$ is to measure the optical depth at the Lyman limit directly, as pioneered by Prochaska et al. (2009). A recent application of that technique to a large sample of (163) high-redshift quasars is cast into the fitting formula $\lambda_{\rm mfp} = 37[(1+z)/5]^{-5.4\pm0.4}$ pMpc, which covers up to redshift z = 5.5 (Worseck et al. 2014). Extrapolating this formula to z = 5.7 results in a median value of 7.6 pMpc,

$\lambda_{\rm mfp} = 7.6^{+1.0}_{-0.8}$ pMpc,   (11)

with 1 and 2$\sigma$ ranges of 6.8-8.6 pMpc and 6.0-9.6 pMpc, respectively. It is seen that the directly measured $\lambda_{\rm mfp}$ is in broad agreement with those based on counting LLSs, which is reassuring. Nevertheless, it is prudent to bear in mind the significant caveat that $\lambda_{\rm mfp}$ at z = 5.7 is not directly observed but requires extrapolation from lower redshift data.

[Figure 1; axis and legend residue removed.] Fig. 1.— shows four independent sets of constraints on the $\Gamma-\lambda_{\rm mfp}$ plane: (1) the observed $\lambda_{\rm mfp}$ from Worseck et al. (2014) based on LyC optical depth observed at z < 5.5 extrapolated to z = 5.7 (see Eq 11), shown as the red solid curve (mean), thick red dashed curves (1$\sigma$) and thin red dashed curves (2$\sigma$); (2) the observationally inferred 1$\sigma$ range of $\Gamma$ based on the measurement of Ly$\alpha$ absorption optical depth at z = 5.7 from Fan et al. (2006), shown as the two vertical green dashed lines (see Eq 5); (3) the lower bound based on a global balance between emissivity and recombination with Eq 3, assuming clumping factor $C_{\rm HII}$ = (3.2, 4.5, 9.6) and gas temperature $T = 10^4$ K, shown as dotted black (thick, medium thick, thin) curves; (4) the self-consistently calculated relation between $\Gamma$ and $\lambda_{\rm mfp}$ in the standard $\Lambda$CDM model with a lower halo mass cutoff of $(1.6\times10^8, 5.8\times10^7, 2.7\times10^7, 8.6\times10^6)\,M_\odot$, respectively, corresponding to a virial temperature cutoff of $T_{\rm v,cutoff} = (10^4, 5\times10^3, 3\times10^3, 1.4\times10^3)$ K.

2.3. Concordance of Independent Observations at z = 5.7

We now combine three independent sets of observational constraints on $\dot N_{\rm ion}$, $\Gamma$ and $\lambda_{\rm mfp}$ on the $\Gamma-\lambda_{\rm mfp}$ plane, shown in Figure 1: (1) the observed $\lambda_{\rm mfp}$ from Worseck et al. (2014)
based on Lyman continuum radiation optical depth at z = 5.7 (see Eq 11) are shown as the red solid curve (mean), thick red dashed curves (1\u03c3) and thin red dashed curves (2\u03c3); (2) the observationally inferred 1\u03c3 range of \u0393 based on measurement of Ly\u03b1 absorption optical depth at z = 5.7 from Fan et al. (2006) are shown as the two vertical green dashed lines (see Eq 5); (3) lower bound based on a global balance between emissivity and recombination with Eq 3 assuming clumping factor CHII = (3.2, 4.5, 9.6) and gas temperature T = 104 K, shown as dotted black (thick, median thick, thin) curves. To be conservative, we will use the 2\u03c3 range of \u03bbmfp from Worseck et al. (2014) for our \f\u2013 7 \u2013 discussion, because of the possible additional, systematic uncertainty of using an extrapolated value from the observed highest redshift of z = 5.5 to z = 5.7. Thus, the allowed parameter space is enclosed by the two thin dashed red horizontal lines and the two vertical dashed green lines. This space is then further constrained by the requirement that only to the right of each of the dotted black curves is attainable, depending on the assumed clumpying factor CHII. The placement of this additional requirement on the plane suggests that CHII > 5 at z = 5.7 may not be feasible but the values in Eq 4 that is obtained from recent radiation hydrodynamic simulations and adopted here are fully consistent with this constraint. It is by no means guaranteed a priori that there is any parameter space left when all these three independent observational constraints are considered, due to uncertainties in individual observations. Hence, the fact that there is suggests a concordance among the independent observations. 2.4. Global Stellar Emissivity of Ionizing Photons at z = 5.7 Figure 1 in \u00a72.4 summarizes the current state of constraints on the required emissivity of ionizing photons in the IGM at z = 5.7, in order to (1) keep the IGM ionized globally, (2) keep the IGM ionized locally as demanded by the optical depths probed by the hydrogen Lyman series absorption lines. The multi-faceted agreement is indeed quite remarkable, providing a validation of the different observations at z = 5.7 (in some cases extrapolation is needed) in the post-overlap epoch. We now address \u201csources\" of ionizing photons, in a fully self-consistent fashion, in the standard cold dark matter model. We follow the approach taken by Trac et al. (2015), to which the reader is referred for a more detailed description. Brie\ufb02y, the method uses direct observations of galaxy luminosity functions at high redshift in the Hubble UDF to calibrate the star formation parameters in the model based on halo mass accretion rate functions in the \u039bCDM model. Figure 2 shows a comparison of rest-frame FUV luminosity functions between the model based on the most recent cosmological parameters and observations at various redshifts. The observed LFs are most reliable at z \u22646 and become less so towards higher redshifts, and perhaps less than trustworthy beyond z = 8 due to lack of spectroscopic con\ufb01rmation at present. For a given small region/area, such as the UDF , cosmic variance becomes more problematic towards higher redshift. Additionally, it is possible that the observed LFs at high redshifts, in the midst of reionization, may be masked by possible reionization effects; this issue is signi\ufb01cantly more acute for Ly\u03b1 emitting galaxies (e.g., Mesinger et al. 
2004; Haiman & Cen 2005; Dijkstra et al. 2007). These problems can be circumvented, if we normalize the model at z = 6 and use the \u201cglobal\" LFs from the model at high redshifts where direct observations lack or are unreliable. We take this approach. From Figure 2 we see that the model LFs match observations well at z = 6, 7. The agreement is still good at z = 8, albeit with \u201cnoisier\" observational data. There is very little to \f\u2013 8 \u2013 -24 -22 -20 -18 -16 -14 -12 -10 MUV -7 -6 -5 -4 -3 -2 -1 0 log dn/dMUV (cMpc-3) z=6, H0 = 70, \u2126M = 0.30 z=7, n=0.96, \u03c38 = 0.82 z=8 z=10 z=15 z=6, Bouwens+15 z=7, Bouwens+15 z=8, Bouwens+15 z=10, Bouwens+15 Fig. 2.\u2014 shows the galaxy luminosity functions predicted by the \u039bCDM model at z = 6 (red solid curve), 7 (blue dashed curve), 8 (magenta dotted curve), 10 (cyan dot-dashed curve) and 15 (black dotted curve), which are compared to the observations at the four corresponding redshifts, shown as various symbols with corresponding colors. The observational data are from Bouwens et al. (2015). glean from the comparison at z = 10, simply because the observational data lack both quantity and quality. Integrating the Schechter \ufb01ts of the Bouwens et al. (2015) LF at z = 6 yields the intrinsic ionizing photon production rate from galaxies of \u02d9 Nion,int = 1051.52cMpc\u22123 s\u22121 for MUV,limit = \u221212 = 1051.57cMpc\u22123 s\u22121 for MUV,limit = \u221210 = 1051.61cMpc\u22123 s\u22121 for MUV,limit = \u22128. (12) In obtaining \u02d9 Nion,int, we have used a relation between ionizing photo production rate per unit FUV spectral density from (Robertson et al. 2013), \u03beion \u2261 \u02d9 Nion/cMpc\u22123 s\u22121 LUV/erg s\u22121 Hz\u22121 cMpc\u22123 = 1025.2, (13) which is based on the observed FUV spectral index \u03b2 \u223c\u22122 for high redshift galaxies. Note \u03b2 is in de\ufb01ned in spectrum f\u03bbd\u03bb \u221d\u03bb\u03b2d\u03bb, or f\u03bdd\u03bd \u221d\u03bd\u22122\u2212\u03b2d\u03bd, in the FUV spectral range. The accuracy of the normalization of our model is such that the model LF at z = 6 gives the same integrated light density as the observed one to the third digit. \f\u2013 9 \u2013 Integrating the LF based on the \u039bCDM model yield \u02d9 Nion,int(z = 5.7) = 1051.6cMpc\u22123 s\u22121, weakly dependent on MUV lower limit. Dividing \u02d9 Nion,IGM in Eq 1 by \u02d9 Nion,int(z = 5.7) gives the mean luminosity-weighted escape fraction of Lyman continuum fesc,z=5.7 \u2261 \u02d9 Nion,IGM \u02d9 Nion,int = 10 \u02d9 Nion,IGM 1050.6cMpc\u22123 s\u22121 ! \u0012 \u03beion 1025.2 \u0013\u22121 %. (14) We will show in \u00a74 how \u02d9 Nion,IGM plays a key role in determining a lower bound on \u03c4e and how that in turn allow for a strong constraint on \u03bbmfp hence Mcut. 3. Reionization Histories Constrained by the State of IGM at z = 5.7 Any reionization history must satisfy the state of the IGM at z = 5.7 and the fact that the IGM is opaque to Ly\u03b1 photon at just above that redshift. In this sense, the history of cosmological reionization becomes a boundary value problem, where we solve the evolution of HII volume fraction QHII with the following equation: dQHII(z) dt = \u02d9 Nion,IGM(z) nH,0 \u2212QHII(z) trec(z) , (15) where nH,0 is the comoving mean number hydrogen density, and trec(z) = [CHII(z) \u03b1B(T) (1 + Yp/4[1 \u2212Yp]) nH,0 (1 + z)3]\u22121 is the mean recombination time of ionized hydrogen in HII regions. 
Any solution to Eq 15 satisfies the following two boundary conditions: fesc Ṅion,int σ̄ion λmfp|z=5.7 = Ṅion,IGM σ̄ion λmfp|z=5.7 = 0.20^{+0.11}_{−0.06} × 10^{-12} s^{-1} (16) and QHII|z=5.7 = 1.0. (17) In Eq 15 at z > 5.7, since Ṅion,int(z) is fixed by the ΛCDM model (see Figure 2), we are left with only one degree of freedom, namely, the evolution of fesc with redshift. We model the redshift evolution of fesc using a simple powerlaw form: fesc(z) = fesc,z=5.7 [(1 + z)/6.7]^χ. (18) Note that fesc(z) in Eq 3, like fesc,z=5.7 in Eq 14, is averaged over all the galaxies at a given redshift; in other words, fesc(z) is the ratio of the total number of ionizing photons entering the IGM to the total number of ionizing photons produced. There is one additional physical process that is largely unconstrained by the state of the IGM at z = 5.7 but is important for the overall reionization history and the integral electron scattering optical depth. That is, a change of IMF at some high redshift from regular Pop II stars to a perhaps more top-heavy and/or metal-free IMF, which may lead to a quantitative transition in the ionizing photon production efficiency per unit stellar mass, εion. Given our lack of knowledge with regard to this process, we choose to model εion generally, albeit in a simple way, as εion = εion,PopII + (εion,PopIII − εion,PopII) H(Ω∗[z] − ΩPopIII,crit), (19) where εion,PopIII and εion,PopII are the ionizing photon production efficiencies per unit stellar mass for the Pop III and Pop II IMF, respectively. We adopt εion,PopII = 3500 photons/baryon and εion,PopIII = 70000 photons/baryon (e.g., Bromm et al. 2001), resulting in a ratio of εion,PopIII/εion,PopII = 20, which enters our calculations. The transition between Pop III and Pop II is modeled by a smoothed Heaviside step function H(Ω∗[z] − ΩPopIII,crit) = (1 + exp[−2(Ω∗(z)/ΩPopIII,crit − 1)/σPopIII])^{-1}, (20) where Ω∗(z) is the amount of stars formed by redshift z computed in the ΛCDM model in units of the critical density, ΩPopIII,crit controls the transition from Pop III to Pop II, which occurs when the amount of stars formed by some redshift in units of the critical density has reached this value, and σPopIII controls the width of this transition in units of ΩPopIII,crit; when σPopIII = 0, one recovers the unsmoothed Heaviside step function. So far, we have three parameters to model the evolution of ionizing photons beyond z = 5.7: χ, ΩPopIII,crit and σPopIII. As we will show later, the dependence of the results on σPopIII is sufficiently weak that σPopIII can effectively be considered fixed, as long as its value is not too large. Therefore, we effectively have two free parameters in our model, χ and ΩPopIII,crit. Given that we have one equation, Eq 15, the general expectation is that there will be a family of solutions that will be able to meet the two boundary conditions, Eq 16 and 17. Conversely, though, solving Eq 15 to obtain QHII(z = 5.7) = 1 does not necessarily result in an IGM at z = 5.7 that is consistent with the constraint imposed by the observations of Lyα optical depth, i.e., Eq 3, a point already noted by others (e.g., Robertson et al. 2013).
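The boundary-value problem of Eqs (15)–(20) can be integrated numerically in a few lines; the sketch below is one possible implementation, not the code used for the paper. The intrinsic emissivity Ṅion,int(z) and Ω∗(z) are assumed placeholder fits (in the text they come from the calibrated ΛCDM star-formation model), and the Pop III weighting is applied before the Ω∗ = ΩPopIII,crit transition, following the verbal description of Eq (20).

```python
import numpy as np

# --- cosmology and atomic constants (assumed generic values, not from the paper) ---
h, Om, OL, Ob, Yp = 0.70, 0.27, 0.73, 0.045, 0.24
H0   = h * 3.2408e-18            # s^-1
Mpc  = 3.0857e24                 # cm
m_H  = 1.6726e-24                # g
rho_c = 1.8785e-29 * h**2        # g cm^-3
nH0_cm3 = (1.0 - Yp) * Ob * rho_c / m_H   # comoving H number density, cm^-3
nH0 = nH0_cm3 * Mpc**3                     # same, in cMpc^-3
alpha_B = 2.59e-13               # cm^3 s^-1 at T = 1e4 K

def H_z(z):
    return H0 * np.sqrt(Om*(1+z)**3 + OL)

def t_rec(z, C_HII=3.2):         # recombination time of Eq (15), seconds
    return 1.0 / (C_HII * alpha_B * (1 + Yp/(4*(1-Yp))) * nH0_cm3 * (1+z)**3)

def f_esc(z, f57=0.10, chi=1.0):             # Eq (18)
    return f57 * ((1+z)/6.7)**chi

def popIII_boost(z, Om_crit=1e-6, sigma=0.25, ratio=20.0):    # Eqs (19)-(20)
    Om_star = 2.5e-3*(1+z)**-2.1             # placeholder stand-in for Omega_*(z)
    H_step  = 1.0/(1.0 + np.exp(-2*(Om_star/Om_crit - 1)/sigma))
    # Pop III photons boost the emissivity before the transition (verbal description of Eq 20)
    return 1.0 + (ratio - 1.0)*(1.0 - H_step)

def Ndot_int(z):
    # Placeholder intrinsic emissivity (cMpc^-3 s^-1); the paper takes this from
    # the calibrated LCDM model of Figure 2, which is not reproduced here.
    return 10**51.6 * np.exp(-(z - 5.7)/3.0)

def solve_Q(chi=1.0, f57=0.10, z_start=30.0, z_end=5.7, nz=5000):
    """Integrate Eq (15) from z_start down to z_end."""
    z = np.linspace(z_start, z_end, nz)
    Q = np.zeros_like(z)
    for i in range(1, nz):
        zi, dz = z[i-1], z[i] - z[i-1]       # dz < 0
        dQdt = f_esc(zi, f57, chi)*popIII_boost(zi)*Ndot_int(zi)/nH0 - Q[i-1]/t_rec(zi)
        dtdz = -1.0/((1+zi)*H_z(zi))
        Q[i] = min(1.0, max(0.0, Q[i-1] + dQdt*dtdz*dz))
    return z, Q

z, Q = solve_Q()
print("Q_HII(z=5.7) =", Q[-1])
```

The two free parameters of the text, χ and ΩPopIII,crit, enter through `f_esc` and `popIII_boost`; scanning them and checking the boundary conditions of Eqs (16)–(17) reproduces the logic behind Figure 3.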
For each solution of QHII(z), we compute the total electron scattering optical depth from z = 0 to the recombination redshift zrec by τe = ∫_0^{zrec} fe (1 − fs − fn) QHII σT nH,0 [c/H(z)] (1 + z)^{-1} dz, (21) where fe accounts for the redshift evolution of the helium contribution; we use fe = (0.76 + 0.24/0.76/4) for z > 2.8 and fe = (0.76 + 0.24/0.76/2) for z ≤ 2.8, approximating He II reionization as a step function at z = 2.8, which is consistent with the observed He II absorption optical depth data of Worseck et al. (2011), interpreted in the context of the He II reionization simulations of McQuinn et al. (2009). The terms fs and fn account for the stellar density and neutral hydrogen density, respectively, which do not contribute to the electron density. Wilkins et al. (2008) give Ω∗(z = 0) = 2.5 × 10^{-3}, while Grazian et al. (2015) yield Ω∗(z = 6) = 3.7 × 10^{-5}. We interpolate between these two points to find an approximate stellar evolution fit, Ω∗(z) = 2.5 × 10^{-3} (1 + z)^{-2.1}, translating to fs = 0.052 (1 + z)^{-2.1}. Post-reionization, most of the neutral hydrogen resides in DLAs, and observational data on the evolution of DLAs are available, albeit with significant errorbars. We approximate the data presented in Noterdaeme et al. (2009) by piece-wise powerlaws as follows: ΩHI = 0.4 × 10^{-3} at z = 0, which evolves linearly to ΩHI = 0.9 × 10^{-3} at z = 0.5, remains at ΩHI = 0.9 × 10^{-3} at z = 0.5−3, then rises linearly to ΩHI = 1.2 × 10^{-3} at z = 3.5, followed by a constant ΩHI = 1.2 × 10^{-3} at z = 3.5−5.7. Fig. 3.— shows the contours of τe (red) and Ṅion,IGM(z = 5.7) (black) in the χ − ΩPopIII,crit plane for σPopIII = 0.25. The red contours are labelled with τe values, whereas the black contours are labelled with log Ṅion,IGM(z = 5.7) values. The four blue solid dots indicate four possible solutions of QHII(z) that yield total electron optical depths of τe = (0.055, 0.064, 0.073, 0.082), respectively, from left to right. The three green solid dots indicate another set of three possible solutions of QHII(z) that yield total electron optical depths of τe = (0.082, 0.073, 0.064), respectively, from top to bottom. The black solid dot is a solution with τe = 0.055. These specific solutions are discussed in the text. Figure 3 shows the case with σPopIII = 0.25, to be examined in greater detail. We have examined cases with σPopIII = 0.5, 0.25, 0.05, 0.01 and find that the results, as displayed in Figure 3 in terms of the contours, depend weakly on σPopIII.
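A minimal numerical version of the τe integral of Eq (21) is sketched below; the DLA term fn is set to zero for brevity, the density is treated as the comoving baryon number density in hydrogen-atom units so that the fe factor quoted above supplies the electron bookkeeping, and the cosmological constants are generic assumed values. It is meant only to illustrate the accounting, not to reproduce the quoted contours.

```python
import numpy as np

# Electron-scattering optical depth, Eq (21), for a given ionization history Q_HII(z).
h, Om, OL, Ob, Yp = 0.70, 0.27, 0.73, 0.045, 0.24
H0, m_H = h*3.2408e-18, 1.6726e-24
sigma_T = 6.652e-25                       # cm^2
c_cm = 2.998e10
rho_c = 1.8785e-29*h**2
n0 = Ob*rho_c/m_H                         # comoving baryon number density in units of m_H, cm^-3

def H_z(z):
    return H0*np.sqrt(Om*(1+z)**3 + OL)

def f_e(z):      # helium contribution; He II reionization as a step at z = 2.8 (as in the text)
    return 0.76 + 0.24/0.76/4 if z > 2.8 else 0.76 + 0.24/0.76/2

def f_s(z):      # fraction locked in stars, using the fit quoted in the text
    return 0.052*(1+z)**-2.1

def tau_e(z_grid, Q_grid, f_n=0.0):
    # f_n (neutral H in DLAs) is set to zero here for brevity; the text uses a
    # piecewise Omega_HI(z) fit to Noterdaeme et al. (2009).
    z = np.asarray(z_grid); Q = np.asarray(Q_grid)
    fe = np.array([f_e(zi) for zi in z])
    # with a comoving density, n_e scales as (1+z)^3 and dt/dz = 1/[(1+z)H], hence (1+z)^2
    integrand = fe*(1.0 - f_s(z) - f_n)*Q*sigma_T*n0*(c_cm/H_z(z))*(1+z)**2
    return np.sum(integrand)*(z[1] - z[0])

# example: an instantaneous-reionization history completing at z = 5.7
z = np.linspace(0.0, 30.0, 3000)
Q = np.where(z <= 5.7, 1.0, 0.0)
print("tau_e =", tau_e(z, Q))
```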
We note that the conclusions obtained are generic and, more importantly, the solution family obtained that remains viable is very insensitive to the choice of σPopIII. It proves useful for our discussion to rewrite one of the boundary value constraints, namely Eq 16, as Ṅion,IGM(z = 5.7) = (1.8 − 4.1) × 10^{50} [λmfp(z = 5.7)/7.6 pMpc]^{-1} cMpc^{-3} s^{-1}, (22) where the range inside the first pair of parentheses on the right hand side corresponds to the 1σ lower and upper limits of Eq 5. In the parameter space of χ − ΩPopIII,crit shown in Figure 3 we have solutions to Eq 15 that satisfy Eq 17, i.e., the universal reionization completes exactly at z = 5.7, with varying Ṅion,IGM(z = 5.7) shown as the black contours. Superimposed as the red contours are the values of τe for each solution. Fig. 4.— shows each of the four solutions of QHII(z) (blue curves) indicated by the four blue solid dots in Figure 3, along with the respective cumulative τe (red curves). It is now clear that the value of Ṅion,IGM(z = 5.7) plays a key role in determining the viability of each solution of QHII(z). Under the two boundary conditions, Eq 16 and 17, two families of solutions are possible, each of which is simultaneously consistent with the latest values of τe from the Planck Collaboration et al. (2016) observations. Indicated by the four blue dots in Figure 3 are four solutions in the (we call) "Pop III-supported" family with τe = (0.055, 0.064, 0.073, 0.082) corresponding to the (central, +1σ, +2σ, +3σ) values from Planck Collaboration et al. (2016). Figure 4 shows each of the four solutions of QHII(z) (blue curves) indicated by the four blue solid dots in Figure 3, along with the respective cumulative τe (red curves). The common characteristics of the solutions in this family are that (1) χ < 0, indicating that the escape fraction decreases with increasing redshift, and (2) the Pop III stars make a significant and late contribution to the overall ionizing photon budget. The combination of negative χ and a late, significant Pop III contribution permits a slight dip in the ionized fraction at a redshift slightly higher than z = 5.7, to satisfy Eq 17. This set of solutions, however, may be inconsistent with some other independent observations. Here we provide some notable examples. Fig. 5.— shows contours of the ratio of the number of ionizing photons produced per hydrogen atom (red), along with contours of τe (blue) and of log Ṅion,IGM(z = 5.7) (black). Figure 5 shows contours of the ratio of the number of ionizing photons produced per hydrogen atom (red).
Fang & Cen (2004) perform a detailed analysis of the metal enrichment history and show that the Pop III to Pop II transition occurs when 3 − 20 ionizing photons per hydrogen atom, depending on the model for the IMF, have been produced by Pop III stars, based on considerations of the primary atomic cooling agents, CII and OI, at low temperature, corresponding to [C/H]crit = −3.5 and [O/H]crit = −3.1 (Bromm & Loeb 2003). For the four solutions, indicated by the four blue dots in Figure 5, we see that much higher values, 80 − 110 ionizing photons per hydrogen atom, have been produced at the model transition ΩPopIII,crit, in order to attain the solutions. Note that in the scenario of dust cooling induced fragmentation (Schneider & Omukai 2010), the critical transition metallicity is 1 − 3 orders of magnitude lower, which is still more stringent. These considerations indicate that these QHII(z) solutions are self-inconsistent, in the sense that the required Pop III contribution in order for the solutions to be possible is unattainable. A second example concerns the neutral fraction of the IGM during the epoch of reionization at z > 6. In a recent careful analysis of possible signatures of damping wing absorption profiles of the Lyα emission line of quasar J1120+0641 at z = 7.1, under the assumption that DLAs, being sufficiently rare, are not responsible for the absorption of the Lyα emission redward of the line, Greig et al. (2016) conclude that the mean neutral fraction of the IGM is 0.40^{+0.41}_{−0.32} (2σ). All of the four solutions shown in Figure 4 have a mean neutral fraction significantly less than a few percent, and thus are ruled out at the > 2.5σ level. Fig. 6.— shows each of the four solutions of QHII(z) (blue curves) indicated by the three green (for τe = (0.064, 0.073, 0.082)) and one black (for τe = 0.055) solid dots in Figure 3, along with the respective cumulative τe(> z) (red curves). Indicated by the magenta solid dot is an observational measurement of the neutral fraction of the IGM at z = 7.1 by Greig et al. (2016) based on the damping wing signature imprinted on the red side of the Lyα emission line of quasar J1120+0641. Let us now turn to the other solution family, with reduced Pop III contribution that is additionally confined to much higher redshift. Figure 6 shows each of the four solutions of QHII(z) (blue curves) indicated by the three green and one black solid dots in Figure 3, along with the respective cumulative τe (red curves). Several trends shared by solutions in this family may be noted. First, QHII(z) increases exponentially as a function of redshift in the range of z = 5.7 to z = 9 − 14, depending on the value of the total τe; a lower total τe corresponds to a higher redshift, but a lower value of the QHII(z) base, from which the exponential growth starts. All four solutions are consistent with the observationally inferred mean neutral fraction of the IGM at z = 7.1, shown as a magenta dot with its 1σ range (Greig et al. 2016). Second, there is a distinct, separate peak in QHII(z) at z = 14 − 18, for τe = 0.082 − 0.064 (in that order), with heights of 0.4 − 0.07 (in the same order).
This high redshift peak of QHII(z) is due to contributions from Pop III stars. The exact height and duration of this peak may depend on the assumptions concerning the transition from Pop III to Pop II temporally and spatially, which will require detailed modeling beyond the scope of this work. We note, however, that the results do not change significantly when values of σPopIII = 0.01 − 1 are used (0.25 is used for the case shown in Figure 6), suggesting that the existence, the QHII(z) value of the peak and the peak redshift are fairly robust. We also note that all these solutions lie below ΩPopIII,crit = 10^{-6.4}, which, when compared with Figure 5, indicates a consistency in terms of Pop III stars forming in a metallicity regime that is physically plausible, if low temperature atomic cooling, not dust cooling, dictates the fragmentation of star-forming gas clouds. Finally, it is seen that these solutions have χ ≥ 0, indicating that the escape fraction increases with increasing redshift, perhaps not an unexpected result based on the physical considerations that galaxies at high redshifts are less massive, their star-formation episodes more bursty and consequently their interstellar medium more porous, allowing more ionizing photons to escape. Simulation results are consistent with this trend (e.g., Kimm & Cen 2014). In summary, this solution family is self-consistent. If, however, τe = 0.055 holds up, there is no solution of QHII(z) with log Ṅion,IGM(z = 5.7) = 50.71. In order to get a solution with τe = 0.055, one requires log Ṅion,IGM(z = 5.7) = 50.765, which, with the conservative choice of the +1σ value Γ−12 = 0.31 (see Eq 5), in turn requires λmfp(z = 5.7) = 5.3 pMpc, which would be at about the 2.9σ lower bound of the observationally inferred value. In combination with the +1σ value of Γ−12 used, such an event would be a 3.0σ occurrence, suggesting tension, which we examine in the next section. 4. λmfp(z = 5.7): A Strong Test of Matter Power Spectrum on Small Scales We were left in a state of significant tension between accommodating τe = 0.055 and λmfp(z = 5.7) based on the extrapolated observational data at z < 5.5 in §3. The tension may be alleviated if one chooses not to strongly advocate the central value of τe = 0.055 (Planck Collaboration et al. 2016) but instead emphasizes the harmonious concordance between λmfp(z = 5.7), Γ(z = 5.7) and τe ≥ 0.064. We take this discrepancy in a somewhat different way and suggest that the extrapolation of the lower redshift measurements of λmfp should be taken with caution, despite the smooth trend seen in the observed redshift range (z = 2.3 − 5.5). We take a step further yet to perform a theoretical analysis to better understand the physical origin of λmfp(z = 5.7) in the context of the standard cosmological model. It is useful to separate the overall λmfp into two components in the post-overlap epoch at z = 5.7, one due to the "translucent", general volume-filling low density IGM that collectively attenuates ionizing photons, and the other due to "opaque" disks (like LLSs) that block entirely all incident ionizing photons. We shall denote them λmfp,IGM and λmfp,halo, respectively. The total λmfp is λmfp = (λmfp,halo^{-1} + λmfp,IGM^{-1})^{-1}. (23)
The λmfp,IGM can be approximated by the volume-weighted neutral fraction of the IGM as λmfp,IGM = (σ̄ion fHI,vol nH,0 (1 + z)^3)^{-1} = 19.5 [(1 + z)/6.7]^{-3} [σ̄ion/(3.16 × 10^{-18} cm^2)]^{-1} [fHI,vol/(0.9 × 10^{-4})]^{-1} pMpc, (24) where fHI,vol = 0.9 × 10^{-4} is the volume-weighted neutral fraction of the IGM, inferred from the directly observed Lyα (and higher order Lyman transitions) optical depth at z = 5.7 (Fan et al. 2006). As we have argued earlier, while the mass-weighted neutral fraction determined from such a method may be significantly model-dependent, the volume-weighted neutral fraction is not expected to be, because it is free from the clumping factor dependence and most of the optical depth contributions stem from low-density regions of optical depth of order unity whose Jeans scales are typically resolved in most simulations used. λmfp,halo stems from self-shielding dense gas in halos. A computation of λmfp,halo may not seem a well posed problem at first sight, because it would appear to depend on both the abundance of halos and their cross sections (the sizes of the radiation blocking disks). It is not immediately obvious how one may precisely specify their cross sections, even if their abundance is known. We show that this ambiguity can be removed when considerations are given to the physical conditions of halo gas as a function of halo-centric radius and a "correct" definition of λmfp,halo is adopted, which we now describe. After the HII regions have overlapped in the aftermath of reionization, neutral gas in halos essentially becomes a set of disconnected, isolated islands that are increasingly self-shielded and optically thick to ionizing photons toward the centers of halos. Under the assumption of spherical symmetry, for a given halo, we can compute the column density as a function of halo-centric radius r, outside-in, as NHI(r) = ∫_r^∞ xHI(r') δ(r') nH,0 (1 + z)^3 dr', (25) where δ(r) ≡ n(r)/n̄ is the overdensity, for which we use the universal halo density profile (NFW, Navarro et al. 1997) with gas following mass over the relevant radial range (e.g., Komatsu & Seljak 2001). In the core region of a halo the gas density is constrained such that the gas entropy does not fall below the entropy of gas at the mean density and the cosmic microwave background temperature. In practice, the upper limit of the integral in Eq 25 is chosen where δ = 1 (i.e., the mean density), but its precise value makes no material difference to the calculated NHI(r) in the range of relevance. The local neutral fraction xHI(r) at radius r can be computed using the local balance between recombination and photoionization through a spherical radiative transfer: Γ exp[−NHI(r) σ̄ion] xHI(r) = [1 − xHI(r)]^2 [1 + Yp/4(1 − Yp)] αB(T) δ(r) nH,0 (1 + z)^3, (26) where Γ is the "background" ionization rate prior to significant attenuation when approaching the halo. We solve Eq (25, 26) numerically to obtain NHI(r) and xHI(r), for a given Γ.
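The outside-in solution of Eqs (25)–(26) amounts to accumulating NHI while solving a quadratic for xHI at each radius; a sketch is given below for gas tracing an NFW profile. The entropy-floor treatment of the halo core described above is omitted, the virial overdensity and concentration are assumed values, and the numerical constants (Γ, σ̄ion, αB) are the fiducial values quoted in the text.

```python
import numpy as np

# Outside-in solution of the self-shielding structure, Eqs (25)-(26); a sketch only.
Gamma    = 0.2e-12            # s^-1
sigma_LL = 3.16e-18           # cm^2, mean LyC cross section
alpha_B  = 2.59e-13           # cm^3 s^-1 (T = 1e4 K)
Yp, z    = 0.24, 5.7
nH_mean  = 1.88e-7*(1+z)**3   # proper mean H density, cm^-3
pkpc     = 3.0857e21          # cm

def delta_nfw(r_over_rv, c=5.0, Delta=178.0):
    """Gas overdensity at radius r (in virial radii), gas tracing an NFW halo (Delta assumed)."""
    m = np.log(1+c) - c/(1+c)
    x = np.maximum(r_over_rv*c, 1e-3)           # r/r_s, softened at the center
    return (Delta/3.0)*c**3/m / (x*(1+x)**2)

def xHI_local(N_HI, delta):
    """Neutral fraction from local photoionization-recombination balance, Eq (26)."""
    A = Gamma*np.exp(-N_HI*sigma_LL)
    B = alpha_B*(1 + Yp/(4*(1-Yp)))*delta*nH_mean
    return ((2*B + A) - np.sqrt((2*B + A)**2 - 4*B**2)) / (2*B)   # root of the quadratic with x <= 1

def profile(rv_pkpc=10.0, r_max_rv=3.0, n=3000):
    """Integrate Eq (25) from the outside in; returns r/rv, N_HI(r) and x_HI(r)."""
    r = np.linspace(r_max_rv, 1e-2, n)          # outside-in, in units of r_v
    dr_cm = abs(r[1]-r[0])*rv_pkpc*pkpc
    N_HI, x_HI = np.zeros(n), np.zeros(n)
    for i in range(n):
        d = delta_nfw(r[i])
        x_HI[i] = xHI_local(N_HI[i-1] if i else 0.0, d)
        N_HI[i] = (N_HI[i-1] if i else 0.0) + x_HI[i]*d*nH_mean*dr_cm
    return r, N_HI, x_HI

r, N_HI, x_HI = profile()
print("N_HI at 0.1 r_v: %.2e cm^-2" % np.interp(0.1, r[::-1], N_HI[::-1]))
```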
Fig. 7.— Top-left panel: shows the integrated column density (from outside inward down to the halo-centric radius r) as a function of r (in units of the virial radius rv), for two cases with virial radius rv equal to 1 pkpc (black solid curve) and 10 pkpc (red dashed curve) at z = 5.7 [with corresponding virial (temperature, mass) of (1.5 × 10^3 K, 9.7 × 10^6 M⊙) and (1.5 × 10^5 K, 9.7 × 10^9 M⊙), respectively], using fiducial values for various parameters: Γ−12 = 0.2, σ̄ion = 3.16 × 10^{-18} cm^2. For the NFW profile we use a concentration parameter C = 5 in both cases. Top-right panel: shows the cumulative cross section for ionizing photons of a halo, ALL(< rp), in units of the virial area (πrv^2) as a function of halo-centric radius in units of the virial radius rv for the two halos shown in the top-left panel of Figure 7. Bottom-left panel: shows the effective total cross section for Lyman continuum photons in units of the halo virial area as a function of halo virial radius at z = 5.7. Bottom-right panel: shows the differential function dALL,tot/d log Mh ≡ n(Mh) Mh ln 10 ALL(Mh) as a function of Mh (solid blue curve), its cumulative function ALL,tot(> Mh) (dotted blue curve), along with the halo mass function n(Mh) Mh ln 10 as a function of Mh (dashed red curve). In the top-left panel of Figure 7 we show the integrated column density (from outside inward down to the radius r) as a function of halo-centric radius r (in units of the virial radius rv) for two cases with virial radius rv equal to 1 pkpc (black solid curve) and 10 pkpc (red dashed curve), respectively. We see that at about r/rv ∼ 3 the column density is well below 10^{17} cm^{-2}, confirming that the exact integration starting radius is not important for column densities in the relevant range for significant attenuation of LyC photons. In both cases we also see that there is a rapid upturn of the column density starting around ∼10^{18} cm^{-2}, indicating the radial location of the onset of self-shielding and the transition from a highly ionized to an increasingly neutral medium. The rapid ascent suddenly flattens out at ∼10^{20} cm^{-2}, signalling the arrival of a largely neutral medium, coincidental with a column density similar to that of the damped Lyman alpha systems (DLAs). It is instructive to note that the transition from an ionized to an increasingly neutral medium is halo virial radius (or halo mass) dependent, with a larger halo transitioning at a larger radius in units of its virial radius. This indicates that the density of the ionizing front propagating into halos is halo mass dependent, suggesting that the common practice of using a constant density as a proxy for the density of the ionization front (e.g., Miralda-Escudé et al. 2000) could potentially be slightly extended, although a more detailed analysis should be performed to assess this. To devise an appropriate method to compute the effective cross section ALL for LyC photons for a given halo, it is useful to gain a clearer understanding of the physical meaning of λmfp,halo. For a line of sight cross area of size ∆A, if it is completely opaque to ionizing photons, then the effective area for intercepting LyC photons would be just equal to ∆A.
For a cross area of size ∆A that is not completely opaque to LyC photons, one may define the effective area for intercepting ionizing photons, ∆ALL, as ∆ALL = ∆A [1 − exp(−NHI σ̄ion)], (27) where NHI is the column density integrated along that line of sight (not the radially integrated column density shown in the top-left panel of Figure 7), which is computed using the NHI(r) and xHI(r) that we have numerically obtained by solving Eq (25, 26). Upon integrating over the projected area of a halo, we obtain the cumulative cross section for ionizing photons of a halo as a function of the projected radius rp, ALL(< rp) = ∫_0^{rp} 2π rp' [1 − exp(−NHI(rp') σ̄ion)] drp'. (28) The top-right panel of Figure 7 shows ALL(< rp) in units of the virial area (πrv^2) as a function of halo-centric radius in units of the virial radius rv for the two halos shown in the top-left panel of Figure 7. To re-iterate a point made earlier, the total effective cross section is larger for larger halos in units of the virial area, shown quantitatively in the bottom-left panel of Figure 7. In the calculations performed involving the NFW profile, one needs to specify the concentration parameter c, which has been computed by a number of groups (e.g., Bullock et al. 2001; Wechsler et al. 2002; Angel et al. 2016; Ricotti et al. 2007). We adopt the results of Dolag et al. (2004): c = 9.6 (Mh/10^{14} M⊙)^{-0.10} (1 + z)^{-1}; the results obtained do not sensitively depend on slightly different formulae for c in the literature. We compute λmfp,halo by λmfp,halo^{-1} = ∫_{Mcut}^∞ n(Mh) Mh ln 10 ALL(Mh) d log Mh, (29) where ALL(Mh) is the total cross section of LyC photons for a halo of mass Mh and n(Mh) is the halo mass function at the redshift in question. The bottom-right panel of Figure 7 shows the cross section function, n(Mh) Mh ln 10 ALL(Mh) (solid blue curve), its cumulative function ALL,tot(> Mh) (dotted blue curve), along with the mass function, n(Mh) Mh ln 10 (dashed red curve), as a function of Mh. We see that the cross section function is significantly flatter than the halo mass function, due to the fact that the cross section in units of the virial area is higher with increasing halo mass, i.e., ALL(Mh)/Mh^{2/3} correlates positively with Mh, as shown in the bottom-left panel of Figure 7. Nonetheless, ALL still scales sub-linearly with Mh, causing n(Mh) Mh ln 10 ALL(Mh) to increase with decreasing halo mass Mh. The Γ − λmfp relation in the standard ΛCDM model for four cases of Mcut = (1.6 × 10^8, 5.8 × 10^7, 2.7 × 10^7, 8.6 × 10^6) M⊙, corresponding to halo virial temperature cutoffs of Tv,cutoff = (10^4, 5 × 10^3, 3 × 10^3, 1.4 × 10^3) K, is also shown in Figure 1 as the blue curves. First of all, our results affirm a general self-consistency between the radiation field and the ionization structures around halos in the ΛCDM model, since the theoretically predicted relation (the blue curves) can go through this already tightly constrained parameter space. This is a strong and unique support for the ΛCDM model with respect to its matter density power spectrum (both amplitude and shape) on small scales corresponding to halo masses approximately in the range of 10^7 − 10^{10} M⊙. It is noted that this constraint on the matter power spectrum is based entirely on the consideration of the halos as "sinks" of ionizing photons.
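The construction of ALL in Eqs (27)–(28) is a weighted area integral over lines of sight; the sketch below illustrates it for a single halo using a deliberately simple, illustrative nHI(r) profile as a stand-in for the Eq (25)–(26) solution of the previous sketch. Summing n(Mh) ALL(Mh) over a halo mass function, Eq (29), then gives λmfp,halo; the mass function itself is not reproduced here.

```python
import numpy as np

# Line-of-sight LyC cross section of a single halo, Eqs (27)-(28). The n_HI(r)
# profile below is an illustrative stand-in only; in the text it comes from the
# numerical solution of Eqs (25)-(26).
sigma_LL = 3.16e-18                 # cm^2
pkpc = 3.0857e21                    # cm

def n_HI(r_pkpc, r_ss=1.0, n0=0.1):
    """Illustrative n_HI(r) in cm^-3: roughly neutral inside r_ss, steeply falling outside."""
    r = np.maximum(r_pkpc, 1e-3)
    return n0*np.minimum(1.0, (r/r_ss)**-6.0)

def N_HI_los(rp_pkpc, l_max=50.0, n=2000):
    """Column density through the halo at impact parameter rp (enters Eq 27)."""
    l = np.linspace(0.0, l_max, n)                   # path length along the line of sight, pkpc
    r = np.sqrt(rp_pkpc**2 + l**2)
    return 2.0*np.sum(n_HI(r))*(l[1]-l[0])*pkpc      # cm^-2

def A_LL(rp_max=20.0, n=400):
    """Cumulative effective cross section, Eq (28), in pkpc^2."""
    rp = np.linspace(1e-2, rp_max, n)
    w  = 1.0 - np.exp(-np.array([N_HI_los(x) for x in rp])*sigma_LL)
    return np.sum(2.0*np.pi*rp*w)*(rp[1]-rp[0])

print("A_LL ~ %.1f pkpc^2" % A_LL())
# lambda_mfp,halo then follows by integrating n(Mh)*A_LL(Mh) over the halo mass
# function down to M_cut, as in Eq (29).
```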
We point out that λmfp,halo depends sensitively on the lower mass cutoff Mcut in the integral in Eq 29, as shown in the bottom-right panel of Figure 7. We show that this dependence provides a new, sensitive probe of the small-scale power in the cosmological model, when confronted with measurements of τe. It is useful to note that in computing λmfp,halo we have neglected the possible contribution due to collisional ionization in halos with virial temperature significantly above 10^4 K. Thus, our computed λmfp,halo is somewhat overestimated and our subsequent conclusion drawn on the small-scale power conservative. Figure 8 shows λmfp as a function of the lower mass cutoff Mcut in the integral in Eq 29 (blue solid curve). Shown as symbols are four cases along the curve, with (log Mcut/M⊙, λmfp/pMpc, log Ṅion,IGM/cMpc^{-3} s^{-1}, τe) equal to (5.10, 3.7, 50.916, 0.047) (green star), (6.95, 5.3, 50.765, 0.055) (red dot), (7.58, 6.8, 50.660, 0.064) (magenta square) and (8.67, 10.5, 50.550, 0.073) (black diamond). Each set of four numbers has the following relational meaning: for a given measurement of τe, the minimum required ionizing photon emissivity entering the IGM is log Ṅion,IGM in order for that τe to be a possible solution, which in turn corresponds to a mean free path of λmfp, which can be achieved if the lower mass cutoff of the halo mass function is Mcut. We see that the dependence of λmfp on Mcut is significant, which provides a new constraint on the small-scale power in the cosmological model at a level that has hitherto been out of reach. Fig. 8.— shows λmfp as a function of the lower mass cutoff Mcut in the integral in Eq 29 (blue solid curve). Also shown as symbols are four cases along the curve, with (log Mcut/M⊙, λmfp/pMpc, log Ṅion,IGM/cMpc^{-3} s^{-1}, τe) equal to (5.10, 3.7, 50.916, 0.047) (green star), (6.95, 5.3, 50.765, 0.055) (red dot), (7.58, 6.8, 50.660, 0.064) (magenta square) and (8.67, 10.5, 50.550, 0.073) (black diamond). The thin blue dashed line is obtained when no entropy floor due to that of the mean cosmic gas is imposed. The thin black dot-dashed and red dotted curves are obtained assuming there is no contribution from halos with virial temperature greater than 3 × 10^5 K and 3 × 10^4 K, respectively. The dependence of λmfp on Mcut shown in Figure 8 can be translated into a constraint on dark matter particles. Here, we take warm dark matter as an example. In the warm dark matter model the smoothing scale, defined as the comoving half-wavelength of the mode for which the linear perturbation amplitude is suppressed by 2, is Rs = 0.48 (ΩM/0.25)^{0.11} (h/0.7)^{-1.22} (mx/keV)^{-1.11} h^{-1} Mpc (30) for a warm dark matter particle mass of mx (e.g., Viel et al. 2005), which we adopt as a proxy for a sharp cutoff (or free-streaming scale of particles). The equivalent free-streaming halo mass is then Ms = 5.8 × 10^{10} (ΩM/0.3)^{1.33} (h/0.7)^{-4.66} (mx/keV)^{-3.33} M⊙. (31)
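Eqs (30)–(31) are simple scalings and can be inverted to read off the warm-dark-matter particle mass whose free-streaming mass matches a given Mcut; a sketch follows. The exact keV values depend on the adopted ΩM and h (assumed below), so they will differ somewhat from the bounds quoted in Eq (33).

```python
import numpy as np

# Free-streaming scales for thermal warm dark matter, Eqs (30)-(31), and the
# particle mass implied by matching the free-streaming mass to a required M_cut.
Om, h = 0.27, 0.70     # assumed cosmological parameters

def R_s(mx_keV):
    """Half-mode smoothing scale, Eq (30), in comoving h^-1 Mpc."""
    return 0.48*(Om/0.25)**0.11*(h/0.7)**-1.22*mx_keV**-1.11

def M_s(mx_keV):
    """Equivalent free-streaming halo mass, Eq (31), in Msun."""
    return 5.8e10*(Om/0.3)**1.33*(h/0.7)**-4.66*mx_keV**-3.33

def mx_from_Mcut(Mcut_Msun):
    """Invert Eq (31): the WDM mass whose free-streaming mass equals M_cut."""
    return (5.8e10*(Om/0.3)**1.33*(h/0.7)**-4.66/Mcut_Msun)**(1.0/3.33)

for logM in (5.10, 6.95, 7.58, 8.67):          # the four M_cut values of Figure 8
    print("log Mcut = %.2f  ->  free-streaming-matched m_x ~ %.1f keV" % (logM, mx_from_Mcut(10**logM)))
```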
Given the dependence chain of log Mcut on λmfp on Ṅion,IGM on τe, we obtain the lower bound on the mass mx of thermally produced warm dark matter particles as a function of τe, shown as the blue solid curve in Figure 9. Fig. 9.— shows the lower bound on the mass mx of thermally produced warm dark matter particles as a function of τe (blue solid curve). Similarly, the red dashed curve shows the lower bound on the mass ms of sterile neutrinos as a function of τe. The lower bound on the mass mx of thermally produced warm dark matter particles can be translated similarly into a lower bound constraint on the mass ms of sterile neutrinos produced via active-sterile neutrino oscillations obeying approximately a generalized Fermi-Dirac distribution. In this case, the effect of the sterile neutrino is approximately the same as for thermally produced warm dark matter by using the following expression to relate the two masses (Colombi et al. 1996; Viel et al. 2005): ms = 4.46 keV (mx/1 keV)^{4/3} (0.12/ΩM h^2)^{1/3}. (32) The result is shown as the red dashed curve in Figure 9. The current best constraint on mx based on the Lyα forest is mx ≥ 3.3 keV (2σ) (Viel et al. 2013), improving upon earlier studies that generally constrain mx ≥ 0.5 − 1 keV (e.g., Narayanan et al. 2000; Barkana et al. 2001; Viel et al. 2005; Abazajian 2006). Combining with the 1σ upper limit used for Γ in our calculations, we find mx ≥ (15.1, 9.8, 4.6) keV at (1, 1.4, 2.2σ) C.L., (33) based on τe = 0.055 ± 0.009 and +1σ on Γ. The corresponding constraint on the sterile neutrino mass is ms ≥ (161, 90, 33) keV at (1, 1.4, 2.2σ) C.L., (34) which basically rules out, for example, the 7 keV sterile neutrino dark matter model (Bezrukov & Gorbunov 2014; Park et al. 2014; Abazajian 2014). The lower bound placed on the warm dark matter particle mass (or, in general, on the small-scale power) hinges on the assumption that dark matter halos make up the bulk of the Lyman limit systems at z = 5.7. Are there possible caveats with respect to this assumption? Let us examine this. Under a physically plausible scenario of stellar reionization, there are possibly two additional kinds of (significantly) neutral systems that could serve as Lyman limit systems and contribute to the absorption of LyC photons. The first kind is neutral regions that envelope the expanding HII regions. Let us suppose that each HII region that is expanding has a radius of R and the neutral region surrounding it has a thickness of ∆R. Analysis of the Lyα forest at z = 5.7 indicates a volume-weighted neutral fraction of the IGM of fHI,V ∼ 0.9 × 10^{-4} at z = 5.7 (Fan et al. 2006). This provides a constraint on the possible size of ∆R: ∆R ≤ fHI,V R/3. (35) The ionization front propagation speed at z = 5.7 is vIF = F/n̄H = Γ/(σ̄ n̄) = 1.7 × 10^4 (Γ−12/0.31) [σ̄/(3.16 × 10^{-18} cm^2)]^{-1} km s^{-1}, (36) where n̄H is the mean hydrogen number density at z = 5.7. Thus, the time it takes to sweep through the radial shell of thickness ∆R would be ∆t = ∆R/vIF ≤ fHI,V R/(3 vIF) = 9.4 × 10^3 (R/5.3 pMpc) [fHI,V/(0.9 × 10^{-4})] (Γ−12/0.31)^{-1} [σ̄/(3.16 × 10^{-18} cm^2)] yrs. (37)
Thus, for any reasonable values of the parameters involved, ∆t is much shorter than the Hubble time at z = 5.7 (which is about 1 Gyr). This suggests that such a configuration is highly unlikely. Note that our assumption that these shells surround spherical HII regions is not necessary, but is made only for ease of illustration. If these spherical shells are replaced by pancaky bridges or filamentary bridges between (or connecting) HII regions, the results and conclusions based on the above analysis remain largely the same, as long as the sizes of these pancakes or filaments are on the same order of ∼10 pMpc; in terms of our conclusion reached, even for a size of 1000 pMpc, our conclusion remains unchanged. The second kind of possible neutral regions may be comprised of patches of neutral islands in the voids that are reionized last. We approximate them as opaque spheres with a radius of rvoid and a mean separation between them of dvoid, which can be related to the observed fHI,V: (4π/3) rvoid^3 dvoid^{-3} ≤ fHI,V. (38) The mean free path to LyC photons due to these islands would be λmfp,void = dvoid^3/(π rvoid^2) ≥ (4/3)^{2/3} π^{-1/3} dvoid fHI,V^{-2/3} = 412 dvoid [fHI,V/(0.9 × 10^{-4})]^{-2/3}. (39) The typical separation of voids, i.e., dvoid, has to be on the order of the clustering scale of galaxies, which is about 4 − 5 cMpc (e.g., Ouchi et al. 2010), or larger. This suggests that λmfp,void ≥ 245 pMpc at z = 5.7, implying that possible, to-be-last-reionized neutral islands in voids do not contribute much to the mean free path of LyC photons at z = 5.7. We thus conclude that halos likely contribute predominantly to the mean free path of LyC photons at z = 5.7 (and likely at all lower redshifts as well, for that matter). Finally, we note that for simplicity we have adopted the assumption of sphericity of the gas distribution in and around the halos in question. Any deviation from sphericity would result in a reduction in cross section and hence a more stringent demand for more small scale power. In addition, we note that the baryonic fraction may be lower than the mean universal fraction. Furthermore, some gas in large halos with virial temperature higher than ∼10^4 K may be heated up enough to remove itself from the HI category. To give a sense of the magnitude of this effect we show in Figure 8 two additional cases where we assume that halos with virial temperature greater than 3 × 10^5 K (thin, black dot-dashed curve) and 3 × 10^4 K (thin red dotted curve), respectively, do not contribute to λmfp. We see a significant effect; numerically, to attain λmfp = (10.5, 6.8, 5.3) pMpc in order to yield τe = (0.073, 0.064, 0.055), respectively, the required log Mcut changes from (8.67, 7.58, 6.95) for no upper cutoff, to (8.54, 7.51, 6.89) for an upper cutoff of virial temperature of 3 × 10^5 K, to (7.92, 7.07, 6.51) for an upper cutoff of virial temperature of 3 × 10^4 K. Moreover, internal ionizing radiation may reduce the HI fraction. Therefore, our assumptions and derived limits on the small-scale power and on the dark matter particle mass are all on the conservative side. 5. Discussion 5.1. Rapid Reionization Towards z = 5.7 The intrinsic emissivities of LyC photons at z = 5.7 and z = 6 are almost identical. We can use this fact to outline the nature of the percolation of HII regions near the end of reionization.
We first note that the theoretically derived Γ − λmfp relation at z = 6 is nearly identical to that at z = 5.7, indistinguishable by eye when overplotted in Figure 1. This means that, if the universe were already in the post-overlap regime at z = 6, its volume-weighted neutral fraction ought to be similar to that at z = 5.7. In other words, λmfp due (mostly) to halos based on the ΛCDM model and the emissivity at z = 6 can easily accommodate a transparent universe similar to the one observed at z = 5.7. The observations indicate otherwise: fHI,V ∼ 0.9 × 10^{-4} at z = 5.7 versus fHI,V > 2 × 10^{-4} at z = 6 (Fan et al. 2006). Thus, the universe is not fully ionized at z = 6, in the sense of imposing a smaller λmfp and hence a lower Γ for a given Ṅion,IGM. The likely, perhaps only, consistent solution would be that HII regions have not overlapped at z = 6, so that neutral patches in the IGM (not in the halos) render λmfp much lower than the notional λmfp,IGM and λmfp,halo of the post-overlap epoch. The inferred value of Γ−12 < 0.02 at z = 6 (based on Lyγ absorption) (Cen & McDonald 2002; Fan et al. 2006) suggests that λmfp at z = 6 is an order of magnitude lower than that at z = 5.7. This is clear and fairly direct evidence that the percolation of HII regions is not yet complete at z = 6, indicating that the universe is in a rapid transitory phase from z = 6 to z = 5.7, clearing up some of the last neutral patches that dominate the mean free path, in a monotonic and irreversible process. Topologically, this indicates that the HII regions transition from a set of isolated islands at z = 6 to a connected network of swiss-cheese-like HII regions at z = 5.7. This expected rapid reionization process is consistent with and required by the necessarily small values of λmfp ≤ 6.8 pMpc at z = 5.7 to achieve τe ≤ 0.064, which in turn requires a contribution from minihalos (those with virial temperature less than 10^4 K or virial mass less than 1.6 × 10^8 M⊙ at z = 5.7). Gas in minihalos, when exposed to ionizing photons, responds dynamically by slowly evaporating through the action of the thermal pressure of the photoheated gas. Iliev et al. (2005) show that it takes about 100 − 200 Myr to photoevaporate a minihalo of mass 10^7 M⊙ at z = 9. This process is expected to take longer for more massive minihalos. In our case, a minihalo of mass 10^7 M⊙ is relevant for τe = 0.055 (see the red dot in Figure 8); for τe = 0.064, minihalos of mass 1.6 × 10^8 M⊙ would be relevant (see the magenta square in Figure 8). Thus, it is probably true that, for the range of interest, the time scale for photoevaporation of the relevant minihalos is 100 − 200 Myr or longer. We note that the universal age difference from z = 6 to z = 5.7 is 63 Myr, and from z = 7 to z = 5.7 is 231 Myr. We see in Figure 6 that the neutral fraction at z = 7 is about 40%, meaning about 40% of minihalos have not yet been exposed to ionizing radiation at z = 7. Thus, it is probable that a significant fraction, perhaps a large majority, of minihalos have not lost gas in their inner regions (which actually contribute to the mean free path of LyC photons) by z = 5.7, permitting the possibility that they contribute significantly to the mean free path of LyC photons, if necessary. 5.2.
On fesc of Galaxies at the Epoch of Reionization Using Eq 14, the four points (represented by the four symbols) in Figure 8 give fesc = (20.7, 14.6, 11.5, 8.9)%, in order to arrive at the reionization solutions constrained by the state of the IGM at z = 5.7 with τe = (0.047, 0.055, 0.064, 0.073), respectively. This required fesc, based on the observed state of the IGM at z = 5.7, is consistent with the computed fesc,comp = 10 − 14% based on the state-of-the-art high resolution cosmological radiation hydrodynamic simulations of dwarf galaxies at the epoch of reionization of Kimm & Cen (2014). We point out that the upper value (14%) includes contributions from runaway OB stars. It is noteworthy that fesc,comp is effectively a measure of the porosity of the interstellar medium, where LyC photons escape through transparent holes into the IGM. Therefore, a correct treatment/implementation of supernova feedback is essential, as is done in Kimm & Cen (2014) but not in any other simulations that the author is aware of. Including Wolf-Rayet stars for the Pop II stellar population, which empirically are much more abundant in the local metallicity environment that is expected for galaxies at the epoch of reionization, may further increase the ratio of LyC photons to FUV photons, i.e., ξion, and thus lessen the requirement for a high fesc. Thus, it seems that the stellar emissivity observed is adequate for maintaining the state of the IGM in terms of global and local ionization balance. It should be noted that these changes have no effect on the solutions of reionization history that we have obtained, which depend directly on Ṅion,IGM. 5.3. Dichotomy in the Evolution of Lyman Alpha Emitters at z > 6 In Figure 3 we see that solutions without Pop III contributions require χ = (0.7, 2.2, 3.6) for τe = (0.055, 0.064, 0.073), respectively. In general, the solutions even with Pop III contributions require χ > 0 as long as τe ≥ 0.052. We note that the overall fesc tends to correlate with the porosity of the ISM, while the individual fesc is strongly dependent on the line of sight of the observer (e.g., Cen & Kimm 2015). A positive χ > 0 is physically consistent with the expectation that smaller galaxies, having shallower gravitational potential wells, may be more susceptible to feedback processes from supernovae and have a more porous ISM. Simulation results are consistent with this expected trend (e.g., Kimm & Cen 2014). Is there observational evidence that the escape of Lyα and of LyC photons are both correlated with ISM porosity? Jones et al. (2013) find an interesting trend of lower covering fractions of low-ionization gas for galaxies with strong Lyα emission, providing evidence that a reduction in the average HI covering fraction (hence an increase in the escape fraction of ionizing radiation) is correlated with an increase in Lyα emission. Shapley et al. (2003) find that the blueshifts of interstellar absorption lines in LAEs and LBGs are similar at ∼−200 km s^{-1}, suggesting that the velocities of outflows in LAEs and LBGs are comparable. But their study also reveals a trend that the Lyα EW increases with decreasing ∆vem−abs in the EW range of −15 to +50 Å. Furthermore, they confirm that ∆vLyα of LAEs is systematically smaller than the values of LBGs, with ∆vLyα of about 200 km s^{-1} for LAEs compared to about 400 km s^{-1} for LBGs.
Moreover, they clarify that ∆vLyα decreases with increasing EW of Lyα. Recently, Shibuya et al. (2014) find an anti-correlation between the Lyα EW and the covering fraction estimated from the depth of absorption lines, which is an indicator of the average neutral hydrogen column density. Their results support the idea that the neutral column density is a key quantity determining the Lyα emissivity, consistent with the notion that the escape of LyC and Lyα photons are correlated with each other and are due to lower column density holes in the ISM. The combination of these facts leads one to conclude that the Lyα velocity offset is positively correlated with NHI and negatively correlated with EW, exactly as predicted from results based on Lyα radiative transfer calculations (e.g., Zheng et al. 2010). None of these properties concerning Lyα emission can be attributed to differences in the outflow velocity, which do not appear to exist between LAEs and LBGs. Taken together, intrinsically, one would then have expected that the escape of Lyα photons should be made easier with increasing redshift; i.e., both the ratio of Lyman alpha emitters to the overall galaxy population at a chosen Lyα EW and the overall Lyα luminosity to FUV luminosity ratio as a whole are expected to increase with redshift beyond z = 5.7. Such an expectation is not borne out by observations. At some EW cuts, observations have consistently found that the fraction of LAEs out of LBGs decreases by a significant factor from redshift z = 6 to z = 8 (e.g., Treu et al. 2013; Vanzella et al. 2014; Faisst et al. 2014; Schenker et al. 2014; Tilvi et al. 2014; Furusawa et al. 2016). This observational evidence strongly suggests that the intergalactic medium may have increasingly diminished the observability of Lyα emission from z ∼ 6 to z ∼ 8, consistent with the rapid reionization picture depicted in Figure 6. Physically, this is due to the fact that a significantly neutral IGM limits the size of the Stromgren sphere around galaxies (Cen & Haiman 2000). Caruana et al. (2014) conclude that the neutral fraction of the IGM at z ∼ 7 is ∼0.5, which would be consistent with our computed model shown in Figure 6. On the other hand, even if the IGM is indeed masking the appearance of the Lyα emission for most, relatively low luminosity galaxies at the epoch of reionization, for rare, very luminous galaxies (each of which is also likely clustered with other galaxies) with large Stromgren spheres, their Lyα emission lines may be unaffected or possibly enhanced (given χ > 0), under suitable conditions. A corroborative or confirmative piece of evidence for this may be that, if a strong Lyα line is detected, the emission region could, but is not necessarily required to, be compact spatially and in velocity space due to the lack of scattering. There are observational indications that this may in fact be the case. Sobral et al. (2015) observe a luminous Lyα source (CR7) with a luminosity of 10^{43.93±0.05} erg/s at z = 6.6 (the most luminous Lyα emitter ever found at z > 6) but with a narrow FWHM of 266±15 km s^{-1}. Hu et al. (2016) detect a luminous Lyα emitting galaxy, COLA1, with a luminosity of 10^{43.9} erg/s at z = 6.593. COLA1 shows a multi-component Lyα profile with a blue wing, suggesting a large, highly ionized Stromgren sphere perhaps extending well into the infall region. Matthee et al.
(2015) have argued that there is little evolution in the luminosity function of the most luminous LAEs at these redshifts, suggesting that these objects lie in large HII regions and protect themselves from changes in IGM neutral fraction, consistent with the expectation, at least in principle. More pinpointed analysis will be desirable in this respect, combining reionization simulations with detailed radiative transfer of Ly\u03b1 photons. In summary, we expect that there is a dichotomy in the evolution of Ly\u03b1 emitting galaxies. For relatively low Ly\u03b1 luminosity galaxies, their emission lines will be progressively diminished with increasing redshift due to the increasingly neutral IGM beyond z \u223c6. On the other hand, for the most luminous Ly\u03b1 emitters, under suitable conditions, their Stromgren spheres are large enough to allow their Ly\u03b1 line to escape unscathed by the neutral IGM. Both are consistent with present tentative observational evidence. 6." + }, + { + "url": "http://arxiv.org/abs/1604.06473v1", + "title": "Testing Models of Quasar Hosts With Strong Gravitational Lensing by Quasar Hosts", + "abstract": "We perform a statistical analysis of strong gravitational lensing by quasar\nhosts of background galaxies, in the two competing models of dark matter halos\nof quasars, HOD and CS models. Utilizing the BolshoiP Simulation we demonstrate\nthat strong gravitational lensing provides a potentially very powerful test of\nmodels of quasar hosting halos. For quasars at $z=0.5$, the lensing probability\nby quasars of background galaxies in the HOD model is higher than that of the\nCS model by two orders of magnitude or more for lensing image separations in\nthe range of $\\theta\\sim 1.2-12~$arcsec. To observationally test this, we show\nthat, as an example, at the depth of the CANDELS wide field survey and with a\nquasar sample of $1000$ at $z=0.5$, the two models can be differentiated at\n$3-4\\sigma$ confidence level.", + "authors": "Renyue Cen, Mohammadtaher Safarzadeh", + "published": "2016-04-21", + "updated": "2016-04-21", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "INTRODUCTION The basic characteristics of the dark matter halos hosting quasars, such as their masses, remain uncertain. The conventional, popular HOD (halo occupation distribution) model stands out on its simplicity in that it is based on assigning a probability function to quasars to reside in a halo of a given mass in order to match the observed quasar clustering strength (Zheng et al. 2005, 2007; Shen et al. 2013). A newly proposed model (Cen & Safarzadeh 2015a, \u2019CS model\u2019 hereafter) differs in its physical proposition by allowing for considerations of physical conditions of gas in galaxies hosting quasars and the fact that quasar activities are a special, rare phase. In particular, a restriction (an upper bound) on the halo mass of quasar hosts of 1012.5\u221213 M\u2299is imposed as a necessary condition, based on the physical condition that more massive halos, being completely hot gas dominated, are incapable of feeding the central supermassive black holes in a vigorous fashion. In addition, since the typical quasar duty cycle is much less than unity, quasar activity must not be a typical condition in a galaxy\u2019s history. We thus argue that some special condition is necessary to \u201ctrigger\" each quasar feeding event. 
We propose that significant gravitational interaction, parameterized as the presence of a significant companion halo within some distance, not necessarily a major merger, is the second necessary condition for making a quasar. We demonstrate that the CS model can equally well match the observed quasar clustering properties (auto-correlation functions of quasars and quasar-galaxy cross-correlation functions) over a wide range of redshift. The masses of the dark matter halos in the CS model are very different from those of the HOD based model. For example, at z ∼ 0.5 − 2, the host halos in the CS model have masses of ∼10^{11} − 10^{12} M⊙, compared to ≥10^{13} M⊙ in the HOD model. This large difference gives rise to important differences in several observables. First, a critical differentiator is the cold gas content in quasar host galaxies, which is in part, but not entirely, due to the physical ingredients used in the construction of our model, i.e., the upper halo mass limit. Specifically, because of the large halo mass required in the HOD model, quasar hosts have a much lower content of cold gas than in the CS model. Cen & Safarzadeh (2015a) have shown that the CS model is in excellent agreement with the observed covering fraction of 60% − 70% for Lyman limit systems within the virial radius of z ∼ 2 quasars (Prochaska et al. 2013). On the other hand, the HOD model is inconsistent with observations of the high covering fraction of Lyman limit systems in quasar host galaxies. Second, in Cen & Safarzadeh (2015b) we show that, while both the HOD and CS models are consistent with the observed thermal Sunyaev-Zeldovich (tSZ) effect at the resolution of FWHM=10 arcmin obtained by Planck data, FWHM=1 arcmin beam tSZ measurements would provide a potentially powerful test between the two models. Subsequently, a careful analysis of the South Pole Telescope tSZ data at a beam of FWHM∼1 arcmin suggests that the CS model is strongly favored over the HOD model (Spacek et al. 2016). In this Letter we present and demonstrate yet another potentially powerful test to distinguish between these two competing models, namely strong gravitational lensing of background galaxies by quasar hosting galaxies, which may finally put to rest the issue of the halo masses of quasar hosts. 2. SIMULATIONS AND ANALYSIS METHOD We utilize the Bolshoi Simulation (Klypin et al. 2011) to perform the analysis. The set of properties of this simulation that meets our requirements includes a large box of 250 h^{-1} Mpc, a relatively good mass resolution with dark matter particles of mass 1.3 × 10^8 h^{-1} M⊙, and a spatial resolution of 1 h^{-1} kpc comoving. The mass and spatial resolutions are adequate for capturing halos of masses greater than 2 × 10^{10} M⊙, which are resolved by at least about 100 particles and 40 spatial resolution elements per virial diameter. Since the mass range of interest here is ≥10^{11} M⊙, all halos concerned are well resolved. Dark matter halos are found through a friends-of-friends (FOF) algorithm. The adopted ΛCDM cosmology parameters are Ωm = 0.27, Ωb = 0.045, ΩΛ = 0.75, σ8 = 0.82 and n = 0.95, where the Hubble constant is H0 = 100h km s^{-1} Mpc^{-1} with h = 0.70. We select quasar host halos from the z = 0.5 data output of the Bolshoi Simulation, using the detailed prescriptions for both the CS and HOD models, described in Cen & Safarzadeh (2015a).
For the purpose of computing lensing statistics, we project all particles in the z = 0.5 simulation box along the x-axis onto a plane with a spatial resolution of 4 proper kpc. At the location of each quasar halo, we compute the radial profile of the projected dark matter density centered on the halo. In addition to the dark matter, we also model the baryons' contribution to the projected surface density. Following the parametrization of Behroozi et al. (2013), we assign a baryon fraction to the dark matter halos as a function of the halo mass. The baryon mass is distributed and projected assuming a Singular Isothermal Sphere (SIS) model. The SIS radius for the baryons is defined to be rSIS = 2 × reff, where the effective radius is computed following the van der Wel et al. (2014) fits for elliptical galaxies in that redshift range: reff = 10^{0.78} [M/(5 × 10^{10} M⊙)]^{0.22}. (1) The projected surface density, without and with the baryonic correction, subtracted by the mean surface density of the box (∼3 × 10^7 M⊙/kpc^2), is compared to the critical surface density. We compute the surface density of the halos in radial bins, and the radius within which the mean surface density equals Σcrit is defined as the Einstein radius rE for that halo, with the corresponding subtended angle being θE = rE/DL, where DL is the angular diameter distance to the quasar host (i.e., the lens). The critical density for strong lensing is Σcrit = c^2/(4πG) × DS/(DL DLS), (2) at redshift zl; in this paper, we consider zL = 0.5 for illustration. Here DS is the angular diameter distance to the source and DLS is the angular diameter distance between the lens and the source. We note that for sources at zs > 2 the critical density for strong lensing by a lens at zl = 0.5 is approximately constant, ∼10^9 M⊙/kpc^2, which rises slowly to ∼2 × 10^9 M⊙/kpc^2 at zs = 1, followed by a steep rise towards zs = 0.5. We compute the surface densities of 10,000 quasar host halos for both the CS and HOD models and obtain the probability distribution function (PDF) of the lensed image angular separation statistics for each model. In order to make quantitative calculations for the lensing statistics of background galaxies, we assess the lensing cross section in the source plane as follows. We assume an SIS model for the lens, in which all the sources in the background whose un-deflected photons pass within the lens's rE are lensed to give two images, with the cross section in the source plane for giving two images being σ = πrE^2 (Turner et al. 1984). Defining the impact parameter as f ≡ θQ/θE, lensing of background galaxies with f < 1 gives two images. The amplification as a function of impact parameter is r = (1 + f)/(1 − f). Averaging the amplification over the cross section gives a factor of four total amplification due to lensing inside the critical radius. In our case, we demand that both images are observed in order for us to be sure of a strong lensing event. With that requirement, we find that, in a magnitude limited survey, for sources within a given redshift interval, the effective source plane galaxy number density turns out to be unchanged. In other words, although we can probe to fainter limits because of the total amplification power, the number of pairs of images that are both detectable is unchanged, with the effective source plane cross section remaining at σ = πrE^2.
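The distance combination entering Eq (2) and the Einstein-radius construction described above can be written compactly; the sketch below uses a simple flat-ΛCDM comoving-distance integral (flatness is assumed here) and an azimuthally averaged surface-density profile supplied by the caller. It is only a schematic of the procedure, not the code used for the measurements, and the exact numbers depend on the adopted distances and on whether surface densities are quoted per proper or comoving kpc^2.

```python
import numpy as np

# Critical surface density, Eq (2), and the Einstein radius of a projected mass profile.
h, Om = 0.70, 0.27
OL    = 1.0 - Om          # flatness assumed
H0_kms, c_kms = 100.0*h, 2.998e5
G_Mpc = 4.301e-9          # km^2 s^-2 Mpc Msun^-1
G_kpc = 4.301e-6          # km^2 s^-2 kpc Msun^-1

def D_C(z, n=2000):       # comoving distance, Mpc
    zz = np.linspace(0.0, z, n)
    Ez = np.sqrt(Om*(1+zz)**3 + OL)
    return (c_kms/H0_kms)*np.sum(1.0/Ez)*(zz[1]-zz[0])

def D_A(z1, z2=None):     # angular diameter distance(s), Mpc (flat universe)
    if z2 is None:
        return D_C(z1)/(1.0+z1)
    return (D_C(z2) - D_C(z1))/(1.0+z2)

def sigma_crit(zl, zs):   # Eq (2), Msun/kpc^2
    val = c_kms**2/(4*np.pi*G_Mpc) * D_A(zs)/(D_A(zl)*D_A(zl, zs))   # Msun/Mpc^2
    return val/1.0e6

def theta_E(sigma_2d, r_kpc, zl, zs):
    """Einstein radius (arcsec) of an azimuthally averaged surface-density profile
    sigma_2d(r) [Msun/kpc^2]: the radius where the mean enclosed Sigma equals Sigma_crit."""
    scrit = sigma_crit(zl, zs)
    mean_sigma = np.cumsum(sigma_2d*2*np.pi*r_kpc*np.gradient(r_kpc))/(np.pi*r_kpc**2)
    rE = np.interp(scrit, mean_sigma[::-1], r_kpc[::-1])    # mean Sigma decreases outward
    return rE/(D_A(zl)*1.0e3)*206265.0

print("Sigma_crit(zl=0.5, zs=2) ~ %.2e Msun/kpc^2" % sigma_crit(0.5, 2.0))

# usage example with an SIS surface-density profile (velocity dispersion assumed)
r = np.linspace(0.1, 100.0, 2000)                 # kpc
sigma_v = 250.0                                   # km/s, illustrative value only
sigma_SIS = sigma_v**2/(2.0*G_kpc*r)              # Msun/kpc^2
print("theta_E(SIS, 250 km/s) ~ %.2f arcsec" % theta_E(sigma_SIS, r, 0.5, 2.0))
```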
Then, we obtain the number of multiply imaged galaxies as function of image separation \u2206\u03b8 = 2\u03b8E for a given \u03a3gal for each model. We compare the distributions of \u2206\u03b8 between the HOD and CS models We compute \u03c72 to statistically evaluate the size of quasar samples and the number density of background galaxies required in order to differentiate between the HOD and CS models, using Poisson statistics. \f3 The difference between the models for each radial bin is computed as follow: \u03c32 i = (NCS,i \u2212NHOD,i)2 NCS,i + NHOD,i (3) for i denotes the radial bin and NCS,i = \u03a3gal \u00d7 2\u03c0ridr \u00d7 NQSO \u00d7 PCS,i, where PCS,i is the probability of the CS model in ith radial bin at ri. The same is adopted for HOD model. The total difference taking into account all the radial bins is computed as follow and shown in Figure 3 below. \u03c32 tot = X \u03c32 i (4) 3. RESULTS 0 2 4 6 8 10 12 \u03b8(arcsec) 10\u22125 10\u22124 10\u22123 10\u22122 10\u22121 100 Probability(>\u03b8) HOD: DM+Baryon CS: DM+Baryon HOD: DM CS: DM Figure 1. shows the cumulative probability distribution functions of image separations \u03b8 in the HOD (blue curves) and CS (red curves) models, without (dashed curves) and with (solid curves) baryons, respectively. The result is based on 13,000 quasar host halo candidates in each model at z=0.5, viewed along each of the three orthogonal directions, resulting in a total of 39,000 effective candidates. The errorbars are based on Poisson statistics. Figure 1 shows the cumulative probability distribution function of image separations in the HOD (blue curves) and CS (red curves) models, without (dashed curves) and with (solid curves) baryons, respectively. We see that the large difference in masses of quasar host halos between HOD and CS models is most vividly displayed: the lensing probability in the HOD model is higher than that of the CS model by two orders of magnitude or more over the range \u03b8 \u223c1 \u22125 arcsec. Above \u223c6 arcsec image separation there is no case in the CS model, whereas the lensing probability in the HOD model is still at \u223c10\u22124 \u221210\u22123 at \u223c10 \u221212 arcsec. We note that 1 arcsec corresponds to 6.2kpc at z = 0.5. The pixel size of the mass projection map at z = 0.5 corresponds to 0.65 arcsec in angular size. Thus, we do not include bins at \u03b8 < 1.2 arcsec in our considerations of differentiations between the two models. The large differences between the HOD and CS models shown in Figure 1, can be understood by looking at Figure 2, which shows a comparison between the normalized probability distribution functions of masses of all quasar host halos (solid curves) and of those capable of producing strong lensing with image separation \u03b8 > 1.2 arcsec (dashed curves). We see that the vast majority of quasar host halos producing strong lensing with image separations \u03b8 > 1.2 arcsec have masses greater than 1012.5 M\u2299, peaking at 1013 \u22121013.5 M\u2299. Even though the overall number of quasar hosts are the same in the two models, their abundances for halos of masses around the peak (1013 \u22121013.5 M\u2299) differ by about two orders of magnitude, which evidently can account for most of the differences between the two lensing probabilities seen in Figure 1. There may be other conceivable \f4 11.0 11.5 12.0 12.5 13.0 13.5 14.0 14.5 15.0 logMh[M\u2299] 10\u22123 10\u22122 10\u22121 100 101 102 PDF CS HOD CS, \u03b8 > 1.2arcsec HOD, \u03b8 > 1.2arcsec Figure 2. 
shows the normalized probability distribution functions of quasar host halo masses in the HOD (blue solid curve) and CS (red solid curve) models, respectively, based on 13,000 halos used for Figure 1. The corresponding dashed curves are the normalized probability distribution functions of masses of selected quasar host halos capable of producing strong lensing with image separation \u03b8 > 1.2 arcsec. It is useful to note that the quasars at z \u223c0.5 that the models model have bolometric luminosity threshold of 1045.1erg/s (Cen & Safarzadeh 2015a). differences, such as the density slopes in the central regions, possibly due for example to difference residing environments of halos of the same masses, between the two quasar host halos in the two models. But as a whole, these other possible differences, if any, do not appear to make a large difference to the overall lensing probability. Given the results shown in Figure 1, we now estimate the observational samples, a combination of the number of target lenses (i.e., quasar hosts), NQSO, and the surface density of background galaxies, \u03a3gal (in arcsec\u22122), that are required to differentiate between the CS and HOD models. To be speci\ufb01c, we assume that the quasar hosts are at redshift z = 0.5. Figure 3 shows the con\ufb01dence levels of statistical differentiation between the two models, based on Eq (3,4). We note that the results only depend on the product NQSO\u03a3gal but we show four separate cases of NQSO for ease of assessment. For example, for a quasar sample of 100, a surface density of background galaxies of \u03a3gal = 0.1 arcsec\u22122 will allow for a 2.5\u03c3 test between the two models. For a quasar sample of 1000, \u03a3gal = 0.023 arcsec\u22122 produces a 4\u03c3 test. To illustrate the observational feasibility of testing the models, Figure 4 shows the cumulative surface number density of galaxies observed in the Hubble F160W \ufb01lter down to 50% completeness level in HUDF (Beckwith et al. 2006) (blue curve) and CANDELS deep (green curve) and shallow (red curve) tier observations (Grogin et al. 2011; Koekemoer et al. 2011). Numerically, we see that \u03a3gal(> z = 1 \u22122) \u223c0.01 \u22120.02 arcsec\u22122 for the CANDELS wide \ufb01eld survey; a survey of this depth with 1000 quasar at z = 0.5 would be able to differentiate between the two models at \u223c3 \u22124\u03c3 con\ufb01dence level. At the depth of HUDF \u03a3gal(> z = 1 \u22122) \u223c0.05 \u22120.08 arcsec\u22122, which could yield \u22652\u03c3 con\ufb01dence level with only about 200 quasars at z = 0.5. \f5 10\u22126 10\u22125 10\u22124 10\u22123 10\u22122 10\u22121 100 \u03a3gal(arcsec\u22122) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 \u03c3 NQSO = 1e+02 NQSO = 1e+03 NQSO = 1e+04 NQSO = 1e+05 Figure 3. shows the con\ufb01dence levels of statistical differentiation between the two models as a function of the surface density of background galaxies, \u03a3gal (in arcsec\u22122), based on Eq (3,4). Four cases of NQSO are shown. We assume that the quasar hosts are at redshift z = 0.5. NQSO is the number of target lenses (i.e., quasar hosts). Poisson statistics are used for the errorbars. 0 1 2 3 4 5 6 7 z 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 \u03a3gal(arcsec\u22122) HUDF CANDELS Deep CANDELS Wide Figure 4. Shows the cumulative surface number density of galaxies observed in the Hubble F160W \ufb01lter down to 50% completeness level in HUDF (Beckwith et al. 2006) and CANDELS deep and shallow tier observations (Grogin et al. 2011; Koekemoer et al. 
2011). The completeness level at 50% corresponds to mAB(F160W) = 25.9, 26.6, 28.1 for the CANDELS wide, deep and HUDF, respectively. Data is from the compilation of Guo et al. (2013). 4." + }, + { + "url": "http://arxiv.org/abs/1604.01986v1", + "title": "Upper Limit on Star Formation and Metal Enrichment in Minihalos", + "abstract": "An analysis of negative radiative feedback from resident stars in minihalos\nis performed. It is found that the most effective mechanism to suppress star\nformation is provided by infrared photons from resident stars via\nphoto-detachment of ${\\rm H^-}$. It is shown that a stringent upper bound on\n(total stellar mass, metallicity) of ($\\sim 1000{\\rm M_\\odot}$, $-3.3\\pm 0.2$)\nin any newly minted atomic cooling halo can be placed, with the actual values\npossibly significantly lower. This has both important physical ramifications on\nformation of stars and supermassive black seeds in atomic cooling halos at high\nredshift, pertaining to processes of low temperature metal cooling, dust\nformation and fragmentation, and direct consequences on the faint end galaxy\nluminosity function at high redshift and cosmological reionization. The\nluminosity function of galaxies at the epoch of reionization may be\nsubstantially affected due to the combined effect of a diminished role of\nminihalos and an enhanced contribution from Pop III stars in atomic cooling\nhalos. Upcoming results on reionization optical depth from Planck\nHigh-Frequency Instrument data may provide a significant constraint on and a\nunique probe of this star formation physical process in minihalos. As a\nnumerical example, in the absence of significant contributions from minihalos\nwith virial masses below $1.5\\times 10^{8}{\\rm M_\\odot}$ the reionization\noptical depth is expected to be no greater than $0.065$, whereas allowing for\nminihalos of masses as low as ($10^7{\\rm M_\\odot}$, $10^{6.5}{\\rm M_\\odot}$) to\nform stars unconstrained by this self-regulation physical process, the\nreionization optical depth is expected to exceed $(0.075,0.085)$, respectively.", + "authors": "Renyue Cen", + "published": "2016-04-07", + "updated": "2016-04-07", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction Star formation in minihalos is a fundamental issue, because it is responsible for enriching the primordial gas with \ufb01rst metals that shape the subsequent formation of stars and possibly supermassive black hole seeds in atomic cooling halos. Since the pioneering works (e.g., Abel et al. 2002; Bromm et al. 2002; Nakamura & Umemura 2002), most studies have focused on formation of individual stars (e.g., Hirano et al. 2014). So far studies of the effects of external Lyman-Werner band (LW) (h\u03bd = 11.2 \u221213.6eV) radiation background (e.g., Machacek et al. 2001; Wise & Abel 2007; O\u2019Shea & Norman 2008), external IR radiation background (e.g., Chuzhoy et al. 2007; Hirano et al. 2015) on gas chemistry and thermodynamics hence star formation in minihalos have produced signi\ufb01cant physical insight. We assess the effects of these two LW photo-dissociation and IR photo-detachment H2 formation suppressing processes due to resident stellar population within minihalos, instead of the respective collective arXiv:1604.01986v1 [astro-ph.GA] 7 Apr 2016 \f\u2013 2 \u2013 backgrounds widely considered. 
We show that photo-detachment process of H2 by infrared photons of energy h\u03bd \u22650.755eV produced by resident stars places a strong upper bound on stellar mass and metals that may be formed in minihalos. This upper limit needs to be taken into account in the general considerations of galaxy formation at high redshift. 2. Maximum Stellar Mass and Metal Enrichment in Minihalos Minihalos are de\ufb01ned as small dark matter halos with virial temperature below that for ef\ufb01cient atomic cooling (i.e., Tv \u2264104K). Minihalos form early in the standard cold dark matter model and are only relevant for high redshift. Star formation may start in minihalos with Tv as low as \u223c1000K or so. The relation between halo virial mass (Mv) and virial temperature (Tv) is Mv = 108h\u22121 M\u2299 \u0012 Tv 1.98 \u00d7 104K \u0013 3 2 \u00120.6 \u00b5P \u0013 3 2 \u0012\u2126m \u2126z m \u2206c 18\u03c02 \u0013\u22121 2 \u00121 + z 10 \u0013\u22123 2 , (1) where z is redshift, \u2126m and \u2126\u039b are density parameter and cosmological constant at redshift zero, respectively; \u2126z m \u2261[1 + (\u2126\u039b/\u2126m)(1 + z)\u22123]\u22121 is the density parameter at redshift z; \u2206c = 18\u03c02 + 82d \u221239d2 and d = \u2126z m \u22121 (see Barkana & Loeb 2001 for more details). The corresponding physical virial radius is rv = 0.784h\u22121kpc \u0012 Tv 1.98 \u00d7 104K \u0013 1 2 \u00120.6 \u00b5P \u0013 1 2 \u0012\u2126m \u2126z m \u2206c 18\u03c02 \u0013\u22121 2 \u00121 + z 10 \u0013\u22121 2 . (2) In minihalos at high redshift, molecular hydrogen H2 is the primary gas cooling agent, before a signi\ufb01cant amount of metals is present. In the absence of a signi\ufb01cant amount of dust grains, the dominant H2 formation channel is via a two-step gas phase process (e.g., Draine 2003), \ufb01rst with radiative association: H + e\u2212\u2192H\u2212+ h\u03bd, (3) followed by associative detachment: H\u2212+ H \u2192H2 + e\u2212. (4) Given this formation channel, if one is interested in suppressing H2 formation, there are two main ways to achieve that goal. One is by destruction of formed H2 molecules through the photo-dissociation process by photons in the LW band of h\u03bd = 11.2 \u221213.6eV: H2 + h\u03bd \u2192H + H. (5) The other is by reducing the density of H\u2212, to which the rate of H2 formation is proportional, by infrared (IR) photons of energy h\u03bd \u22650.755eV via the photo-detachment process: H\u2212+ h\u03bd \u2192H + e\u2212. (6) \f\u2013 3 \u2013 For simplicity, we assume that the initial mass function (IMF) of Population III (Pop III) stars has a powerlaw distribution of the same Salpeter slope: n(M\u2217)dM\u2217= CM\u22122.35dM\u2217, (7) with an upper mass cutoff 100 M\u2299and a lower mass cutoff Mlow that we will vary to understand its in\ufb02uence on the results; C is a constant normalizing the stellar abundance per unit of star formation rate. We stress that our results are rather insensitive to either Mlow or the slope of the IMF . 
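A minimal numerical version of Eq. (1), relating virial temperature and virial mass at a given redshift for the cosmological parameters adopted here, is given below; it evaluates the two bracketing cases used throughout, a Tv = 10^3 K minihalo at z = 25 and a Tv = 10^4 K atomic cooling halo at z = 7.

```python
import numpy as np

OM, OL, H = 0.27, 0.73, 0.70        # assumed flat LCDM parameters

def omega_m_z(z):
    return 1.0 / (1.0 + (OL / OM) * (1.0 + z) ** -3)

def delta_c(z):
    d = omega_m_z(z) - 1.0
    return 18 * np.pi**2 + 82 * d - 39 * d**2

def virial_mass(Tv, z, mu=0.6):
    """M_v in Msun for virial temperature Tv [K] at redshift z (Eq. 1);
    mu = 0.6 for ionized primordial gas, 1.22 for neutral."""
    bracket = (OM / omega_m_z(z)) * delta_c(z) / (18 * np.pi**2)
    return (1e8 / H) * (Tv / 1.98e4) ** 1.5 * (0.6 / mu) ** 1.5 \
        * bracket ** -0.5 * ((1.0 + z) / 10.0) ** -1.5

print("%.2e %.2e" % (virial_mass(1e3, 25), virial_mass(1e4, 7)))
```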
Then, one can compute the intrinsic spectral luminosity (in units of erg sec\u22121 Hz\u22121 sr\u22121) per stellar mass at any photon energy \u03bd as L\u03bd = Z th 0 Z 100 M\u2299 Llow \u03b8(tms \u2212th + tf) \u02d9 M\u2217(tf)J\u03bd(M\u2217)n(M\u2217)dM\u2217dtf, (8) where J\u03bd(M\u2217) is the mean spectral luminosity of a star of mass M\u2217at photon energy h\u03bd in the main sequence; \u03b8(x) is the Heaviside theta function; tms(M\u2217) is the star\u2019s main sequence lifetime; tf and th are the formation time of the star in question and the time under consideration when the luminosity is computed; \u02d9 M\u2217(tf) is star formation rate at time tf. The left panel of Figure 1 shows the individual intrinsic Pop-III stellar black-body spectrum per unit stellar mass times the main sequence lifetime for a range of masses for individual Pop III stars (indicated in the legend in units of solar mass), based on data from Marigo et al. (2001). It is easy to see that low mass stars are more ef\ufb01cient producers of IR photons (indicated by the vertical dashed magenta line); for 1 M\u2299to 20 M\u2299, a decrease of approximately 100 for IR intensity per unit stellar mass is observed. For the LW band photons (indicated by the two vertical dashed black lines), the opposite holds: a decrease of approximately four orders of magnitude is seen from 20 M\u2299to 1 M\u2299. In the right panel of Figure 1 we show comparisons the IR (LW) intensities of a single star of mass indicated by the x-axis in magenta (black) solid curves for redshifts z = 25 (z = 7), to be compared to the the threshold intensities for completion suppression of H2 formation by the respective processes shown as the horizontal dashed lines with the corresponding colors. See below for how the the threshold intensities are computed. Wolcott-Green & Haiman (2012) show that complete suppression of H2 formation in minihalos at high redshift is possible by either LW photo-dissociation or IR photo-detachment process. Based on a detailed modeling, they derive a critical radiation intensity for complete suppression of H2 formation of JLW,crit = 1.5 \u00d7 10\u221221ergs\u22121cm\u22122Hz\u22121sr\u22121 (9) at the LW band via photo-dissociation process alone, and a critical radiation intensity of JIR,crit = 6.1 \u00d7 10\u221220ergs\u22121cm\u22122Hz\u22121sr\u22121 (10) at the IR band (h\u03bd = 2eV) via photodetachment process alone, under the assumption of the existence of the respective backgrounds, not internal radiation. \f\u2013 4 \u2013 log h\u03bd(eV) 0 1 2 log J \u00d7 tms/M\u2217(erg/Hz/sr/M\u2299) 30 31 32 33 34 35 36 100 70 50 30 20 10 6 3 2 1.5 1 logM\u2217(M\u2299) 0 1 2 log J(erg/cm2/s/Hz/sr) -22 -21 -20 -19 -18 -17 z=25 z=7 z=25 z=7 Single star IR intensity IR suppression threshold Single star LW intensity LW suppression threshold Fig. 1.\u2014 Left panel: shows the intrinsic black-body spectra of individual Pop-III stars per unit stellar mass, multiplied by the main sequence lifetime, for a set of stellar masses (in units of solar mass) indicated in the legend. Also shown as the vertical magenta dashed line is the photon energy of 2eV for the photo-detachment process. The LW band is indicated by the two black vertical dashed lines. Right panel: shows the radiation intensity at 2eV (magenta solid curves) and 11.2eV (black solid curves) for a single star of mass shown on the x-axis. 
The star is assumed to be located at the center and the intensity is measured at the core radius of the minihalo (see text for de\ufb01nition of core radius). Two cases are shown, one for a minihalo at z = 25 with virial temperature of 103 K (upper solid curves) and the other at z = 7 with virial temperature of 104 K (lower solid curves). For both IR and LW photons, no absorption is assumed for this illustration. The horizontal dashed lines with the same corresponding colors are the threshold intensity for complete suppression of H2 formation by the respective processes. We consider the requirement of suppression of either H2 or H\u2212formation in the central core region of minihalos, which is likely most stringent compared to less dense gas at larger radii. Following Shapiro et al. (1999) we adopt the core radius and density to be rc = rv/29.4, \f\u2013 5 \u2013 which is then rc = 26.7h\u22121pc \u0012 Tv 1.98 \u00d7 104K \u0013 1 2 \u00120.6 \u00b5P \u0013 1 2 \u0012\u2126m \u2126z m \u2206c 18\u03c02 \u0013\u22121 2 \u00121 + z 10 \u0013\u22121 2 . (11) and hydrogen number density in the core is nc = 514nv (nv is the gas number density at the virial radius): nc = 4.5cm\u22123 \u00121 + z 10 \u00133 . (12) Since we use the numerical results from Wolcott-Green & Haiman (2012) on photo-detachment, it will be instructive to gain a physical understanding of its origin. The photodetachment cross section is \u03c3\u2212= 2.1 \u00d7 10\u221216(\u03f5 \u22120.755)3/2 \u03f53.11 cm2 (13) where \u03f5 is the photon energy in units of eV. The radiative association rate coef\ufb01cient is k\u2212= 1.3\u00d710\u22129cm3 s\u22121. Thus, with JIR,crit = 6.1 \u00d7 10\u221220erg/s/cm2/Hz/sr at 2eV and minihalo core density of nc = 31cm\u22123 at z = 18 (see Equation 1) (z = 18 is used in Wolcott-Green & Haiman (2012)) and assuming that the spectrum shape of \u221d\u03bd0 in the range 0.755 \u221213.6eV, one \ufb01nds that the ratio of photo-detachment rate to radiative association rate is 0.46; the ratio becomes 1.5 if one assumes the spectrum shape of \u221d\u03bd+1. Note that in the Raleigh-Jeans limit the spectral shape goes as \u221d\u03bd+2 (see Figure 1). We now see that when the photo-detachment rate and radiative association rate are approximately equal in the minihalo core, H2 formation is effectively completely suppressed, as one would have expected. This thus provides an order of magnitude understanding of the Wolcott-Green & Haiman (2012) results. Given the expected little dust content in very metal poor gas in minihalos, the optical depth for IR photons at 2eV is negligible. As a numerical example, the core hydrogen column density would be Nc \u2261rcnc = 1.5 \u00d7 1020cm\u22122 for a minihalo of Tv = 104 K at z = 8. Using the gas to dust column ratio (Draine 2003) with the assumption that dust content is linearly proportional to metallicity yields AV = 0.08(Z/ Z\u2299) mag in this case. It is easy to see that we may safely neglect optical depth effect for IR photons in question. For LW photons, H2 self-shielding effect may be important. We include, conservatively, for maximum H2 self-shielding of LW radiation by placing all sources at the center of the minihalo with the self-shielding reduction of LW photons using the accurate \ufb01tting formula from Draine & Bertoldi (1996) for a halo at z = 7 with Tv = 104 K, corresponding Doppler parameter b = 13 km s\u22121, H2 fraction of fH2 = 10\u22123 and H2 column density equal to fH2rcnc. 
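The order-of-magnitude comparison made above between the photo-detachment rate and the rate k−nH can be checked with a short numerical integration of the cross section of Eq. (13) over an assumed power-law spectrum between 0.755 and 13.6 eV, normalized to the intensity at 2 eV. The printed ratio depends on the assumed spectral slope and normalization, so it is indicative only.

```python
import numpy as np
from scipy.integrate import quad

H_PLANCK = 6.626e-27           # erg s

def sigma_pd(eps_eV):
    """H- photo-detachment cross section (Eq. 13), eps in eV, result in cm^2."""
    return np.where(eps_eV > 0.755,
                    2.1e-16 * (eps_eV - 0.755) ** 1.5 / eps_eV ** 3.11, 0.0)

def photodetachment_rate(J2eV, slope=0.0):
    """Rate (s^-1) for J_nu = J2eV * (E / 2 eV)^slope over 0.755-13.6 eV."""
    def integrand(eps):
        J = J2eV * (eps / 2.0) ** slope               # erg s^-1 cm^-2 Hz^-1 sr^-1
        # per unit photon energy in eV; the eV->erg and erg->Hz conversions cancel to 1/h
        return 4 * np.pi * J * sigma_pd(eps) / (eps * H_PLANCK)
    return quad(integrand, 0.755, 13.6)[0]

J_IR_crit = 6.1e-20                                   # critical 2 eV intensity (Eq. 10)
n_H = 4.5 * ((1 + 18) / 10.0) ** 3                    # core density at z = 18 (Eq. 12)
gamma_pd = photodetachment_rate(J_IR_crit, slope=0.0)
print(gamma_pd, gamma_pd / (1.3e-9 * n_H))            # compare with k_- * n_H
```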
This case is contrasted with the hypothetical case where self-shielding is neglected. The left panel of Figure 2 shows the critical cumulative stellar mass required to completely suppress further star formation, as a function of the lower mass cutoff of the IMF Mlow, following a minihalo of Tv = 103K at z = 25 through its becoming an atomic cooling halo at z = 7. In making this plot, we have adopted a Monte Carlo approach to randomly sample the IMF , assuming each starburst lasts about 4Myr, a time scale to approximate the effect of \f\u2013 6 \u2013 log Mlow (Msun) 0 0.5 1 1.5 upper limit on log total stellar mass 2 3 4 5 6 7 Detachment Dissociation w/o self-shield Dissociation w/ self-shield log Mlow (Msun) 0 0.5 1 1.5 upper limit on metallicity [Fe/H] -3.5 -3 Detachment:atomic halo@z=7 Fig. 2.\u2014 Left panel: shows the critical cumulative stellar mass for complete suppression of H2 formation, as a function of the lower mass cutoff of the IMF Mlow, via either the photodissociation process by LW photons with (blue open squares) and without H2 self-shielding of LW photons (blue solid squares) or the photo-detachment process by infrared photons (red solid dots). In this example, we assume that a minihalo of virial temperature Tv = 103K is formed at z = 25 when star formation commences, and the critical stellar mass (i.e., upper limit on total stellar mass) is evaluated at z = 7 when the minihalo has grown to a virial temperature of Tv = 104K. Right panel: shows the upper bound on the mean gas metallicity, corresponding to the critical stellar mass shown in the left panel, evaluated at z = 7 when the minihalo has grown to a virial temperature of Tv = 104K. In both panels, the errorbars indicate the dispersions obtained by Monte Carlo realizations of different star formation histories, described in the text. supernova blowout. While simulations have shown that the separation of episodic starbursts is about 20 \u2212100Myr (e.g., Kimm & Cen 2014) for atomic cooling halos, we expect that the separations for minihalos would be larger, thanks to the more violent blowouts of gas by supernovae out of shallower potential wells and less ef\ufb01cient cooling in minihalos for gas return. To stay on the conservative side, we use temporal separations between star formation episodes of 20Myr. In general, a larger separation gives a lower total stellar mass, because the radiative \f\u2013 7 \u2013 suppression effects are almost entirely dominated by stars formed within the ongoing starburst (not by stars from previous starbursts) and often the radiation from a single star is enough to provide the necessary suppression (see the right panel of Figure 1). On details regarding the Monte Carlo realizations, within each starburst, we randomly draw stars from the IMF with a lower mass cutoff of Mlow, until the radiation intensity in IR or UV, separately, at the core radius exceeds the required threshold. We keep track of stars formed in starbursts at higher redshift and take into account their radiative contributions given their main sequence lifetimes. Since we can not \u201cdraw\" a fractional star, in cases where a single star would already exceed the required threshold, stellar mass is higher than if fractional stars can be drawn. Based on the Monte Carlo random sampling procedure to draw stellar distribution from the IMF , the obtained dispersion are shown as vertical bars on symbols in both panels of Figure 2. 
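A stripped-down version of the Monte Carlo procedure just described is sketched below. The per-star 2 eV intensity at the core radius and the main-sequence lifetimes are crude stand-ins for the Figure 1 black-body values (Marigo et al. 2001), so only the structure of the calculation, not the printed numbers, should be taken from it.

```python
import numpy as np

rng = np.random.default_rng(0)
J_IR_CRIT = 6.1e-20                     # threshold 2 eV intensity (Eq. 10)

def sample_salpeter(n, m_low=1.0, m_up=100.0):
    """Inverse-CDF draw from the Salpeter IMF of Eq. (7)."""
    a = -1.35
    u = rng.random(n)
    return (m_low**a + u * (m_up**a - m_low**a)) ** (1.0 / a)

def j_2ev_at_core(m_star, r_core_cm):
    """Stand-in for the per-star 2 eV intensity at the core radius (hypothetical scaling)."""
    l_nu = 1e18 * m_star ** 2.5          # erg s^-1 Hz^-1 sr^-1, toy value
    return l_nu / (4 * np.pi * r_core_cm ** 2)

def t_ms(m_star):
    """Rough main-sequence lifetime in Myr (illustrative only)."""
    return 1.0e4 * m_star ** -2.5

def run_history(t_end=650.0, dt_burst=20.0, m_low=1.0, r_core_cm=3.1e19):
    """Bursts every dt_burst Myr; within each burst, draw stars until the summed
    2 eV intensity of still-living stars exceeds the suppression threshold."""
    stars = []                           # list of (mass, formation time in Myr)
    for t in np.arange(0.0, t_end, dt_burst):
        while sum(j_2ev_at_core(m, r_core_cm) for m, tf in stars
                  if t - tf < t_ms(m)) < J_IR_CRIT:
            stars.append((sample_salpeter(1, m_low=m_low)[0], t))
    return stars

stars = run_history()                    # ~650 Myr spans roughly z = 25 to z = 7
print(len(stars), sum(m for m, _ in stars))
```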
It is evident that, taking into account H2 self-shielding of LW photons, for the entire range of Mlow considered, the destructive effect due to photo-detachment is larger by two-three orders of magnitude than that due to photo-dissociation taking into account attenuation for LW photons. Thus, we will use the photo-detachment effect to place an upper bound on stellar mass that can form before further H2 formation hence star formation is completely suppressed. The amount of stars formed within minihalos is small, at \u223c103 M\u2299, prior to the minihalo becoming an atomic halo. This self-regulation of star formation in minihalos likely have a signi\ufb01cant impact on the possible contribution of minihalos to reionization. A full characterization of this effect would need detailed simulations with this important process included. Wise et al. (2014) \ufb01nd stellar mass of 103.5 \u2212104.0 M\u2299in minihalos of mass 106.5 \u2212107.5 M\u2299, which is approximately a factor of at least 3 \u221210 higher than allowed, even compared to the largest possible minihalos (before their becoming atomic cooling halos) considered here, as shown in the left panel of Figure 2. We note that the amount stars formed are a result of accumulation of the number of star formation episodes. We have \"maximized\" the stellar mass by using a conservative episodic interval and considering the maximum minihalos at a low redshift z = 7. Obviously, for smaller minihalos at higher redshift with longer \u201cquiet\" periods the amount of stellar mass formed will be smaller. This suggests that the contribution of stars formed in minihalos to reionization may be substantially reduced. We estimate that the contribution of minihalos to cosmological reionization photon budget is likely limited to a few percent. Next, we consider the metal enrichment due to stars formed in minihalos. To compute that, we use the relation between the nickel (which decays to iron) mass produced by a supernova of mechanical explosion energy E: log Mni M\u2299 = 1.49 log E 1050 erg \u22122.9 (14) (Pejcha & Prieto 2015) and the relation between explosion energy E and the main sequence stellar mass M: E 1051 erg = ( M 10.8 M\u2299 )2 (15) \f\u2013 8 \u2013 (Poznanski 2013). We assume all stars with main sequence mass above 8 M\u2299explode as supernavae, except the two intervals 17 \u221223 M\u2299and \u226540 M\u2299, which produce black holes based on the so-called compact parameter \u03be as a physical variable (e.g., O\u2019Connor & Ott 2011; Pejcha & Thompson 2015). The right panel of Figure 2 shows the expected average metallicity when an atomic cooling halo is reached at z = 7. corresponding to the critical stellar mass shown as solid red dots in the left panel of Figure 2. We see that, on average, the expected maximum metallicity due to stars formed in minihalos falls into the range of \u22123.3\u00b10.2 in solar units, for Mlow = 1 \u221230 M\u2299. We use iron mass fraction of 1.77 \u00d7 10\u22123 as solar abundance (Asplund et al. 2009). We have conservatively assumed that enrichment process takes places in a closed-box fashion, with respect to metals produced. Furthermore, we have simplistically assumed that none of the metals produced is not incorporated back into subsequent stars. In reality, retainment of metals produced by stars in minihalos is probably far from complete, given their shallow potential wells, i.e., it is not a closed box. Furthermore, some of the earlier produced metals inevitably get reformed into stars. 
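A compact implementation of the yield bookkeeping of Eqs. (14)-(15), averaged over the Salpeter IMF of Eq. (7), is given below; the closed-box [Fe/H] at the end uses illustrative stellar and gas masses rather than the values adopted in the paper.

```python
import numpy as np

def ni_mass(m):
    """56Ni (-> iron) yield per star in Msun via Eqs. (14)-(15); zero for stars that
    do not explode (M < 8 Msun and the 17-23 and >= 40 Msun black-hole channels)."""
    explodes = (m >= 8) & ~((m >= 17) & (m <= 23)) & (m < 40)
    e50 = 10.0 * (m / 10.8) ** 2                 # explosion energy in 10^50 erg (Eq. 15)
    return np.where(explodes, 10 ** (1.49 * np.log10(e50) - 2.9), 0.0)

def iron_per_stellar_mass(m_low=1.0, m_up=100.0, alpha=2.35):
    """IMF-averaged iron yield per unit stellar mass formed (Salpeter slope)."""
    m = np.linspace(m_low, m_up, 20000)
    w = m ** -alpha                              # n(M) ~ M^-2.35
    return (w * ni_mass(m)).sum() / (w * m).sum()

y_fe = iron_per_stellar_mass(m_low=1.0)
M_star, M_gas = 1.0e3, 2.0e7                     # Msun, illustrative closed-box masses
FeH = np.log10(y_fe * M_star / M_gas / 1.77e-3)  # solar iron mass fraction from the text
print(y_fe, FeH)
```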
These conservative approaches used, along with our conservative adoption of 20Myr starburst separation, indicate that that the actual metallicity due to stars in minihalos may be signi\ufb01cantly below the maximum allowed values indicated in the right panel of Figure 2. In other words, we expect that the metallicity \ufb02oor put in by stars formed in previous minihalos, when an atomic cooling halo is formed, is likely signi\ufb01cantly below \u22123.3\u00b10.2 in solar units. There is one possible caveat in the arguments leading to the results. Despite the resultant low metallicity due to self-suppression of star formation by negative IR radiation feedback, the metallicity is not zero. Thus, it is prudent to check if the metallicity is suf\ufb01ciently low to justify the neglect of low-temperature metal cooling. We \ufb01nd that, using [Z/H] = \u22123 and molecular hydrogen fraction of fH2 = 10\u22123, the ratio of the cooling rate of metal lines (primarily due to OI, CII, SiII aand FeII) to that of molecular hydrogen is found to be (4.1\u00d710\u22122, 1.6\u00d710\u22123, 2.6\u00d710\u22124) at temperature T = (103, 103.5, 104) K (Maio et al. 2007), respectively. Empirically, experimental simulations have found that, in lieu of molecular hydrogen cooling, low-temperature metal cooling with a metallicity of [Z/H] \u223c\u22121.5 produces cooling effect comparable to that molecular hydrogen fraction with fH2 = 10\u22123 (Kimm 2016, private communications), which is consistent with above estimates based on cooling rates. Thus, the low-temperature metal cooling is probably no more than (\u223c2%, 0.1%, 0.01%) of the molecular hydrogen cooling in the case of absent negative feedback examined here, if [Z/H] \u2264\u22123.3, in minihalos with virial temperatures Tv = (103, 103.5, 104) K, respectively. Therefore, the low-temperature metal cooling is unlikely to be able to make up the \u201clost\" H2 cooling, due to negative feedback from local radiation, to alter the suppression of star formation. \f\u2013 9 \u2013 3. Discussion and" + }, + { + "url": "http://arxiv.org/abs/1507.07934v1", + "title": "Testing Dark Matter Halo Models of Quasars With Thermal Sunyaev-Zeldovich Effect", + "abstract": "A statistical analysis of stacked Compton$-y$ maps of quasar hosts with a\nmedian redshift of $1.5$ using Millennium Simulation is performed to address\ntwo issues, one on the feedback energy from quasars and the other on testing\ndark matter halo models for quasar hosts. On the first, we find that, at the\nresolution of FWHM=$10$ arcmin obtained by Planck data, the observed thermal\nSunyaev-Zeldovich (tSZ) effect can be entirely accounted for and explained by\nthe thermal energy of halos sourced by gravitational collapse of halos, without\na need to invoke additional, large energy sources, such as quasar or stellar\nfeedback. Allowing for uncertainties of dust temperature in the calibration of\nobserved Comton$-y$ maps, the maximum additional feedback energy is $\\sim 25\\%$\nof that previously suggested. 
Second, we show that, with FWHM=$1$ arcmin beam,\ntSZ measurements will provide a potentially powerful test of quasar-hosting\ndark matter halo models, limited only by possible observational systematic\nuncertainties, not by statistical ones, even in the presence of possible quasar\nfeedback.", + "authors": "Renyue Cen, Mohammadtaher Safarzadeh", + "published": "2015-07-28", + "updated": "2015-07-28", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction The nature of the dark matter halos hosting quasars remain debatable. There are primarily two competing models. One is the traditional, popular HOD (halo occupation distribution) model, which is based on assigning a probability function to quasars to reside in a halo of a given mass in order to match the observed quasar clustering strength (Zheng et al. 2005, 2007; Shen et al. 2013). The other model is a physically motivated model recently put forth (Cen & Safarzadeh 2015, \u2019CS model\u2019 hereafter). While the CS model, like the HOD based model, matches the observed clustering of quasars, the masses of the dark matter halos in the CS model are very different from those of the HOD based model. For example, at z \u223c0.5 \u22122, the host halos in the CS model have masses of \u223c1011 \u22121012 M\u2299, compared to (0.5 \u22122) \u00d7 1013 M\u2299in the HOD model. This then offers a critical differentiator between the CS and HOD models, namely, the cold gas content in quasars host galaxies. Speci\ufb01cally, because of the large halos mass required in the HOD 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu 2Johns Hopkins University, Department of Physics and Astronomy, Baltimore, MD 21218, USA arXiv:1507.07934v1 [astro-ph.GA] 28 Jul 2015 \f\u2013 2 \u2013 model, quasars hosts have much lower content of cold gas than in the CS model. Cen & Safarzadeh (2015) have shown that the CS model is in excellent with the observed covering fraction of 60% \u221270% for Lyman limit systems within the virial radius of z \u223c2 quasars (Prochaska et al. 2013). On the other hand, the HOD model is inconsistent with observations of the high covering fraction of Lyman limit systems in quasar host galaxies. Given the fundamental importance of the nature of dark matter halos hosting quasars, in this Letter we present another potentially powerful test to distinguish between these two competing models. We show that upcoming measurements of thermal Sunyaev-Zeldovich effect at arc-minute resolution (or better) should be able to differentiate between them with high con\ufb01dence. 2. Simulations and Analysis Method We utilize the Millennium Simulation (Springel et al. 2005) to perform the analysis. A set of properties of this simulation that meet our requirements includes a large box of 500h\u22121Mpc, a relatively good mass resolution with dark matter particles of mass 8.6 \u00d7 108h\u22121 M\u2299, and a spatial resolution of 5 h\u22121 kpc comoving. The mass and spatial resolutions are adequate for capturing halos of masses greater than 1011 M\u2299, which are resolved by at least about 100 particles and 40 spatial resolution elements for the virial diameter. Dark matter haloes are found through a friends-of-friends (FOF) algorithm. Satellite halos orbiting within each virialized halo are identi\ufb01ed applying a SUBFIND algorithm (Springel et al. 2001). 
The adopted \u039bCDM cosmology parameters are \u2126m = 0.25, \u2126b = 0.045, \u2126\u039b = 0.75, \u03c38 = 0.9 and n = 1, where the Hubble constant is H0 = 100h km s\u22121 Mpc\u22121 with h = 0.73. We do not expect that our results strongly depend on the choice of cosmological parameters within reasonable ranges, such as those from Komatsu et al. (2011). The steps taken to construct the tSZ maps are as follow. For each model (either CS or HOD) quasar model, we sample the quasar host dark matter halos at each redshift, z = 0.5, 1.4 and 3.2. For each quasar host, we select all halos within a projected radius of 80 arcmin centered at the quasar in a cylinder with the depth equal to the length of the simulation box in a given direction. The thermal energy of a halo of mass Mh is calculated using Eth = 3\u2126b 2\u2126m Mh\u03c32 (1) where Mh is the halo mass and \u03c3 the 1-d velocity dispersion computed as \u03c3 = 0.01 \u00d7 \u0010 Mh M\u2299 \u00111/3h\u2126M(z = 0) \u2126M(z) i1/6 (1 + z)1/2[km/s]. (2) The energy of each halo is then distributed uniformly in projected area inside its virial radius rv. To construct SZ maps, we project the energy of each halo using a cloud-in-cell technique in 2-d. We obtain the Compton-y parameter corresponding to total projected thermal energy Eth/A at each pixel with: y = 0.88 \u00d7 0.588 \u00d7 2\u03c3TEth 3mec2A (3) \f\u2013 3 \u2013 where A is the area of the pixel, \u03c3T the Thomson scattering cross section, me the electron mass, c the speed of light, and 0.88 and 0.58 accounts for electron density to mass density, and molecular weight, respectively. We limit the dark matter halos that contribute to the y calculation to the mass range [3 \u00d7 1012, 5.5 \u00d7 1014] M\u2299at z = 0.5 and [3 \u00d7 1012, 6.5 \u00d7 1014] M\u2299 at both z = 1.4 and 3.2. The upper mass limits is used in order to enable comparisons to observations, accounting for the fact that in the Planck observation generated y maps the clusters more massive than these indicated upper limits are masked out (Planck Collaboration et al. 2014). The lower mass limits re\ufb02ect the fact that less massive halos would be cold stream dominated instead of virial shock heated gas dominated; changing the lower mass limit from 3 \u00d7 1012 M\u2299to 1 \u00d7 1012 M\u2299only slightly increases the computed y parameter. To enable comparison with the observed Compton-y maps stacked over a range of redshift z \u223c0.1 \u22123.0 with median redshift of zmed \u223c1.5 (Ruan et al. 2015), we appropriately assign weightings of (36%, 51%, 13%) for z = (0.5, 1.4, 3.2) maps, respectively, and sum up the contributions from the three redshifts. These weightings are adopted to mimic the redshift distribution of stacked quasars used in the observational anaylysis. To compute the variance of the y-parameter we make nine maps each averaged over 1/9th of total individual Comptony maps of the quasar hosts at each redshift for either of the HOD or CS models. Then we have 9 \u00d7 9 \u00d7 9 = 729 possible \ufb01nal maps constructed with the weightings de\ufb01ned above. The dispersion and the mean is then computed considering these 729 \ufb01nal maps. In addition, we construct isolated quasar host only y maps, with only the quasar host halo\u2019s energy contributing to the \ufb01nal tSZ map. In other words, in those isolated quasar y maps, we exclude effects from projected, clustered neighboring halos. 3. 
Validating Quasar Models with Planck Thermal Sunyaev-Zeldovich Effect Maps We \ufb01rst validate the quasar models by comparing them to Planck observations. Figure 1 shows Compton-y maps for \ufb01ve randomly selected quasar maps at z = 1.4 (including the projection effects) in the \ufb01ve panels other than the top-left panel and the averaged over 10,000 such individual maps is shown in the top-left panel. Each individual map is centered on the quasar halo from the CS model. Halos that contribute to the signal are in the mass range we describe above and in some cases the quasar halo itself does not contribute to the signal if its mass fall outside the mass range. The left panel of Figure 2 shows the Compton-y radial pro\ufb01le obtained by sampling for CS (blue shaded region) and HOD (purple shaded region) model, respectively. Overplotted is the result obtained by stacking the Planck tSZ maps for quasars in the redshift range (0.1, 3.0) with median redshift of 1.5 (green shaded region, Ruan et al. 2015). To compare with Planck tSZ maps, we smooth our synthetic maps with a beam of FWHM=10 arcmin. We see that, at the resolution of Planck of FWHM=10 arcmin, both CS and HOD model are consistent with the observed level of tSZ being contributed entirely by shocked heated, virialized gas \f\u2013 4 \u2013 \u221230 \u221220 \u221210 0 10 20 30 arcmin \u221230 \u221220 \u221210 0 10 20 30 arcmin mean tSZ Compton\u2013y map 2.9e-07 3.0e-07 3.1e-07 3.2e-07 3.3e-07 3.4e-07 3.5e-07 3.6e-07 \u221230 \u221220 \u221210 0 10 20 30 arcmin \u221230 \u221220 \u221210 0 10 20 30 arcmin 0.0e+00 1.5e-07 3.0e-07 4.5e-07 6.0e-07 7.5e-07 9.0e-07 1.0e-06 1.2e-06 1.3e-06 \u221230 \u221220 \u221210 0 10 20 30 arcmin \u221230 \u221220 \u221210 0 10 20 30 arcmin 0.0e+00 1.5e-07 3.0e-07 4.5e-07 6.0e-07 7.5e-07 9.0e-07 1.0e-06 1.2e-06 1.3e-06 \u221230 \u221220 \u221210 0 10 20 30 arcmin \u221230 \u221220 \u221210 0 10 20 30 arcmin 0.0e+00 1.5e-07 3.0e-07 4.5e-07 6.0e-07 7.5e-07 9.0e-07 1.0e-06 1.2e-06 1.3e-06 \u221230 \u221220 \u221210 0 10 20 30 arcmin \u221230 \u221220 \u221210 0 10 20 30 arcmin 0.0e+00 1.5e-07 3.0e-07 4.5e-07 6.0e-07 7.5e-07 9.0e-07 1.0e-06 1.2e-06 1.3e-06 \u221230 \u221220 \u221210 0 10 20 30 arcmin \u221230 \u221220 \u221210 0 10 20 30 arcmin 0.0e+00 1.5e-07 3.0e-07 4.5e-07 6.0e-07 7.5e-07 9.0e-07 1.0e-06 1.2e-06 1.3e-06 Fig. 1.\u2014 Top-left panel show the average Compton-y map of 10,000 individual maps centered on the quasar host sampled from CS model at z = 1.4. The other \ufb01ve panels show \ufb01ve randomly selected individual maps for \ufb01ve quasar halos. The pixel size is 0.034 arcmin. within massive halos. Given our generous mass limit of contributing halos and neglect of gravitationally shock heated gas outside the virial radius, it is likely that the estimates for CS and HOD models shown in the left panel of Figure 2 are somewhat under-estimated. Thus, in disagreement with Ruan et al. (2015) with respect to feedback energy from other nongravitational sources, we see little evidence for a need of a large contribution to the tSZ from non-gravitational energy sources, including quasars or stars. \f\u2013 5 \u2013 To better understand this discrepancy, we show in right panel of Figure 2 the Compton-y pro\ufb01le in HOD model, when only the quasar-hosting halo contributes to the thermal energy in the map, neglecting the contribution from clustered neighboring halos. 
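The map construction of Eqs. (1)-(3) and the beam smoothing used above reduce to a few lines. The sketch below evaluates the mean y of a single halo whose thermal energy is spread uniformly inside an assumed 300 kpc projected virial radius, paints it onto a map with 0.034 arcmin pixels, and compares a 1 arcmin with a 10 arcmin Gaussian beam; the halo mass, radius, and map size are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from astropy import units as u
from astropy.constants import sigma_T, m_e, c

OM, OL, OB = 0.25, 0.75, 0.045                    # Millennium cosmology quoted above

def vel_disp_kms(mh, z):
    """1-d velocity dispersion of a halo of mass mh [Msun] (Eq. 2)."""
    om_z = OM * (1 + z) ** 3 / (OM * (1 + z) ** 3 + OL)
    return 0.01 * mh ** (1 / 3) * (OM / om_z) ** (1 / 6) * (1 + z) ** 0.5

def compton_y(mh, z, area_kpc2):
    """Mean y when the thermal energy of Eq. (1) is spread over an area A (Eq. 3)."""
    eth = 1.5 * (OB / OM) * mh * u.Msun * (vel_disp_kms(mh, z) * u.km / u.s) ** 2
    y = 0.88 * 0.588 * 2 * sigma_T * eth / (3 * m_e * c ** 2 * area_kpc2 * u.kpc ** 2)
    return y.decompose().value

def smooth_beam(y_map, fwhm_arcmin, pix_arcmin):
    """Convolve a y map with a Gaussian beam of the given FWHM."""
    return gaussian_filter(y_map, fwhm_arcmin / 2.355 / pix_arcmin, mode="nearest")

# a 10^13 Msun host at z = 1.4; 1 arcmin corresponds to roughly 516 kpc there
pix, kpc_per_pix, n = 0.034, 0.034 * 516.0, 600
y0 = compton_y(1e13, 1.4, area_kpc2=np.pi * 300.0 ** 2)
x = np.arange(n) - n / 2
rr = np.hypot(*np.meshgrid(x, x)) * kpc_per_pix
ymap = np.where(rr < 300.0, y0, 0.0)
print(y0,
      smooth_beam(ymap, 1.0, pix)[n // 2, n // 2],    # a 1 arcmin beam keeps the peak
      smooth_beam(ymap, 10.0, pix)[n // 2, n // 2])   # a 10 arcmin beam dilutes it
```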
We see that the isolated quasar map yields a tSZ signal peaked around y \u223c1.4 \u00d7 10\u22128 (black curve with shaded area) versus y \u223c3.0 \u00d7 10\u22127 as seen in the left panel where all neighboring halos are included. It is hence very clear that the overall Compton-y parameter re\ufb02ects the collective thermal energy contribution of halos clustered around the quasar hosting halos in both CS and HOD models. The collective effect exceeds that of the quasar host halo by more than an order of magnitude. We attribute the suggested need of additional quasar feedback energy in order to account for the observed tSZ effect proposed by Ruan et al. (2015) to the fact that projection effects due to clustered halos are not taken into account in their analysis. In right panel of Figure 2 we also show the mean tSZ signals for quasars at three different redshifts separately. Since in this case no projected structures are included, the results are commensurate with the quasar halo masses in the models that increase with increasing redshift. In the (HOD, CS) model (Cen & Safarzadeh 2015), the lower mass threshold of quasar hosts is [2 \u00d7 1013, (2 \u22125) \u00d7 1012] M\u2299at z = 3.2, [5.8\u00d71012, (2\u22125)\u00d71011] M\u2299at z = 1.4 and [5.7\u00d71012, (1\u22123)\u00d71011] M\u2299at z = 0.5. It is also worth noting that, in the absence of projection effects, the quasar tSZ signal in the CS model is about a factor of 5 (at z = 3.2) to 25 (at z = 0.5) lower than in the HOD model, due to differences in the quasar host halo masses in the two models. It should be made clear that the projection effects are present at all redshifts. In the middle panel of Figure 2 we show the average Compton-y pro\ufb01le per quasar for the CS (solid curves) and HOD (dashed curves) model separately at z = 0.5 (blue), z = 1.4 (green) and z = 3.2 (red), including projection effects. Two trends are seen and fully understandable. First, overall, the tSZ signal per quasar, with projected structures, increases with decreasing redsh\ufb01t, in the range from z = 0.5 to z = 3.2. This is expected due to continued growth of cosmic structure with time. We note that, if we had not removed the most massive clusters in our tSZ maps (to account for the masking-out of massive clusters in Planck maps (Planck Collaboration et al. 2014), the increase with decreasing redshift would be stronger. Second, the ratio of tSZ signal with projection effects to that without projection effects increases strongly with decreasing redshift, due to the combined effect of decreasing quasar host halo mass and increasing clustering around massive halos with decreasing redshift. In the left panel of Figure 2 the observed y-map values are not dust-corrected. The correction amplitude for dust effect with the procedure used by Ruan et al. (2015), by applying the channel weights from the Hill & Spergel (2014) y-map construction to dust-like (modi\ufb01ed blackbody) spectra, depends sensitively on dust temperature assumed. Greco & Hill (2015, private communications) show that for a dust temperature of 34 K used in Ruan et al. 
(2015), the y-map response is indeed negative over the entire redshift range of the quasar sample, resulting in an increase in total thermal energy in the y-map by about 37%; for lower dust temperatures, the y-map response becomes less negative and could go positive below some temperature for all redshifts; for dust temperature of 20 K, the y-map response is very slightly \f\u2013 6 \u2013 0 5 10 15 20 25 30 QSO\u2013centric distance (arcmin) 2.0 2.2 2.4 2.6 2.8 3.0 3.2 3.4 mean tSZ Compton\u2013y \u00d7 107 smoothed w/ Planck FWHM = 10 arcmin CS QSO Halo Model HOD QSO Halo Model Ruan et al 2015 0 2 4 6 8 10 12 14 QSO\u2013centric distance (arcmin) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 mean tSZ Compton\u2013y \u00d7 107 CS , z = 0.5 CS , z = 1.4 CS , z = 3.2 HOD , z = 0.5 HOD , z = 1.4 HOD , z = 3.2 0 2 4 6 8 10 12 14 QSO\u2013centric distance (arcmin) 0 1 2 3 4 5 6 mean tSZ Compton\u2013y without projected structure CS, z = 0.5 (y \u00d7 109) CS, z = 1.4 (y \u00d7 109) CS, z = 3.2 (y \u00d7 109) HOD, z = 0.5 (y \u00d7 108) HOD, z = 1.4 (y \u00d7 108) HOD, z = 3.2 (y \u00d7 108) HOD, (y \u00d7 108) Fig. 2.\u2014 Left panel shows the mean Compton-y pro\ufb01le of quasars, including projection effects, for the CS (blue shaded region) and HOD (purple shaded region) model, respectively, for a synthetic sample of quasars with redshift distribution [z = (0.1, 3.0) with median redshift of 1.5] mimicing that of the observed sample used in Ruan et al. (2015). The radial pro\ufb01le of the observed quasars (Ruan et al. 2015) is overplotted as the green shaded region without dust correction. We have normalized the observed radial pro\ufb01le with that of the models at 30 arcmin radius, which corresponds to the \u201cbackground\". Middle panel shows the mean Compton-y pro\ufb01le of a quasar, including projection effects, at three separate redshifts, z = 0.5 (blue curves), z = 1.4 (green curves) and z = 3.2 (red curves), for the CS (solid curves) and HOD (dashed curves) model separately. Right panel is similar to the middle panel, except that only the quasar-hosting halo contributes to the thermal energy in the map, without considering the contribution from other halos due to projection effects. Also shown in black is the mean Compton-y pro\ufb01le in HOD model, without projection effects, with appropriate weightings in accordance with that of the observed sample used in Ruan et al. (2015). negative at z < 1.4 but signi\ufb01cantly positive at z > 1.4, with the net y-map response for the quasar sample slightly positive. With regard to dust temperature, observational evidence is varied but data suggesting lower temperatures are widespread. For example, Schlegel et al. (1998) indicate dust temperature of 17 \u221221 K in our own Galaxy; Kashiwagi & Suto (2015) suggest a dust temperature of 18 K for dust around galaxies from far-infrared image stacking analysis; Greco et al. (2014) suggest an overall dust temperature of 20 K in modeling the cosmic infrared background. Thus, the contribution of dust emission itself to y-map depends signi\ufb01cantly on the dust temperature and the exact temperature of dust is uncertain at best and the actual y-map response is thus uncertain. Even if we take the dust-corrected y-map from Ruan et al. (2015), given our results that the dust-uncorrected y values can be explained soley by gravitational energy of halos hosting QSOs and neighboring ones, the QSO contribution is at most about 1/4 of what is inferred in Ruan et al. (2015). \f\u2013 7 \u2013 4. 
Testing Competing Quasar Models with Arc-Minute Resolution Thermal Sunyaev-Zeldovich Effect Maps Having validated both the CS and HOD models by the Planck tSZ data on 10 arcmin scales in the previous section, here we propose a test to differentiate between them. Figure 3 shows the stacked tSZ map of quasars with a median redshift of 1.5 smoothed with FWHM=1 arcmin in the CS (left panel) and HOD (right panel) model. The difference between the two model is visually striking: the HOD model, being hosted by much more massive halos than the CS model, displays a much more peaked tSZ pro\ufb01le at the arcmin scales. The reason is that one arcmin corresponds to 516 kpc at z = 1.5, indicating that individual quasar hosting halos of mass \u22651013 M\u2299in the HOD model are no longer signi\ufb01cantly smoothed out by a 1 arcmin beam. The quasar hosting halos in the CS model, on the other hand, have much lower masses than those in the HOD model and hence have much lower tSZ effect at the arcmin scale. At the arcmin scale, projection effects are much reduced compared to the 10 arcmin scale. \u22124 \u22122 0 2 4 arcmin \u22124 \u22122 0 2 4 arcmin CS model mean tSZ Compton\u2013y map 3.2e-07 4.0e-07 4.8e-07 5.6e-07 6.4e-07 7.2e-07 8.0e-07 8.8e-07 9.6e-07 \u22124 \u22122 0 2 4 arcmin \u22124 \u22122 0 2 4 arcmin HOD model mean tSZ Compton\u2013y map 3.2e-07 4.0e-07 4.8e-07 5.6e-07 6.4e-07 7.2e-07 8.0e-07 8.8e-07 9.6e-07 Fig. 3.\u2014 Left panel shows the stacked tSZ map of quasars with a median redshift of 1.5 smoothed with FWHM=1 arcmin in the CS model. Right panel shows the same for HOD model. Figure 4 quanti\ufb01es what is seen in Figure 3 for the two quasar models. We see that, with FWHM=1 arcmin, the central value of y parameter differs by a factor of about two in the two models: (1.0 \u00b1 0.05) \u00d7 10\u22126 in the HOD model versus (0.55 \u00b1 0.03) \u00d7 10\u22126 in the CS model. This is a large difference and can be easily tested. Before quantifying how the two models may be differentiated, it is useful to understand the distribution of contributions from individual y maps to the averaged y map. Figure 5 shows the probability distribution function (PDF) of y parameter of the central region of radius 1 arcmin of 10,000 individual quasar hosting halos (including projection effects) smoothed with FWHM=1 arcmin (red histogram) and smoothed with FWHM=10 arcmin (blue histogram). It is evident \f\u2013 8 \u2013 0 2 4 6 8 10 QSO\u2013centric distance (arcmin) 0.2 0.4 0.6 0.8 1.0 1.2 mean tSZ Compton\u2013y \u00d7 106 smoothed w/ FWHM = 1 arcmin CS QSO Halo Model HOD QSO Halo Model Fig. 4.\u2014 shows the predicted radial Compton-y pro\ufb01le of CS and HOD model smoothed with 1 arcmin FWHM. \u22129.5 \u22129.0 \u22128.5 \u22128.0 \u22127.5 \u22127.0 \u22126.5 \u22126.0 \u22125.5 \u22125.0 log Compton\u2013y parameter 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Normalized distribution FWHM = 10 arcmin FWHM = 1 arcmin Fig. 5.\u2014 shows the probability distribution function (PDF) of y parameter of the central region of radius 1 arcmin of 10,000 individual quasar hosting halos (including projection effects) smoothed with FWHM=1 arcmin (red histogram) and smoothed with FWHM=10 arcmin (blue histogram) in the CS model. The vertical lines color indicate the median and inter-quartile of the contribution to the mean y value of similar color histograms. that the distribution of log y in both cases is close to gaussian hence the distribution of y is approximately lognormal. 
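Two quantities used in the discussion that follows can be read off directly from such an approximately lognormal set of central y values: the fraction of the highest-y sources needed to supply a given share of the stacked signal, and the bootstrap fractional error on the mean. A minimal sketch, with a toy lognormal sample standing in for the 10,000 measured values:

```python
import numpy as np

rng = np.random.default_rng(1)

def top_fraction_needed(y, shares=(0.25, 0.50, 0.75)):
    """Fraction of the highest-y sources that supplies a given share of the summed signal."""
    ys = np.sort(y)[::-1]
    cum = np.cumsum(ys) / ys.sum()
    return [(np.searchsorted(cum, s) + 1) / len(y) for s in shares]

def bootstrap_frac_error(y, n_boot=2000):
    """Bootstrap fractional error on the mean of the central y values."""
    means = np.array([rng.choice(y, len(y), replace=True).mean() for _ in range(n_boot)])
    return means.std() / y.mean()

y = rng.lognormal(mean=np.log(5e-7), sigma=1.2, size=10000)   # toy parameters
print(top_fraction_needed(y), bootstrap_frac_error(y))
```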
This indicates that the overall contribution to the stacked maps is \f\u2013 9 \u2013 skewed to the high end of the y distribution. We \ufb01nd that 7.8%, 12.1% and 22.5% of high y quasar halos contribute to 25%, 50% and 75% of the overall y value in the case with FWHM= 1 arcmin, 6.3%, 9.6% and 18.4% in the case with FWHM= 10 arcmin. Given the non-gaussian nature, we use bootstrap to estimate errors on the mean y value. We \ufb01nd that the fractional error on the mean, computed by bootstrap sampling from our 10,000 samples, is 3.7% and 3.2% for FWHM=10 arcmin and 1 arcmin cases, respectively. Thus, with a sample of 26, 000 quasars as in Ruan et al. (2015), the fractional error on the mean would be 2% for FWHM=1 arcmin case. Since the fractional difference between the HOD (ycentral = (1.0 \u00b1 0.05) \u00d7 10\u22126) and the CS (ycentral = (0.55 \u00b1 0.03) \u00d7 10\u22126 is 60%, this means that the HOD and CS model can be distinguished at \u223c30\u03c3 level, if statistical uncertainties are the only uncertainties. It is thus likely that the signi\ufb01cance level of differentiating the two models using arcmin scale tSZ effect around quasars will be limited by systematic uncertainties. As stated in \u00a73, there is a possibility that a signi\ufb01cant fraction (\u223c25%) of the observed thermal energy based on y-maps may be due to non-gravitational heating, such as quasar feedback suggested by Ruan et al. (2015). Under the reasonable assumption that the energy from quasar feedback accumulates over time, say via episodic high-energy radio jets, the quasar feedback energy would be proportional to the galaxy stellar mass or approximately the halo mass, given the observed correlation between supermassive black hole mass and the bulge stellar mass or velocity dispersion (e.g., Magorrian et al. 1998; Richstone et al. 1998; Gebhardt et al. 2000; Ferrarese & Merritt 2000; Tremaine et al. 2002). If we further assume that the radial pro\ufb01le of the deposited energy from quasar feedback is the same as that of thermal energy sourced by gravitational energy, it follows then that the central y-value of the (HOD,CS) model would be boosted from [(1.0 \u00b1 0.05) \u00d7 10\u22126, (0.55 \u00b1 0.03) \u00d7 10\u22126] shown in Figure 4 to [(1.4 \u00b1 0.07) \u00d7 10\u22126, (0.77 \u00b1 0.04) \u00d7 10\u22126]. With the inclusion of this systematic uncertainty on quasar feedback energy, the expected central y-value ranges would become [(1.0 \u22121.4) \u00d7 10\u22126, (0.55 \u22120.77) \u00d7 10\u22126], respectively, for the (HOD,CS) model, which remain strongly testable with arcmin resolution tSZ observations. 5." + }, + { + "url": "http://arxiv.org/abs/1504.07248v1", + "title": "Coevolution Between Supermassive Black Holes and Bulges Is Not Via Internal Feedback Regulation But By Rationed Gas Supply Due To Angular Momentum Distribution", + "abstract": "We reason that, without physical fine-tuning, neither the supermassive black\nholes (SMBHs) nor the stellar bulges can self-regulate or inter-regulate by\ndriving away already fallen cold gas to produce the observed correlation\nbetween them. 
We suggest an alternative scenario where the observed mass ratios\nof the SMBHs to bulges reflect the angular momentum distribution of infallen\ngas such that the mass reaching the stable accretion disc is a small fraction\nof that reaching the bulge region, averaged over the cosmological time scales.\nWe test this scenario using high resolution, large-scale cosmological\nhydrodynamic simulations (without AGN feedback), assuming the angular momentum\ndistribution of gas landing in the bulge region to yield a Mestel disc that is\nsupported by independent simulations resolving the Bondi radii of SMBHs. A mass\nratio of $0.1-0.3\\%$ between the very low angular momentum gas that free-falls\nto the sub-parsec region to accrete to the SMBH and the overall star formation\nrate is found. This ratio is found to increase with increasing redshift to\nwithin a factor of $\\sim 2$, suggesting that the SMBH to bulge ratio is nearly\nredshift independent, with a modest increase with redshift, a testable\nprediction. Furthermore, the duty cycle of active galactic nuclei (AGN) with\nhigh Eddington ratios is expected to increase significantly with redshift.\nFinally, while SMBHs and bulges are found to coevolve on $\\sim 30-150$Myr time\nscales or longer, there is indication that, on shorer time scales, the SMBH\naccretion rate and star formation may be less correlated.", + "authors": "Renyue Cen", + "published": "2015-04-27", + "updated": "2015-04-27", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction There is mounting evidence that massive bulges in the nearby universe harbor central SMBHs of mass 106 \u2212109 M\u2299. The correlation between SMBH mass ( MBH) and the bulge (BG) mass ( MBG) or velocity dispersion (\u03c3) (e.g., Magorrian et al. 1998; Richstone et al. 1998; Gebhardt et al. 2000; Ferrarese & Merritt 2000; Tremaine et al. 2002) suggests coevolution. Although alternative models for producing this observed relation are available (e.g., Ostriker 2000; Adams et al. 2001; Colgate et al. 2003; Cen 2007), the correlation is often construed as 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1504.07248v1 [astro-ph.GA] 27 Apr 2015 \f\u2013 2 \u2013 evidence for AGN feedback to regulate the growth of SMBHs and bulges. The idea that AGN feedback may alleviate problems in galaxy formation models (e.g., Kauffmann & Haehnelt 2000; Croton et al. 2006; Somerville et al. 2008) further enhances its appeal. The threedimensional hydrodynamic simulations successfully reproduced the observed MBH/ MBG ratio (e.g., Di Matteo et al. 2005; Hopkins et al. 2006), providing the physical basis for this scenario. This Letter has two goals. First, we make a qualitative examination of the implications of the observed relation between bulges and the central massive objects (CMOs), wherein the two follow a linear relation over four decades in mass. It is shown that neither the SMBHs nor the nuclear star clusters (NSCs) nor the stellar bulges could have played a dominant role in regulating the growth of any of the three components in the way of blowing away a signi\ufb01cant fraction of gas already landed in the respective regions so as to produce the CMO-bulge relation. Second, an alternative model is put forth wherein the correlation between SMBH mass and bulge mass is dictated by the angular momentum distribution of the infalling gas. 
We successfully test this new scenario using ab initio Large-scale Adaptive-mesh-re\ufb01nement Omniscient Zoom-In (LAOZI) cosmological hydrodynamic simulations. 2. Arguments Against Internal Regulation of the Central Components With the ACS Virgo Cluster Survey of early-type galaxies spanning four decades in mass, C\u00f4t\u00e9 et al. (2006) and Ferrarese et al. (2006) \ufb01nd a transition at MB,0 = \u221220.5, where the brighter galaxies lack resolved stellar nuclei and SMBHs dominate the CMO mass, while fainter ones have resolved stellar nuclei that dominate the CMO mass. Furthermore, the logarithm of the mean nucleus-to-galaxy luminosity ratio in fainter, nucleated galaxies, \u22122.49 \u00b1 \u22120.09 (\u03c3 = 0.59 \u00b1 \u22120.10) is indistinguishable from that of the SMBH-to-bulge mass ratio, \u22122.61 \u00b1 \u22120.07 (\u03c3 = 0.45 \u00b1 \u22120.09). A similar result is found by Wehner & Harris (2006) using a different data set. Turner et al. (2012) \ufb01nd an identical relation using early-type galaxies in the ACS Fornax Cluster Survey. We express the universal scaling relation between CMOs and bulges as MCMO = MBH + MNSC = \u03b1 MBG, (1) whereby with the transition between NSC and SMBH occurs at MB \u223c\u221220.5 or stellar mass MBG0 = (3\u22124)\u00d71010 M\u2299, and \u03b1 = 2.5\u00d710\u22123 (C\u00f4t\u00e9 et al. 2006; Ferrarese et al. 2006; Wehner & Harris 2006; Turner et al. 2012). One may express regulation of the growth of bulges as eBH MBH + eNSC MNSC + eBG MBG = f\u03c3\u03b2 MBG, (2) where eBH, eNSC and eBG are the feedback strength coef\ufb01cients per unit mass of the respective components exerted on the stellar bulge and the ejected gas mass is equal to fMBG; \u03c3 is the velocity dispersion of the stellar bulge; \u03b2 is a parameter that absorbs uncertainties regarding the dynamics of concerned feedback processes, with \u03b2 = 2 for energy-conserving feedback (eBH, eNSC and eBG have units of energy per unit mass) and \u03b2 = 1 for momentum-conserving \f\u2013 3 \u2013 feedback (eBH, eNSC and eBG have units of momentum per unit mass). Note that a signi\ufb01cant feedback regulation means f \u226b1. Insights can be gained by asking the following question: Can the feedback from SMBH and NSCs conspire to regulate the growth of the stellar bulge, i.e., eBH MBH + eNSC MNSC = f\u03c3\u03b2 MBG? (3) The single powerlaw relation between MCMO and MBG across four decades in bulge mass can be understood, only if the negative feedback per unit stellar mass of the NSC and of the SMBH are approximately the same, eBH \u2248eNSC, barring the unknown physical reason for the right hand side of Eq (3) the required amount of notional feedback to regulate the bulge growth to change character abruptly at MBG = MBG0. Although having eBH \u2248eNSC may be possible, it would render a negative answer to the question above (Eq 3), as follows. In the momentum driven regime, since the feedback from the nuclear cluster is subject to higher densities and shorter cooling timescales hence diminished strength in comparison to that in the stellar bulge, i.e., eBG > eNSC. In the energy driven feedback scenario, eBG = eNSC. Since MNSC \u226aMBG, the supernova feedback from stars in the bulge would vastly exceed that from the NSC. This thus invalidates the statement that the NSC and SMBH provide the necessary feedback to regulate the growth of the bulge. 
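For reference, the three relations invoked in this section, which the text extraction above has run together, can be restated cleanly in display form (a transcription only, using the same symbols as the text, with alpha = 2.5 x 10^-3, and beta = 1 for momentum-conserving or beta = 2 for energy-conserving feedback):

```latex
% Transcription of Eqs. (1)-(3) above; same symbols as in the text, no new content.
\begin{align}
  M_{\rm CMO} &= M_{\rm BH} + M_{\rm NSC} = \alpha\, M_{\rm BG}, \tag{1}\\
  e_{\rm BH}\, M_{\rm BH} + e_{\rm NSC}\, M_{\rm NSC} + e_{\rm BG}\, M_{\rm BG} &= f\,\sigma^{\beta}\, M_{\rm BG}, \tag{2}\\
  e_{\rm BH}\, M_{\rm BH} + e_{\rm NSC}\, M_{\rm NSC} &\stackrel{?}{=} f\,\sigma^{\beta}\, M_{\rm BG}. \tag{3}
\end{align}
```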
The only scenario left for the SMBH to regulate the bulge growth is to force eNSC = 0 and assume the feedback per unit SMBH mass, while constant at MBG > MBG0, to become negligible at about MBG = MBG0. In both the momentum (\u03b2 = 1, Ostriker et al. 2010) and energy feedback scenario (\u03b2 = 2, Faucher-Gigu\u00e8re & Quataert 2012), the amount of momentum or energy per unit SMBH mass, eBH, is ultimately proportional to the driving energy (\u221dMBHc2, where c is speed of light). Thus, there exists no known process to suddenly make eBH drop to zero at some speci\ufb01c MBH, while being constant otherwise. If negative feedback is needed to internally regulate the bulge, the only alternative left is stellar feedback from bulge stars themselves, i.e., eBG = f\u03c3\u03b2. (4) Under the assumption that the feedback strength from stars per unit mass (eBG) is constant, one obtains f \u221d\u03c3\u2212\u03b2, which has the same dependence on \u03c3 as the predicted mass loading factors for both momentum (\u03b2 = 1) or energy (\u03b2 = 2) driven winds (e.g., Murray et al. 2005). Therefore, bulge self-regulation, if required, would be physically supportable and selfconsistent. If bulge is self-regulated, then, under the assumption that eNSC = eBG, NSC may also be self-regulated. The correlation between MMCO and MBG would then require that the mass loading factor for the SMBH is the same as for the NSC, i.e., eBH = eNSC, which is a \ufb01ne-tuned outcome. In the absence of inter-regulation between CMOs and bulges, the proportions of the amount of gas feeding the nuclear and bulge regions must be proportional to the observed MCMO/ MBG ratio. \f\u2013 4 \u2013 3. An Alternative Scenario: Rationed Cold Gas Supply to Nuclear and Bulge Regions Over Cosmological Time Scales Our arguments in the previous section indicate that the observed MCMOMBG correlation requires the same proportionality in the initial amounts of gas feeding the respective regions, averaged over the cosmological time scales. We test this scenario using direct cosmological simulations. 3.1. Simulation Characteristics See Cen (2014) for a more detailed description of the ab initio LAOZI simulations. Brie\ufb02y, we use the WMAP7-normalized (Komatsu et al. 2011) \u039bCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100h km s\u22121Mpc\u22121 = 70 km s\u22121Mpc\u22121 and n = 0.96. A zoom-in box of size 21 \u00d7 24 \u00d7 20h\u22123Mpc3 comoving is embedded in a 120 h\u22121Mpc periodic box. The maximum resolution is better than 111h\u22121pc (physical) at all times. Star formation follows the prescription of Cen & Ostriker (1992). Supernova feedback from star formation is modeled following Cen et al. (2005) with feedback energy being distributed into 27 local gas cells weighted by the speci\ufb01c volume of each cell, to mimic the process of supernova blastwave propagation to channel more energy into the less dense regions. We exclude AGN feedback in order to ascertain the lack of need for it. 3.2. Construction of Gas Feeding Histories of Simulated Galaxies Galaxies are identi\ufb01ed using the HOP algorithm (Eisenstein & Hut 1998) grouping stellar particles. Galaxy catalogs are constructed from z = 0.62 to z = 1.40 with an increment of \u2206z = 0.02 and from z = 1.40 to z = 6 with \u2206z = 0.05, having a temporal resolution of 30\u2212150Myr. 
For each galaxy at z = 0.62 a genealogical line is constructed up to z = 6, where the parent of each galaxy is identi\ufb01ed with the one at the next higher redshift with the most overlap in stellar mass. At each redshift, we compute the amount (Mc) and mean speci\ufb01c angular momentum (Jc) of gas in the central 1kpc region. To proceed, an ansatz is made: the gas mass with angular momentum lower than Jn is Mc(\u03b2Jn/(1 + \u03b2)Jc)\u03b2. We use \u03b2 = 1, which corresponds to a Mestel (1963) disc of surface density \u03a3(r) \u221dr\u22121. \u03b2 = 1 is motivated by simulations of Hopkins & Quataert (2010, 2011) with resolution as high as 0.1pc. Figure 12 of Hopkins & Quataert (2010) shows that the evolved density runs of the gas discs, on average, follow the \u03a3(r) \u221dr\u22121 pro\ufb01le from 0.1pc to 1kpc. In all of the six individual cases with signi\ufb01cant gas in\ufb02ow, shown in Figures (2, 3) of Hopkins & Quataert (2011), the \u03a3(r) \u221dr\u22121 pro\ufb01le provides an excellent \ufb01t. We compute the 1-d stellar velocity dispersion \u03c3 within the effective radius for each galaxy in the simulation at any redshift and assume an SMBH of \f\u2013 5 \u2013 mass equal to MBH = 108 M\u2299(\u03c3/200 km/s)4 (Tremaine et al. 2002). The Bondi radius is rB = 2G MBH/3\u03c32 = 7.2pc(\u03c3/200 km/s)2, (5) and the speci\ufb01c angular momentum at rB is JB = \u221a 2rB\u03c3. (6) The gas landing within r0 is assumed to accrete to the SMBH, where at r > r0 the disc has Toomre Q parameter below unity and is hence consumed by star formation. Expressing various parameters by their \ufb01ducial values, we have r0 = 0.42(\u03b1/0.1)2/5(lE/0.1)\u22122/5( MBH/108 M\u2299)3/25(Ma/0.1)14/25(\u03ba/\u03bae)4/25 pc (7) (Eq 42, Goodman 2003), where \u03b1 is radiative ef\ufb01ciency, lE luminosity in Eddington units, Ma Mach number of the viscous disc at r0, and \u03ba and \u03bae opacity and electron-scattering opacity, respectively. Hence the feeding rate to the accretion disc that eventually accretes to the SMBH is \u02d9 Mfeed = Mc((r0/rB)1/2JB/Jc)t\u22121 dyn, (8) where the angular momentum at r0 is J0 = (r0/rB)1/2JB for a Keplerian disc and tdyn = 1kpc/ \u221a 3\u03c3 is the free-fall time at 1kpc. For our analysis, we use r0 = 0.42( MBH/108 M\u2299)3/25 pc, (9) bearing in mind that uncertainties are at least on the order of unity. To see how uncertainty in \u03b2 affects results, we note, a 25% deviation in \u03b2 from unity causes \u02d9 Mfeed in Eq (8) to change by a factor of 2.7, which can be compensated by adjusting each of the parameters in Eq (7) except MBH by a factor of 2.5 appropriately. 3.3. Results We de\ufb01ne a ratio R \u2261500 \u02d9 Mfeed/SFR (SFR is the star formation rate) such that, if R is about unity, the observed SMBH to bulge mass ratio of \u223c0.2% (e.g., Marconi & Hunt 2003; H\u00e4ring & Rix 2004) would be borne out. Transformation from stellar disc(s) to a bulge is not addressed here. It is noted, however, that stellar discs formed from multiple gas in\ufb02ows of inclined angles over the lifetime of a galaxy may be conducive to bulge formation. Note that SFR is computed directly during the simulation, whereas the SMBH accretion rate is computed in post-processing by evaluating Eq (8). Figure 1 shows histories of \u02d9 Mfeed (blue) and R (red) for four random example galaxies. The most noticeable feature is that, without any intentional tuning, R hovers close to unity with \ufb02uctuations of order unity. 
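The chain of relations above (the adopted M-sigma normalization together with Eqs. 5-9 and the beta = 1 ansatz) can be collected into a short numerical sketch. This is a transcription for illustration only, not the authors' analysis code; the function name, the unit bookkeeping, and the placeholder input values at the bottom are assumptions of this sketch.

```python
# Sketch of the feeding-rate estimate of Sections 3.2-3.3 (a transcription of Eqs. 5-9,
# not the paper's code). Inputs: central (<1 kpc) gas mass Mc, its mean specific
# angular momentum Jc, the stellar velocity dispersion sigma, and the SFR.
import numpy as np

KM_PER_KPC = 3.086e16     # km in one kpc
SEC_PER_YR = 3.156e7

def feeding_ratio(Mc_msun, Jc_kpc_kms, sigma_kms, sfr_msun_yr):
    Mbh = 1e8 * (sigma_kms / 200.0) ** 4                 # M-sigma relation (Tremaine et al. 2002)
    rB_pc = 7.2 * (sigma_kms / 200.0) ** 2               # Bondi radius, Eq. (5)
    JB = np.sqrt(2.0) * (rB_pc * 1e-3) * sigma_kms       # Eq. (6), in kpc km/s
    r0_pc = 0.42 * (Mbh / 1e8) ** (3.0 / 25.0)           # disc stability radius, Eq. (9)
    J0 = np.sqrt(r0_pc / rB_pc) * JB                     # Keplerian specific J at r0
    tdyn_yr = KM_PER_KPC / (np.sqrt(3.0) * sigma_kms) / SEC_PER_YR  # free-fall time at 1 kpc
    Mdot_feed = Mc_msun * (J0 / Jc_kpc_kms) / tdyn_yr    # Eq. (8), in Msun/yr
    return Mdot_feed, 500.0 * Mdot_feed / sfr_msun_yr    # (Mdot_feed, R)

# arbitrary placeholder inputs, purely to show the call signature (not from the simulation):
print(feeding_ratio(Mc_msun=1e9, Jc_kpc_kms=100.0, sigma_kms=150.0, sfr_msun_yr=10.0))
```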
Figure 2 shows R as a function of redshift. We see that R increases with increasing redshift from \u223c0.7 at z = 0.6 \u22121 to \u223c1.5 at z = 3 \u22124 for galaxies with 1010.5\u221211 M\u2299(green), \f\u2013 6 \u2013 0.5 1 2 3 4 5 -4 -3 -2 -1 0 1 log M*=11.96 0.5 1 2 3 4 5 -4 -3 -2 -1 0 1 log M*=11.46 z 0.5 1 2 3 4 5 log \u02d9 Mfeed(M\u2299/yr)(blue) & log 500 \u02d9 Mfeed/SFR (red) -4 -3 -2 -1 0 1 log M*=11.18 z 0.5 1 2 3 4 5 -4 -3 -2 -1 0 1 log M*=10.8 Fig. 1.\u2014 shows histories of the feeding rate \u02d9 Mfeed (blue) and R \u2261500 \u02d9 Mfeed/SFR (red) for four random galaxies. The logarithm of the stellar mass for each galaxy at z = 0.62 is indicated at the top of each panel. with similar trends for other mass ranges. We highlight three implications. First, the observed SMBH to bulge ratio is readily achievable in a cosmological setting, with a slight tendency for more massive galaxies to have higher R. This is due to the rationing of gas supply to the central regions of galaxies: a small amount of gas of the lowest angular momentum feeds the SMBH accretion disc, while the rest builds up the stellar bulge, with the demarcation line determined by the accretion disc stability condition. Note that our analysis is solely based on the angular momentum distribution of gas that has already landed in the central 1kpc region. The frequency of gas in\ufb02ow events into the central regions and the mass distribution of events are computed directly in our simulations. Second, R increases with increasing redshift, to within a factor of \u223c2. The trend with redshift is expected in a cosmological context, because both the frequency and strength of galaxy interactions increase with increasing redshift, yielding overall in\ufb02ow gas of lower angular momentum hence a larger R at high redshift. Third, the smoothness of R on cosmological time scales (\u2265100Myr) suggests that the dispersion of R is modest, around order unity, at all redshifts, consistent with the dispersion of the observed \f\u2013 7 \u2013 redshift 1 2 3 500 \u02d9 MBH/SFR 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 2.2 log M*=10.5-11 log M*=10-10.5 log M*= 9.5-10 Fig. 2.\u2014 shows the median of R as a function of redshift, separately for three stellar mass ranges 109.5\u221210 M\u2299(red), 1010\u221210.5 M\u2299(blue) and 1010.5\u221211 M\u2299(green). The stellar mass is measured at the redshift in question. The vertical errorbars indicate the interquartile range, whereas the horizontal errorbars represent the redshift range of the bin. The red and blue points are horizontally slightly right-shifted for clarity of display. There are (659, 2214) galaxies with stellar mass in the range 1010.5\u221211 M\u2299for z = (3 \u22124, 0.62 \u22121), respectively. correlation locally (note that the comparison is made between computed \u02d9 Mfeed/SFR and observed MBH/ MBG). Future observations at high redshift may be able to test these predictions. Although R is relatively smooth over cosmological time scales, the gas in\ufb02ow rate varies up to an order of magnitude (Figure 1). The \ufb02uctuations in the in\ufb02ow rate are caused by a variety of physical processes, including interactions between galaxies in close proximity, minor mergers and occasional major mergers. We have not studied in suf\ufb01cient detail to ascertain whether secular processes play any major role. Is SMBH accretion rate directly dictated by the feeding rate from galactic scales? 
Figure 3 shows the probability distribution of feeding rate in units of Eddington rate as a function of Eddington ratio. The Eddington ratio is based on the assumed MBH from the observed MBH \u2212\u03c3 relation. At z \u223c0.6 where comparisons with observations may be made, the com\f\u2013 8 \u2013 log \u03f5E -3 -2 -1 0 log dP/dlog \u03f5E -4 -3 -2 -1 0 M\u2217= 1010.5\u221211M\u2299 z=0.6-1 z=1-2 z=2-3 z=3-4 obs at z~0.6: Aird+12 Fig. 3.\u2014 shows the probability distribution of feeding rate in units of Eddington rate per logarithmic Eddington ratio interval, as a function of Eddington ratio, in four redshift ranges, z = 0.62 \u22121 (solid red), z = 1 \u22122 (dotted blue), z = 2 \u22123 (dashed green), and z = 3 \u22124 (dot-dashed black) for galaxies in the stellar mass range of 1010.5\u221211 M\u2299(other stellar mass ranges have similar properties). Also show as solid dots is the observed powerlaw distribution with a slope of \u223c\u22120.6 at z \u223c0.6 from Aird et al. (2012). The slope of the solid red curve is \u22123.3 measured for the log eE range from \u22122.3 to \u22121.6 indicated by the red dashed line. puted distribution is steeper, computed slope \u22123.3 versus \u22120.60 observed. This indicates that accretion onto the SMBHs is \u201c\ufb01ltered\" through physical processes operating on the accretion disc. This suggests that temporal correlation between AGN and star formation activities in individual galaxies below 30 \u2212150Myr is expected to be weak, in excellent agreement with observations (e.g., Hickox et al. 2014). A comparison between the distribution of the feeding rate to the accretion disc (red curve) and that of the observed Eddington ratio (black dots) suggests that at z \u223c0.6 accretion discs around SMBHs spend most of the time accumulating gas, at feeding rate below 1% Eddington ratio and that the apparent powerlaw distribution of Eddington ratio may be a result of superposition of AGN internal light pro\ufb01les that are universal in shape (i.e., slope of \u223c\u22120.6). We see that the computed feeding rate distribution shifts to \f\u2013 9 \u2013 the right \u223c0.5 dex per unit redshift, indicating that the duty cycle of luminous AGNs increases with redshift. 4." + }, + { + "url": "http://arxiv.org/abs/1502.04026v1", + "title": "Quantifying Distributions of Lyman Continuum Escape Fraction", + "abstract": "Simulations have indicated that most of the escaped Lyman continuum photons\nescape through a minority of solid angles with near complete transparency, with\nthe remaining majority of the solid angles largely opaque, resulting in a very\nbroad and skewed probability distribution function (PDF) of the escape fraction\nwhen viewed at different angles. Thus, the escape fraction of Lyman continuum\nphotons of a galaxy observed along a line of sight merely represents the\nproperties of the interstellar medium along that line of sight, which may be an\nill-representation of true escape fraction of the galaxy averaged over its full\nsky. Here we study how Lyman continuum photons escape from galaxies at $z=4-6$,\nutilizing high-resolution large-scale cosmological radiation-hydrodynamic\nsimulations. We compute the PDF of the mean escape fraction ($\\left$) averaged over mock observational samples, as a function of the\nsample size, compared to the true mean (had you an infinite sample size). We\nfind that, when the sample size is small, the apparent mean skews to the low\nend. 
For example, for a true mean of 6.7%, an observational sample of (2,10,50)\ngalaxies at $z=4$ would have have 2.5% probability of obtaining the sample mean\nlower than $\\left=$(0.007%, 1.8%, 4.1%) and 2.5%\nprobability of obtaining the sample mean being greater than (43%, 18%, 11%).\nOur simulations suggest that at least $\\sim$ 100 galaxies should be stacked in\norder to constrain the true escape fraction within 20% uncertainty.", + "authors": "Renyue Cen, Taysun Kimm", + "published": "2015-02-13", + "updated": "2015-02-13", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction A fraction of the Lyman continuum (LyC) photons generated by young massive stars is believed to escape from the host galaxies to enter the intergalactic space. This is a fundamental quantity to determine the epoch and pace of cosmological reionization, provided that the universe is reionized by stars (e.g., Gnedin 2000; Cen 2003). After the completion of cosmological reionization, it plays another important role in determining the ultra-violet (UV) radiation background (on both sides of the Lyman limit) in conjunction with another major source of UV photons quasars that progressively gains importance at lower redshift (e.g., Faucher-Gigu` ere et al. 2008; Fontanot et al. 2014). Observations of star-forming galaxies at high redshifts (z \u223c3) suggest a wide range of the escape fraction of ionizing photons. While only a small fraction of LyC photons (\u2272a few percent) escapes from their host galaxies in the majority of the Lyman break galaxy samples, a non-negligible 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu 2Princeton University Observatory, Princeton, NJ 08544; kimm@astro.princeton.edu arXiv:1502.04026v1 [astro-ph.GA] 13 Feb 2015 \f\u2013 2 \u2013 number of them (\u223c10%) shows high levels of LyC \ufb02ux corresponding to \u27e8fesc,1D\u27e9\u223c10% (Shapley et al. 2006; Iwata et al. 2009; Nestor et al. 2011, 2013; Mostardi et al. 2013). Cooke et al. (2014) claim that the mean escape fraction may be even higher (\u27e8fesc, 1D\u27e9\u223c16%) if the observational sample is not biased toward the galaxies with a strong Lyman limit break. It is not well understood quantitatively, however, what the probability distribution function (PDF) of the LyC escape fraction is and how a limited observational sample size with individually measured escape fractions can be properly interpreted, because of both possible large variations of the escape fraction from sightline to sightline for a given galaxy and possible large variations from galaxy to galaxy. The purpose of this Letter is to quantify how LyC photons escape, in order to provide a useful framework for interpreting and understanding the true photon escape fraction given limited observational sample sizes. 2. Simulations To investigate how LyC photons escape from their host halos, we make use of the cosmological radiation hydrodynamic simulation performed using the Eulerian adaptive mesh re\ufb01nement code, ramses (Teyssier 2002; Rosdahl et al. 2013, ver. 3.07). The reader is referred to Kimm & Cen (2014, , the FRU run) for details, where a detailed prescription for a new, greatly improved treatment of stellar feedback in the form of supernova explosion is given. 
Speci\ufb01cally, the new feedback model follows the dynamics of the explosion blast waves that capture the solution for all phases (from early free expansion to late snowplow), independent of simulation resolution and allow for anisotropic propagation. The initial condition for the simulation is generated using the MUSIC software (Hahn & Abel 2011), with the WMAP7 parameters (Komatsu et al. 2011): (\u2126m, \u2126\u039b, \u2126b, h, \u03c38, ns = 0.272, 0.728, 0.045, 0.702, 0.82, 0.96). We adopt a large volume of (25Mpc/h)3 (comoving) to include the e\ufb00ect of large-scale tidal \ufb01elds on the galaxy assembly. The entire box is covered with 2563 root grids, and high-resolution dark matter particles of mass Mdm = 1.6 \u00d7 105 M\u2299are employed in the zoomed-in region of 3.8 \u00d7 4.8 \u00d7 9.6 Mpc3. We allow for 12 more levels of grid re\ufb01nement based on the density and mass enclosed within a cell in the zoomed-in region to have a maximum spatial resolution of 4.2 pc (physical). Star formation is modeled by creating normal and runaway particles in a dense cell (nH \u2265100 cm\u22123) with the convergent \ufb02ow condition (Kimm & Cen 2014, the FRU run). The minimum mass of a normal (runaway) star particle is 34.2 M\u2299(14.6 M\u2299). We use the mean frequency of Type II supernova explosions of 0.02 M\u2299 \u22121, assuming the Chabrier initial mass function. Dark matter halos are identi\ufb01ed using the HaloMaker (Tweed et al. 2009). Eight consecutive snapshots are analyzed at each redshift (3.96 \u2264z \u22644.00, 4.92 \u2264z \u22645.12, and 5.91 \u2264z \u22646.00) to increase the sample size in our calculations. At each snapshot there are \u2248 142, 137, and 104 halos in the halo mass range of 109 \u2264Mvir < 1010 M\u2299, and 15, 10, and 7 halos with mass Mvir \u22651010 M\u2299. The most massive galaxy at z = 4 (5, 6) has stellar mass of 1.6\u00d7109 M\u2299 (6.0 \u00d7 108, 2.5 \u00d7 107 M\u2299), and host halo mass 8.8 \u00d7 1010 M\u2299(5.2 \u00d7 1010, 4.1 \u00d7 1011 M\u2299) The escape fraction is computed as follows. We cast 768 rays per star particle and follow their \f\u2013 3 \u2013 propagation through the galaxy. Each ray carries the spectral energy distribution (SED), including its LyC emission, determined using Sturburst99 (Leitherer et al. 1999), given the age, metallicity, and mass of the star particle. The LyC photons are attenuated by neutral hydrogen (Osterbrock & Ferland 2006) and SMC-type dust (Draine et al. 2007) in the process of propagation. For a conservative estimate, we assume the dust-to-metal ratio of 0.4. We also simply assume that dust is destroyed in hot gas (T > 106 K). We note that attenuation due to dust is only signi\ufb01cant in the most massive galaxy (Mstar = 1.1 \u00d7 109 M\u2299, \u03c4d = 0.58) in our sample. The second most massive galaxy (Mstar = 3.6 \u00d7 108 M\u2299) shows \u03c4d = 0.29, meaning that it reduces the number of photons by only < 30%. Given that the dust-to-metal ratio is even smaller than 0.4 in low-metallicity systems (Lisenfeld & Ferrara 1998; Engelbracht et al. 2008; Galametz et al. 2011; Fisher et al. 2013), it is likely that the attenuation by dust is even less signi\ufb01cant in our simulated galaxies. We de\ufb01ne the true escape fraction of the galaxy as the ratio of the sum of all outward \ufb02uxes at the virial sphere to the sum of the initially emitted \ufb02uxes of all stellar particles in the galaxy; we shall call this fesc,3D. 
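A minimal sketch of the ray-based escape-fraction estimate described above may help fix ideas. The actual calculation is performed on the AMR grid inside ramses; here the HI cross-section value, the illustrative dust opacity, the randomly drawn ray directions, and the fixed integration length of one virial radius are all simplifying assumptions of this sketch.

```python
import numpy as np

SIGMA_HI = 6.3e-18      # cm^2, HI photoionization cross-section at the Lyman limit
KAPPA_DUST = 1.0e-21    # cm^2 per absorber, illustrative SMC-like normalization (assumed)

def fesc_3d(stars, n_HI, n_dust, r_vir_cm, n_dir=768, n_step=256, seed=0):
    """stars: iterable of (position [cm], LyC photon rate); n_HI, n_dust: callables
    giving local densities [cm^-3] at a 3D point. Returns the angle-averaged f_esc,3D."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dir, 3))
    dirs /= np.linalg.norm(dirs, axis=1)[:, None]         # isotropic unit vectors
    ds = r_vir_cm / n_step                                # path length ~ r_vir (simplification)
    emitted = escaped = 0.0
    for pos, ndot in stars:
        emitted += ndot
        for d in dirs:
            s = (np.arange(n_step) + 0.5) * ds            # sample points along this ray
            pts = pos[None, :] + s[:, None] * d[None, :]
            tau = ds * sum(SIGMA_HI * n_HI(p) + KAPPA_DUST * n_dust(p) for p in pts)
            escaped += (ndot / n_dir) * np.exp(-tau)      # photons surviving along this ray
    return escaped / emitted
```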
In addition, an observer at in\ufb01nity at a random point in the sky of the galaxy collects all LyC \ufb02uxes and de\ufb01nes the escape fraction along that particular line of sight; this is called fesc,1D. 3. Probability Distribution Functions of LyC Photon Escape Fraction It is useful to give a qualitative visual illustration of how LyC photons may escape from galaxies at z = 4. Figure 1 shows three examples of an all-sky map the sky an observer sitting at the center of the galaxy would see of the neutral hydrogen column density. We note that 8 dex of dynamical range is plotted and recall that at the Lyman limit a neutral hydrogen column density of \u223c1017cm\u22122 would provide an optical depth of \u223c1. As a result, LyC photons can only escape through highly ionized or evacuated \u201choles\u201d indicated by dark blue colors on the maps and the transition from near transparency to very opaque is fast. This indicates that the escaping LyC photons are dominated by those that escape through completely unobscured channels and the amount of escaped LyC photons for a given galaxy depends strongly on the direction. Moreover, it is evident that, in addition to large variations from position to position on the sky for a given galaxy, there are large variations of the overall column density structures from galaxy to galaxy. For example, the galaxy in the top-left panel shows no transparent sky patches at all, which is typical for galaxies during times of intense starburst as shown in Kimm & Cen (Figure 4 2014). On the other hand, the galaxy in the bottom panel has large swaths of connected transparent patches that cover nearly one half of the sky, typical for galaxies at periods following the blowout of gas subsequent to intense starburst (Figure 4 Kimm & Cen 2014). This qualitative behavior is also found earlier in independent simulations by Wise & Cen (2009). Let us now turn to more quantitative results. Figure 2 shows the probability distribution of the apparent escape fraction for massive halos (top) and less massive halos (bottom) at z = 4 (left column) and z = 6 (right column). Black histograms show the distribution of the true (3D) escape fraction of each sample (i.e., from the viewpoint of the overall intergalactic medium), while red histograms show the PDF of the apparent escape fraction (i.e., from the point of view of observers placed at a far distance). Note that the distribution of the true escape fraction is noisier than that of \f\u2013 4 \u2013 16.0 24.0 log NHI 16.0 24.0 log NHI 16.0 24.0 log NHI Fig. 1.\u2014 shows three examples of all-sky maps the sky an observer sitting at the center of the galaxy would see of the neutral hydrogen for most massive (Mvir = 7.8 \u00d7 1010 M\u2299, top left panel), second massive (6.1 \u00d7 1010 M\u2299, top right panel), and a smaller halo (1.8 \u00d7 109 M\u2299, bottom). The observer is placed at the center of the halo. Note that the actual escape fraction presented later is computed by ray-tracing LyC photons of all stellar particles spatially distributed through the clumpy interstellar medium until escaping through the virial sphere. The true escape fraction of LyC photons of these halos are 5.4%, 12%, and 5.0%, respectively. the apparent escape fraction due to the smaller sample size for the former, because for (3D) escape fraction each galaxy is counted once but for the apparent escape fraction each galaxy is sampled many times. 
In terms of the mean escape fraction, there is a trend that, at a given redshift, the galaxies embedded in more massive halos tend to have a lower mean escape fraction. There is also a weak trend that the escape fraction increases with redshift. For example, the true (3D) median escape fractions are (7.0%, 9.5%) for the halos of masses (\u226510^10, 10^9\u221210^10) M\u2299, respectively, at z = 4; the true (3D) median escape fractions are (8.8%, 29%) for the halos of masses (\u226510^10, 10^9\u221210^10) M\u2299, respectively, at z = 6. Upon a close examination we suggest that the redshift dependence can be attributed, in part, to the following findings. At a given halo mass, the specific star formation rate decreases with decreasing redshifts at 4 \u2264 z \u2264 6. As star formation becomes less episodic at lower redshifts, it takes longer to blow out the star-forming clouds via SNe. Consequently, a larger fraction of LyC photons is absorbed by their birth clouds. We also find that the specific star formation rate does not change notably at z > 6 while the mean density [Figures 2 and 3 (panels): PDFs of log fesc,1D (red) and true fesc,3D (black), split by halo mass (Figure 2) and by star formation rate (Figure 3), at z~4 and z~6; the median of each distribution and the number of galaxies in each subsample are marked in the panels.] Fig. 3.\u2014 is similar to Figure 2, except the galaxy sample is subdivided according to their star formation rates, SFR = 0.3 to 10 M\u2299/yr (top panel), 0.01 to 0.3 M\u2299/yr (middle panel), and < 0.01 M\u2299/yr (bottom panel). Evidently, the distribution of the apparent LyC escape fraction is very broad and skewed toward the lower end. The reason for this behavior is understandable. In the case of the galaxies with low fesc,3D values, the LyC photons escape normally through transparent holes with small solid angles. Since not all of these holes are seen by an observer, the distribution of fesc,1D for individual galaxies tends to get skewed toward the lower end of the distribution. As a result, the medians of the two distributions, shown as arrows in Figure 2, are about a factor of \u223c2 smaller than the mean. More importantly, it suggests that an observational sample of limited size may underestimate the true mean escape fraction. The top two panels of Figure 4 show the probability distribution function of the apparent mean for a given observational sample size Nstack for the high-mass (top) and low-mass (bottom) sample, respectively. We compute the apparent mean of a sample of galaxies using the LyC photon (or SFR)-weighted mean escape fraction, which is exactly equivalent to stacking the galaxies. The bottom two panels of Figure 4 are similar to the top two panels, for the subsamples with different star formation rates. What we see in these figures is that the probability distribution is rather broad. 
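The stacking experiment behind Figure 4 can be emulated in a few lines. The sketch below is an illustrative re-implementation under stated assumptions (one random sightline per galaxy, the array layout, and the function name are not from the paper), not the procedure actually used to produce the figure.

```python
# Sketch: draw N_stack galaxies at random, pick one random sightline per galaxy, and
# form the LyC-photon-weighted mean escape fraction; repeating this many times gives
# the PDF of the apparent (stacked) mean for that sample size.
import numpy as np

def apparent_mean_pdf(ndot_lyc, fesc_1d, n_stack, n_trial=10000, seed=0):
    """ndot_lyc: (N_gal,) LyC photon production rates; fesc_1d: (N_gal, N_sightline)
    per-sightline escape fractions. Returns n_trial apparent (stacked) means."""
    rng = np.random.default_rng(seed)
    n_gal, n_los = fesc_1d.shape
    means = np.empty(n_trial)
    for t in range(n_trial):
        g = rng.integers(n_gal, size=n_stack)      # which galaxies are observed
        s = rng.integers(n_los, size=n_stack)      # one random sightline each
        means[t] = np.average(fesc_1d[g, s], weights=ndot_lyc[g])
    return means

# e.g. the 2.5/50/97.5 percentiles of the apparent mean for a sample of 10 galaxies:
# lo, med, hi = np.percentile(apparent_mean_pdf(ndot, fesc, n_stack=10), [2.5, 50, 97.5])
```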
It is thus clear that it is not a robust exercise to try to infer the mean escape fraction based on a small sample (\u226410) of galaxies, whether individually measured or through stacking. Table 1 provides a quantitative assessment of the uncertainties, which shows the 1 and 2\u03c3 \f\u2013 7 \u2013 probability intervals of fractional lower and upper deviations from the true mean escape fraction. Some relatively mild trends are seen that are consistent with earlier observations of the \ufb01gures. Speci\ufb01cally, the convergence to the true mean escape fraction in terms of sample sizes is faster towards high redshift, towards higher halo mass, and towards higher star formation rates. Let us take a few numerical examples. We see that with a sample of 50 galaxies of halo mass in the range of (1010 \u22121011) M\u2299at z = 4 the 2\u03c3 fractional range of the escape fraction is 58% to 159%, which improves to a range of 68% to 140% when a sample of 100 galaxies is used. Note that the observations of Mostardi et al. (2013) have 49 Lyman break galaxies and 91 Lyman alpha emitters at z \u223c2.85. At z = 6 for the (1010 \u22121011) M\u2299halo mass range, we see that with a sample of 20 galaxies, the 2\u03c3 fractional range of the escape fraction is 59% to 161%, comparable to that of a sample of 50 galaxies at z = 4, as a result of bene\ufb01ting from the faster convergence at higher redshift. On the other hand, at z = 5 for the (0.3 \u221210) M\u2299yr\u22121 star formation rate range, the 2\u03c3 fractional range of the escape fraction is 56% to 163% with a sample of 20 galaxies, which is improved to 71% to 137% with a sample of 50 galaxies. Finally, we note that the actual observed Lyman continuum escape fraction has additionally su\ufb00ered from possible absorbers in the intergalactic medium, primarily Lyman limit systems. Since the background galaxy and the foreground absorbers are physically unrelated, we may consider the e\ufb00ects from the internal factors in galaxies and those from the intergalactic medium completely independent. Thus, in this case, assuming no knowledge of the foreground absorbers, the overall distribution would be the convolution of the two, resulting a still broader overall distribution than derived above considering internal factors alone. In reality, however, one may be able to remove, to a large degree, the Lyman continuum opacity due to intergalactic absorbers by making use of a tight correlation between Ly\u03b1 and LyC absorption (Inoue & Kamaya 2008). 
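The last point can be made concrete with a toy calculation: if the internal escape fraction and the IGM Lyman continuum transmission along a sightline are independent, the observed quantity is their product, so its distribution in log space is the convolution of the two and is broader than either one. The lognormal placeholders below are arbitrary and serve only to illustrate the broadening.

```python
import numpy as np

rng = np.random.default_rng(1)
# arbitrary placeholder distributions (not fits to any data):
log_fesc = rng.normal(-1.2, 0.7, size=100_000)   # log10 of the internal escape fraction
log_tigm = rng.normal(-0.2, 0.3, size=100_000)   # log10 of the IGM LyC transmission
log_fobs = log_fesc + log_tigm                   # observed value = product of the two factors
print(log_fesc.std(), log_tigm.std(), log_fobs.std())   # widths add in quadrature
```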
\f\u2013 8 \u2013 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 Nstack=1 Nstack=2 Nstack=5 Nstack=10 Nstack=20 Nstack=50 Nstack=100 Nstack=1000 =0.067 1010)Mvir<1011MO \u2022 -4 -3 -2 -1 0 log 0.0 0.2 0.4 0.6 0.8 1.0 =0.11 109)Mvir<1010MO \u2022 cumulatative PDF z~4 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 Nstack=1 Nstack=2 Nstack=5 Nstack=10 Nstack=20 Nstack=50 Nstack=100 Nstack=1000 =0.069 1010)Mvir<1011MO \u2022 -4 -3 -2 -1 0 log 0.0 0.2 0.4 0.6 0.8 1.0 =0.075 109)Mvir<1010MO \u2022 cumulatative PDF z~6 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 Nstack=1 Nstack=2 Nstack=5 Nstack=10 Nstack=20 Nstack=50 Nstack=100 Nstack=1000 =0.078 0.3)SFR<10 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 =0.090 0.01)SFR<0.3 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 =0.099 SFR<0.01 cumulatative PDF log z~4 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 Nstack=1 Nstack=2 Nstack=5 Nstack=10 Nstack=20 Nstack=50 Nstack=100 Nstack=1000 =0.063 0.3)SFR<10 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 =0.085 0.01)SFR<0.3 -4 -3 -2 -1 0 0.0 0.2 0.4 0.6 0.8 1.0 =0.030 SFR<0.01 cumulatative PDF log z~6 Fig. 4.\u2014 Top two panels show the probability distribution function of the apparent mean for a given observational sample size Nstack for the high mass (top) and low mass (bottom) sample, respectively. The mean is computed by weighting the number of photons produced in each galaxies to mimic the stacking of the SED in observations. The true mean of the distribution is denoted in each panel. Bottom two panels are the same as the top two panels, but for the subsamples with di\ufb00erent star formation rates, as indicated in the legend. \f\u2013 9 \u2013 4." + }, + { + "url": "http://arxiv.org/abs/1412.4075v1", + "title": "A New Model for Dark Matter Halos Hosting Quasars", + "abstract": "A new model for quasar-hosting dark matter halos, meeting two physical\nconditions, is put forth. First, significant interactions are taken into\nconsideration to trigger quasar activities. Second, satellites in very massive\nhalos at low redshift are removed from consideration, due to their deficiency\nof cold gas. We analyze the {\\em Millennium Simulation} to find halos that meet\nthese two conditions and simultaneously match two-point auto-correlation\nfunctions of quasars and cross-correlation functions between quasars and\ngalaxies at $z=0.5-3.2$. %The found halos have some distinct properties worth\nnoting. The masses of found quasar hosts decrease with decreasing redshift,\nwith the mass thresholds being $[(2-5)\\times 10^{12}, (2-5)\\times 10^{11},\n(1-3)\\times 10^{11}]\\msun$ for median luminosities of $\\sim[10^{46}, 10^{46},\n10^{45}]$erg/s at $z=(3.2, 1.4, 0.53)$, respectively, an order of magnitude\nlower than those inferred based on halo occupation distribution modeling. In\nthis model quasar hosts are primarily massive central halos at $z\\ge 2-3$ but\nincreasingly dominated by lower mass satellite halos experiencing major\ninteractions towards lower redshift. But below $z=1$ satellite halos in groups\nmore massive than $\\sim 2\\times 10^{13}\\msun$ do not host quasars. Whether for\ncentral or satellite halos, imposing the condition of significant interactions\nsubstantially boosts the clustering strength compared to the total population\nwith the same mass cut. The inferred lifetimes of quasars at $z=0.5-3.2$ of\n$3-30$Myr are in agreement with observations. 
Quasars at $z\\sim 2$ would be\nhosted by halos of mass $\\sim 5\\times 10^{11}\\msun$ in this model, compared to\n$\\sim 3\\times 10^{12}\\msun$ previously thought, which would help reconcile with\nthe observed, otherwise puzzling high covering fractions for Lyman limit\nsystems around quasars.", + "authors": "Renyue Cen, Mohammadtaher Safarzadeh", + "published": "2014-12-12", + "updated": "2014-12-12", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.GA" + ], + "main_content": "Introduction Masses of dark matter halos hosting quasars are not directly measured. They are inferred by indirect methods, such as via their clustering properties (i.e., auto-correlation function, ACF, or cross-correlation function, CCF). Using ACF or CCF can yield solutions on the (lower) threshold halo masses. The solution on halo mass based on such a method is not unique, to be illustrated by a simple example. Let us suppose a sample composed of halos of large mass M and an equal number of small halos of mass m, coming in tight pairs of M and m with a separation much small than the scale for the correlation function of interest. For such a sample, the ACF of halos of mass M is essentially identical to that of m or cross correlation between M and m. Although dark matter halos in the standard hierarchical cold dark matter model are less simple, the feature that small 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu 2Johns Hopkins University, Department of Physics and Astronomy, Baltimore, MD 21218, USA arXiv:1412.4075v1 [astro-ph.CO] 12 Dec 2014 \f\u2013 2 \u2013 mass halos tend to cluster around massive halos is generic. This example suggests that alternative solutions of dark matter halos hosting quasars exist. It would then be of interest to \ufb01nd models that are based on our understanding of the thermal dynamic evolution of gas in halos and other physical considerations, which is the purpose of this Letter. 2. Simulations and Analysis Method We utilize the Millennium Simulation (Springel et al. 2005) to perform the analysis, whose properties meet our requirements, including a large box of 500h\u22121Mpc, a mass resolution with dark matter particles of mass 8.6 \u00d7 108h\u22121 M\u2299, and a spatial resolution of 5 h\u22121 kpc comoving. Halos are found using a friends-of-friends (FOF) algorithm. Satellite halos are separated out using the SUBFIND algorithm (Springel et al. 2001). The adopted \u039bCDM cosmology parameters are \u2126m = 0.25, \u2126b = 0.045, \u2126\u039b = 0.75, \u03c38 = 0.9 and n = 1, and H0 = 100h km s\u22121 Mpc\u22121 with h = 0.73. Given the periodic box we compute the 2-point auto-correlation function (ACF) \u03be(rp, \u03c0) of a halo sample by \u03be(rp, \u03c0) = DD RR \u22121, (1) where rp and \u03c0 is the pair separation in the sky plane and along the line of sight, respectively, DD and RR are the normalized numbers of quasar-qausar and random-random pairs in each bin (rp \u22121 2\u2206rp \u2192rp + 1 2\u2206rp, \u03c0 \u22121 2\u2206\u03c0 \u2192\u03c0 + 1 2\u2206\u03c0). The cross-correlation function (CCF) is similarly computed: \u03be(rp, \u03c0) = D1D2 R1R2 \u22121, (2) where D1 and D2 correspond to galaxies and quasars. R1 and R2 correspond to randomly distributed galaxies and quasars that are computed analytically. The projected 2-point correlation function wp(rp) is: (Davis & Peebles 1983) wp(rp) = 2 Z \u221e 0 d\u03c0 \u03bes(rp, \u03c0) . (3) In practice, the integration is up to \u03c0max. 
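Equations (1) and (3) translate into a short pair-counting routine. The sketch below assumes a periodic box (so that the random-random term can be written analytically), takes the z-axis as the line of sight, and uses a brute-force O(N^2) loop; these choices are simplifications of this illustration, not a description of the analysis actually performed.

```python
# Minimal sketch of Eqs. (1) and (3): count DD pairs in (r_p, pi) bins, compute RR
# analytically from the bin volumes of a periodic box, and integrate xi over pi.
import numpy as np

def wp_periodic(pos, box, rp_bins, pi_max, n_pi=40):
    """pos: (N,3) comoving positions; box: box size; rp_bins: bin edges in r_p."""
    n = len(pos)
    dpi = pi_max / n_pi
    pi_edges = np.linspace(0.0, pi_max, n_pi + 1)
    dd = np.zeros((len(rp_bins) - 1, n_pi))
    for i in range(n - 1):                        # brute-force pair counting, O(N^2)
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        pi_sep = np.abs(d[:, 2])                  # line of sight taken along z
        rp = np.hypot(d[:, 0], d[:, 1])
        h, _, _ = np.histogram2d(rp, pi_sep, bins=[rp_bins, pi_edges])
        dd += h
    # analytic RR for a periodic box: expected random pairs per cylindrical-shell bin
    shell_area = np.pi * np.diff(np.asarray(rp_bins) ** 2)
    rr = 0.5 * n * (n - 1) * np.outer(shell_area, np.full(n_pi, 2.0 * dpi)) / box ** 3
    xi = dd / rr - 1.0                            # Eq. (1)
    return 2.0 * np.sum(xi * dpi, axis=1)         # Eq. (3): w_p = 2 * integral_0^pi_max xi dpi
```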
We use \u03c0max = (100, 80, 70)h\u22121Mpc comoving at z = (3.2, 1.4, 0.5), respectively, as in observations. 3. A New Model for QSO-Hosting Dark Matter Halos at z = 0.5 \u22123.2 Our physical modeling is motivated by insights on cosmic gas evolution from cosmological hydrodynamic simulations and observations. Simulations show four signi\ufb01cant trends. First, cosmological structures collapse to form sheet, \ufb01laments and halos, and shock heat the gas to progressively higher temperatures with decreasing redshift (e.g., Cen & Ostriker 1999). Second, overdense regions where larger halos are preferentially located begin to be heated earlier and have higher temperatures than lower density regions at any given time, causing speci\ufb01c star formation rates of larger galaxies \f\u2013 3 \u2013 10-2 10-1 100 101 102 rp [h\u22121 Mpc] 10-1 100 101 102 103 104 105 wp (rp )[h\u22121 Mpc] z=3.2 ACF mh,0 =2e12,DR0 =1 ACF mh,0 =2e12,DR0 =3 ACF mh,0 =5e12,DR0 =3 ACF Mth =1e13 obs: Shen+07 QSO ACF Fig. 1.\u2014 shows the ACF of quasar hosts at z = 3.2 for two cases of mh,0 = (2\u00d71012, 5\u00d71012) M\u2299with DR0 = 3 shown as (open red squares, solid yellow hexagons), respectively. For mh,0 = 2 \u00d7 1012 M\u2299 one additional case is shown for DR0 = 1 (solid green diamonds). For comparison, a plain threshold mass case with Mth = 1013 M\u2299is shown as open blue circles. Poisson errorbars are only plotted for blue circles. Black triangles is the observed ACF (Shen et al. 2007a), using 4426 spectroscopically identi\ufb01ed quasars at 2.9 < z < 5.4 (median \u00af z = 3.2), from the SDSS DR5 (Schneider et al. 2005; Adelman-McCarthy et al. 2006). to fall below the general dimming trend at higher redshift than less massive galaxies and galaxies with high sSFR to gradually shift to lower density environments at lower redshift. This physical process of di\ufb00erential gravitational heating with respect to redshift is able to explain the apparent cosmic downsizing phenomenon (e.g., Cowie et al. 1996), the cosmic star formation history (e.g., Hopkins & Beacom 2006), and galaxy color migration (Cen 2011, 2014). Third, quasars appear to occur in congested environments, as evidenced by high bias inferred based on their strong clustering, with the apparent merger fraction of bright QSOs (L > 1046erg/s) approaching unity (e.g., Hickox et al. 2014). Finally, a quasar host galaxy presumably channels a signi\ufb01cant amount of gas into its central black hole, which we interpret as the galaxy being rich in cold gas. This requirement would exclude satellite halos of high mass halos at lower redshift when the latter become hot gas dominated (e.g., Feldmann et al. 2011; Cen 2014). These physical considerations provide the basis for the construction of the new model detailed below in steps. First, for z > 1. (1) All central and satellites halos with virial mass > mh,0 constitute the baseline sample, denoted as SA. (2) Each halo in SA is then selected with the following probability, PDF(DR), computed as follows. For a halo X of mass mh, we make a neighbor list of all neighbor halos with mass \u2265mh/2. For each neighbor halo on the neighbor list, we compute DRn = dn/rv,n, where dn is the distance from X to, and rv is the virial radius of, the neighbor in question. 
We then \ufb01nd the minimum of all DRn\u2019s, \f\u2013 4 \u2013 10-2 10-1 100 101 102 rp [h\u22121 Mpc] 10-1 100 101 102 103 104 wp (rp )[h\u22121 Mpc] z=1.4 ACF Mth =6e12 ACF mh,0 =2e11,DR0 =1 ACF mh,0 =2e11,DR0 =0.5 ACF mh,0 =5e11,DR0 =0.5 obs: Richardson+12 QSO ACF Fig. 2.\u2014 shows ACF of quasar hosts at z = 1.4 for three cases: (mh,0, DR0) = (2 \u00d7 1011 M\u2299, 0.5) (open blue squares), (5 \u00d7 1011 M\u2299, 0.5) (open red diamonds) and (2 \u00d7 1011 M\u2299, 1.0) (solid green hexagons). For comparison, a plain threshold mass case with Mth = 6 \u00d7 1012 M\u2299is shown as solid yellow circles. Poisson errorbars are only plotted for red diamonds. Black triangles are the observed ACF quasars (Richardson et al. 2012), using a sample of 47,699 quasars with a median redshift of \u00af z = 1.4, drawn from the DR7 spectroscopic quasar catalog (Schneider et al. 2010; Shen et al. 2011) for large scales and 386 quasars for small scales (< 1 Mpc/h) from (Hennawi et al. 2006). calling it DR for halo X. PDF(DR) is de\ufb01ned as PDF(DR) = 1 for DR < DR0; PDF(DR) = (DR0/DR)3 for DR \u2265DR0. (4) Our choice of the speci\ufb01c PDF is somewhat arbitrary but serves to re\ufb02ect our assertion that the probability of dark matter halos hosting quasars decreases if the degree of interactions decreases, when DR > DR0. The results remain little changed, for example, had we used a steeper powerlaw of 4 instead of 3. At z < 1, when the mean SFR in the universe starts a steep drop (Hopkins & Beacom 2006), we impose an additional criterion (3) to account for the gravitational heating. (3) Those halos that are within the virial radius of massive halos > Mh,0 are removed, for z < 1. In essence, we model the quasar hosts at z > 1 with two parameters, mh,0 and DR0 and at z < 1 with three parameters, mh,0, DR0 and Mh,0. We present results in the order of decreasing redshift. Figure 1 shows ACF of quasar hosts at z = 3.2 for three cases: (mh,0, DR0) = (2 \u00d7 1012 M\u2299, 3), (5 \u00d7 1012 M\u2299, 3) and (2 \u00d7 1012 M\u2299, 1). Based on halo occupation distribution (HOD) modeling, Richardson et al. (2012) infer median mass of quasar host halos at z \u223c3.2 of Mcen = 14.1+5.8 \u22126.9 \u00d7 1012 h\u22121 M\u2299, consistent with the threshold mass case with Mth = 1013 M\u2299. All model ACFs fall below the observed data at rp \u226530Mpc/h, due to simulation box size. The ACF amplitude is seen to increase with increasing mh,0. The ACF with a smaller value of DR0 steepens at a smaller rp and rises further toward lower rp. This \f\u2013 5 \u2013 behavior is understandable, since a lower DR0 overweighs pairs at smaller separations. The extant observations do not allow useful constraints on DR0 at z = 3.2. We see from visual examination that mh,0 = (2\u22125)\u00d71012 M\u2299provides an excellent \ufb01t to the observed ACF for rp = 2 \u221230h\u22121Mpc. Figure 2 shows ACF of quasar hosts at z = 1.4 for three cases: (mh,0, DR0) = (2 \u00d7 1011 M\u2299, 0.5), (5 \u00d7 1011 M\u2299, 0.5) and (2 \u00d7 1011 M\u2299, 1.0). The threshold mass case with Mth = 6 \u00d7 1012 M\u2299provides a good match to the observational data for rp = 1 \u221230h\u22121Mpc, consistent with HOD modeling by Richardson et al. (2012), who constrain the median mass of the central host halos to be Mcen = 4.1+0.3 \u22120.4 \u00d7 1012 h\u22121 M\u2299. We see that mh,0 = (2 \u22125) \u00d7 1011 M\u2299provides excellent \ufb01ts to the observed ACF for rp = 1 \u221240h\u22121Mpc. 
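For concreteness, the selection defined by steps (1)-(3) and Eq. (4) can be summarized as the sketch below. The brute-force neighbor search, the periodic-distance handling, and the function signature are assumptions of this illustration; an actual analysis of the Millennium catalogues would use the halo and subhalo lists directly.

```python
import numpy as np

def select_quasar_hosts(mass, pos, r_vir, m_h0, dr0, box,
                        z=2.0, M_h0=None, rng=np.random.default_rng(0)):
    """mass, r_vir: (N,) halo masses and virial radii; pos: (N,3) positions.
    Returns a boolean mask of halos tagged as quasar hosts."""
    n = len(mass)
    host = mass >= m_h0                                   # step (1): baseline sample S_A
    dr = np.full(n, np.inf)
    for i in np.where(host)[0]:                           # step (2): DR = min_j d_ij / r_vir,j
        nb = mass >= 0.5 * mass[i]
        nb[i] = False
        if not nb.any():
            continue                                      # no comparable neighbour: dr stays inf
        d = pos[nb] - pos[i]
        d -= box * np.round(d / box)                      # periodic box
        dr[i] = np.min(np.linalg.norm(d, axis=1) / r_vir[nb])
    p = np.where(dr < dr0, 1.0, (dr0 / dr) ** 3)          # Eq. (4)
    host &= rng.random(n) < p
    if z < 1 and M_h0 is not None:                        # step (3): drop halos inside the
        for j in np.where(mass > M_h0)[0]:                # virial radius of groups above M_h0
            d = pos - pos[j]
            d -= box * np.round(d / box)
            inside = np.linalg.norm(d, axis=1) < r_vir[j]
            inside[j] = False
            host &= ~inside
    return host
```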
The observed ACF extends down to about 20h\u22121kpc, which allows us to constrain DR0. We see that, varying DR0 from 1.0 to 0.5, the amplitude of the ACF at rp \u22641h\u22121Mpc increases, with DR0 = 0.5 providing a good match. The physical implication is that quasar activities at z = 1.4 seem to be triggered when a halo of mass \u2265(2 \u22125) \u00d7 1011 M\u2299interact signi\ufb01cantly with another halo of comparable mass, in contrast to the z = 3.2 quasars that are primarily hosted by central galaxies with no major companions. 10-1 100 101 rp [h\u22121 Mpc] 100 101 102 103 wp (rp )[h\u22121 Mpc] z=0.51 CCF Mth =3.5e12 ACF Mth =1e13/h obs: Shen+13 CMASS ACF obs: Shen+13 QSO-CMASS CCF 10-1 100 101 rp [h\u22121 Mpc] 100 101 102 103 wp (rp )[h\u22121 Mpc] z=0.51 CCF mh,e =2e11,Mh,0 =2e13,DR0 =0.5 CCF mh,0 =2e11,Mh,0 =2e13,DR0 =1 CCF mh,0 =5e10,Mh,0 =2e13,DR0 =1 CCF mh,0 =2e11,Mh,0 =1e13,DR0 =1 obs: Shen+13 QSO-CMASS CCF Fig. 3.\u2014 Left panel shows the ACF of halos of masses above Mth = 1 \u00d7 1013h\u22121 M\u2299(open yellow squares), \u201cmock CMASS galaxies\u201d, and the CCF between halos of mass above 3.5 \u00d7 1012 M\u2299 and CMASS galaxies (open red pentagons). Black solid dots and triangles are the observed quasar-CMASS galaxy CCF and CMASS galaxy ACF (shown in both left and right panels), respectively, at z \u223c0.53 from Shen et al. (2013). The CMASS sample of 349,608 galaxies at z \u223c0.53 (White et al. 2011; Anderson et al. 2012) is from the Baryon Oscillation Spectroscopic Survey (Schlegel et al. 2009; Dawson et al. 2013). The sample of 8198 quasars at 0.3 < z < 0.9 (\u27e8z\u27e9\u223c0.53) is from the DR7 (Abazajian et al. 2009) spectroscopic quasar sample from SDSS I/II (Schneider et al. 2010). Right panel shows the model quasar-CMASS galaxy CCF at z = 0.51 for four cases with (mh,0, Mh,0, DR0) = (2 \u00d7 1011 M\u2299, 2 \u00d7 1013 M\u2299, 0.5) (solid red diamonds), (2 \u00d7 1011 M\u2299, 2 \u00d7 1013 M\u2299, 1.0) (solid green hexagons), (5 \u00d7 1010 M\u2299, 2 \u00d7 1013 M\u2299, 1.0) (open blue squares) and (2 \u00d7 1011 M\u2299, 1 \u00d7 1013 M\u2299, 1.0) (open yellow stars). Finally, Figure 3 shows results at z = 0.51. The left panel shows the ACF of halos of \f\u2013 6 \u2013 masses above the threshold 1013h\u22121 M\u2299mock CMASS galaxies which provides a good match to the observed ACF of CMASS galaxies. Consistent with previous analysis, we see that the CCF between halos of mass above the threshold 3.5 \u00d7 1012 M\u2299and mock CMASS galaxies match the observed counterpart. The right panel of Figure 3 shows the mock quasar-CMASS galaxy CCF at z = 0.51 for four cases with di\ufb00erent combinations of (mh,0, Mh,0, DR0). The case with (mh,0, Mh,0, DR0) = [(1 \u22123) \u00d7 1011 M\u2299, 2 \u00d7 1013 M\u2299, 0.5] provides an adequate match to the observation, while (mh,0, Mh,0, DR0) = (5 \u00d7 1010 M\u2299, 2 \u00d7 1013 M\u2299, 1.0) appears to underestimate the CCF. The case with (2 \u00d7 1011 M\u2299, 1 \u00d7 1013 M\u2299, 1.0) signi\ufb01cantly underestimates the observed ACF at rp < 0.5h\u22121Mpc. This indicates that halos of masses greater than mh,0 = (1 \u22123) \u00d7 1011 M\u2299 residing in environment of groups of masses (1 \u22122) \u00d7 1013 M\u2299are primarily responsible for the strong clustering at rp < 0.5h\u22121Mpc. 
It is interesting to note that the exclusion halo mass of Mh,0 = 2.0 \u00d7 10^13 M\u2299, to account for environment heating effects, is physically self-consistent with the fact that the red CMASS galaxies are red due to the same environment effects and hence have about the same halo mass (Mth = 10^13 h\u22121 M\u2299). 4. Predictions and Tests of our Model [Figure 4: dp/dlogM versus log[Mh/M\u2299], comparing the quasar-host halo mass ranges of our model with the HOD-based results (Richardson+12, Shen+13) at z = 3.2, 1.4 and 0.53.] Fig. 4.\u2014 shows the QSO-hosting halo mass distributions at z = 3.2 (solid red curves), z = 1.4 (solid blue curves) and z = 0.53 (solid green curves). We show two bracketing (approximately \u00b11\u03c3 for the computed correlation functions) models at each redshift. The corresponding distributions based on HOD modeling are shown in dashed curves. The short vertical bars with matching colors and line types indicate the median halo masses of their respective distributions. We have demonstrated that our physically based model can account for the observed clustering of quasars at z = 3.2, 1.4, 0.53. Figure 4 contrasts the sharp differences between our model and the conventional HOD-based modeling; the halo masses in our model are an order of magnitude lower than those inferred from HOD modeling. Our model gives quasar-hosting halo mass thresholds of [(2\u22125) \u00d7 10^12, (2\u22125) \u00d7 10^11, (1\u22123) \u00d7 10^11] M\u2299 at z = (3.2, 1.4, 0.53), respectively, compared to median masses of (14.1^{+5.8}_{\u22126.9} \u00d7 10^12, 4.1^{+0.3}_{\u22120.4} \u00d7 10^12, 4.0 \u00d7 10^12) h\u22121 M\u2299 based on HOD modeling (Richardson et al. 2012; Shen et al. 2013). Although we have not performed fits for quasars at redshifts higher than z = 3.2, we anticipate that quasars at higher redshifts with luminosities comparable to those at z = 3.2 will primarily be hosted by central galaxies of mass \u223c(2\u22125) \u00d7 10^12 M\u2299. We note that the median luminosity of the observed quasars decreases from \u223c10^46 erg/s at z \u2265 1.4 to \u223c10^45 erg/s at z = 0.53, which reflects the known downsizing scenario and is in accord with the decreasing halo mass with decreasing redshift inferred in our model. Our results and detailed comparisons with HOD-based modeling are also tabulated in Table 1, along with inferred quasar duty cycles and lifetimes. Can we differentiate between these two models? Trainor & Steidel (2012) cross correlate 1558 galaxies with spectroscopic redshifts with 15 of the most luminous (\u226510^14 L\u2299, M1450 \u223c\u221230) quasars at z \u223c2.7. Even for these hyperluminous quasars (HLQSOs), they infer a host halo mass of log(Mh/M\u2299) = 12.3 \u00b1 0.5, which is in very good agreement with our model (Mh \u223c(2\u22125) \u00d7 10^12 M\u2299) but much smaller than that inferred from HOD modeling. They also find that, on average, the HLQSOs lie within significant galaxy over-densities, characterized by a velocity dispersion \u03c3v \u223c200 km/s and a transverse angular scale of \u223c25\u201d (\u223c200 physical kpc), which they argue correspond to small groups with log(Mh/M\u2299) \u223c13. The rare HLQSOs are apparently not hosted by rare dark matter halos. This is fully consistent with our suggestion that dark matter halo mass is not the sole determining factor of quasar luminosities and that interactions may be instrumental in triggering quasar activities. 
Another, independent method to infer halo masses of quasar hosts is to measure their cold gas content. Prochaska et al. (2013) detect about 60% \u221270% covering fraction of Lyman limit systems within the virial radius of z \u223c2 quasars, using the binary quasar sample (Hennawi et al. 2006). This has created signi\ufb01cant tension: hydrodynamic simulations of the cold dark matter model yield less than 20% covering fraction for halos of mass \u223c3\u00d71012 M\u2299(Faucher-Giguere et al. 2014); halos of still higher mass have still lower covering fractions. On the other hand, the simulations show a \u223c60% covering fraction if the mass of quasar-hosting halos is \u223c3 \u00d7 1011 M\u2299. This indicates that the lower halo masses for quasar hosts in our model can explain the high content of neutral gas in z \u223c2 quasars. The mean quasar lifetime may be estimated by equating it to tH \u00d7 fq, where tH is the Hubble time at the redshift in question and fq the duty cycle of quasar hosting halos. Existing observational constraints provide useful range for tq for quasars at z \u223c3. Lifetimes based on halo abundances from clustering analyses of quasars have been given by many authors (e.g., Martini & Weinberg 2001; Porciani et al. 2004; Shen et al. 2007b); in our case, this is a degenerate derivation. Thus, it is useful to have a survey of quasar lifetimes based on other, independent methods. Jakobsen et al. (2003) derive tq > 10Myr, Worseck et al. (2007) give tq > 25Myr, Gon\u00b8 calves et al. (2008) yield tq = 16 \u221233Myr, and McQuinn & Worseck (2014) yield tq => 10Myr for quasars at z \u22482 \u22123, all based on the method of quasar proximity e\ufb00ect. Bolton et al. (2012) obtain tq > 3Myr using line-of-sight thermal proximity e\ufb00ect. Trainor & Steidel (2013), using a novel method of Ly\u03b1 emitters (LAEs) exhibiting \ufb02uorescent emission via the reprocessing of ionizing radiation from \f\u2013 8 \u2013 nearby hyperluminous QSOs, \ufb01nd 1 \u2264tq \u226420Myr at z = 2.5 \u22122.9. We see that all these estimates are consistent with our model. As a comparison, the inferred tHOD q \u223c400Myr at z = 3.2 from HOD modeling. Finally, self-consistently reproducing the quasar luminosity functions (e.g., Wyithe & Loeb 2002, 2003; Shen 2009; Conroy & White 2013) will provide another test, which we defer to a separate study. Table 1: Comparing Our Model With HOD Modeling w.r.t. Halo Mass and Quasar Lifetime (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) zmed Lbol nobs nsim mh,0 fq tq MHOD h fHOD q tHOD q log(erg s ) \u00d710\u22127 \u00d710\u22124 \u00d71012 \u00d710\u22123 [Myr] \u00d71012 \u00d710\u22123 [Myr] 3.2 46.3 2.5 0.2 \u22120.9 2 \u22125 3 \u221213 5-26 20 215 425 1.4 46.1 30 9 \u221244 0.2 \u22120.5 0.6 \u22123 3-15 5.8 1.8 7.5 0.53 45.1 50 29 \u221285 0.1 \u22120.3 0.6 \u22122 5-15 5.7 1.3 10 Column (1) zmed is the median redshift of the sample that is analyzed. Column (2) Lbol is the median bolometric luminosity of the observed quasar sample obtained using conversions in (Richards et al. 2006; Runnoe et al. 2012). Column (3) nobs is the number density of the observed quasar sample (Shen & Kelly 2012) in [Mpc3h\u22123], multiplied by a factor of 2.5 to account for the fact that about 60% of quasars belong to the so-called type II quasars based on low redshift observations (Zakamska et al. 2003), which are missed in the quoted observational sample. We note that the percentage of obscured quasars appear to increase with redshift (e.g., Ballantyne et al. 
2006; Treister & Urry 2006). Thus, the values of tq may be underestimated. Column (4) mh,0 is the lower mass threshold dark matter halos hosting quasars in [M\u2299]. Column (5) nsim is the number density of the dark matter halos hosting quasars with mass \u2265mh,0 in [Mpc\u22123h3]. Column (6) fq \u2261nobs/nsim is the duty cycle of the quasars in our model. Column (7) tq is the mean quasar lifetime in our model de\ufb01ned as tH \u00d7 fq, where tH is the Hubble time at the redshift in question. Column (8) MHOD h,med is the derived host halo mass of the observed population of quasars derived from HOD modeling (Richardson et al. 2012; Shen et al. 2013) in [M\u2299]. Column (9) fHOD q is the duty cycle of the observed population of quasars based on HOD modeling, using the type II quasars-corrected abundance in Column (3). Column (10) tHOD q is the life time of the quasars based on fHOD q in Column (9). 5." + }, + { + "url": "http://arxiv.org/abs/1409.6755v1", + "title": "Diverse Properties of Interstellar Medium Embedding Gamma-Ray Bursts at the Epoch of Reionization", + "abstract": "Analysis is performed on ultra-high resolution large-scale cosmological\nradiation-hydrodynamic simulations to, for the first time, quantify the\nphysical environment of long-duration gamma-ray bursts (GRBs) at the epoch of\nreionization. We find that, on parsec scales, 13% of GRBs remain in high\ndensity ($\\ge 10^4$cm$^{-3}$) low-temperature star-forming regions, whereas 87%\nof GRBs occur in low-density ($\\sim 10^{-2.5}$cm$^{-3}$) high temperature\nregions heated by supernovae. More importantly, the spectral properties of GRB\nafterglows, such as the neutral hydrogen column density, total hydrogen column\ndensity, dust column density, gas temperature and metallicity of intervening\nabsorbers, vary strongly from sightline to sightline. Although our model\nexplains extant limited observationally inferred values with respect to\ncircumburst density, metallicity, column density and dust properties, a\nsubstantially larger sample of high-z GRB afterglows would be required to\nfacilitate a statistically solid test of the model. Our findings indicate that\nany attempt to infer the physical properties (such as metallicity) of the\ninterstellar medium of the host galaxy based on a very small number of (usually\none) sightlines would be precarious. Utilizing high-z GRBs to probe\ninterstellar medium and intergalactic medium should be undertaken properly\ntaking into consideration the physical diversities of the interstellar medium.", + "authors": "Renyue Cen, Taysun Kimm", + "published": "2014-09-23", + "updated": "2014-09-23", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "Introduction Very high redshift (z \u22656) gamma-ray bursts (GRBs) (e.g., Greiner et al. 2009; Tanvir et al. 2009; Cucchiara et al. 2011) provide an excellent probe of both the interstellar (ISM) and intergalactic medium (IGM) at the epoch of reionization (EoR) using absorption spectrum techniques thanks to their simple power-law afterglow spectra and high luminosity (Lamb & Reichart 2000), complimentary to quasar absorption spectrum observations (Fan et al. 2006). Here we present a \ufb01rst, detailed analysis of the physical properties of ISM surrounding GRBs, utilizing state-of-the-art radiation-hydrodynamic simulations, with the hope that they may aid in proper interpretations of observations of GRB afterglows at EoR with respect to both ISM and IGM. 
1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu 2Princeton University Observatory, Princeton, NJ 08544; kimm@astro.princeton.edu arXiv:1409.6755v1 [astro-ph.HE] 23 Sep 2014 \f\u2013 2 \u2013 2. Simulations The simulations are performed using the Eulerian adaptive mesh re\ufb01nement code, ramses (Teyssier 2002, ver. 3.07), with concurrent multi-group radiative transfer (RT) calculation (Rosdahl et al. 2013). The reader is referred to Kimm & Cen (2014) for details. Notably, a new treatment of supernova feedback is implemented, which is shown to capture the Sedov solution for all phases (from early free expansion to late snowplow). The initial condition for the cosmological simulations is generated using the MUSIC software (Hahn & Abel 2011), with the WMAP7 parameters (Komatsu et al. 2011): (\u2126m, \u2126\u039b, \u2126b, h, \u03c38, ns = 0.272, 0.728, 0.045, 0.702, 0.82, 0.96). The total simulated volume of (25Mpc/h)3 (comoving) is covered with 2563 root grids, and 3 more levels are added to a rectangular region of 3.8 \u00d7 4.8 \u00d7 9.6 Mpc to achieve a high dark matter mass resolution of mdm = 1.6 \u00d7 105 M\u2299. In the zoomed-in region, cells are further re\ufb01ned (12 more levels) based on the density and mass enclosed within a cell. The corresponding maximum spatial resolution of the simulation is 4.2 pc (physical). The simulation is found to be consistent with a variety of observations, including the luminosity function at z \u223c7. Normal and runaway star particles are created in convergent \ufb02ows with a local hydrogen number density nth \u2265100 cm\u22123 (FRU run, Kimm & Cen 2014), based on the Schmidt law (Schmidt 1959). Note that the threshold is motivated by the density of a Larson-Penston pro\ufb01le (Larson 1969; Penston 1969) at 0.5\u2206xmin, \u03c1LP \u22488.86c2 s/\u03c0 G \u2206x2 min, where cs is the sound speed at the typical temperature of the ISM (\u223c30K) and \u2206xmin is the \ufb01nest cell resolution. Additionally, we ensure that the gas is Jeans unstable, and that the cooling time is shorter than the dynamical time (e.g. Cen & Ostriker 1992). We assume that 2% of the star-forming gas is converted into stars per its free-fall time (t\ufb00) (Krumholz & Tan 2007). The mass of each star particle is determined as m\u22c6= \u03b1 Np\u03c1th \u2206x3 min, where \u03c1th is the threshold density, and \u03b1 is a parameter that controls the minimum mass of a star particle (m\u22c6,min). Np is an integer multiple of m\u22c6,min to be formed in a cell, which is drawn from a Poisson random distribution, P(Np) = (\u03bbNp/Np!) exp (\u2212\u03bb) with the Poissonian mean \u03bb \u2261\u03f5\u22c6 \u0010 \u03c1\u2206x3 m\u22c6,min \u0011 \u0010 \u2206tsim t\ufb00 \u0011 , where \u2206tsim is the simulation time step. The resulting minimum mass of a normal (runaway) star particle is 34.2 M\u2299(14.6 M\u2299). We adopt the Chabrier initial mass function to compute the mean frequency of Type II supernova explosions per solar mass (0.02M \u22121 \u2299). Dark matter halos are identi\ufb01ed using the Amiga halo \ufb01nder (Knollmann & Knebe 2009). This yields 731 halos of mass 108 \u2264Mvir < 3 \u00d7 1010 M\u2299at z = 7. We adopt the assumption that long-duration GRB rate is proportional to type II supernova rate; short-duration GRBs are not addressed here since they appear not to be associated with massive stars and are hosted by elliptical galaxies (Berger 2013). For our analysis we use all snapshots between z = 7.5 and z = 7. 
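A minimal sketch of the Poisson star-particle sampling described above. The mass quantum is written directly as m_star_min (the paper's alpha * N_p * rho_th * dx_min^3 with N_p an integer multiple of that quantum), the standard free-fall time t_ff = sqrt(3*pi/(32*G*rho)) is assumed, and eps_star = 0.02 is the 2% conversion efficiency per free-fall time quoted above; all variable names are ours.

```python
# Sketch of the Poisson star-formation sampling described in the text.
import numpy as np

G = 6.674e-8  # cgs

def free_fall_time(rho):
    """Standard free-fall time t_ff = sqrt(3*pi / (32 G rho)) for rho in g/cm^3."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def draw_star_particles(rho, dx, dt, m_star_min, eps_star=0.02, rng=None):
    """Star-particle quanta formed in one cell over one timestep.

    lambda = eps_star * (rho * dx^3 / m_star_min) * (dt / t_ff);
    N_p ~ Poisson(lambda); the particle mass formed is N_p * m_star_min.
    """
    rng = rng or np.random.default_rng()
    lam = eps_star * (rho * dx**3 / m_star_min) * (dt / free_fall_time(rho))
    n_p = rng.poisson(lam)
    return n_p, n_p * m_star_min

# example call: threshold gas n_H = 100 cm^-3 (~2.3e-24 g per H nucleus), a 4.2 pc
# cell, a 0.1 Myr step and the 34.2 Msun minimum particle mass quoted above
pc, Myr, Msun = 3.086e18, 3.156e13, 1.989e33
print(draw_star_particles(100 * 2.3e-24, 4.2 * pc, 0.1 * Myr, 34.2 * Msun))
```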
3. Results We present results from our simulations and make comparisons to available observations. As will be clear later, we generally \ufb01nd broad agreement between our model and observations, although larger observational samples of GRB afterglows would be needed to fully test the model. \f\u2013 3 \u2013 3.1. Properties of Embedding Interstellar Medium on Parsec Scales We \ufb01rst describe the physical conditions of the ISM that embeds GRBs on pc scales. The left (right) panel of Figure 1 shows the distribution of GRB rate in the density-temperature (densitymetallicity) parameter space. We see two separate concentrations of GRBs in the n \u2212T plane, with (nH, T) equal to (10\u22122.5cm\u22123, 107.5K) and (104.0cm\u22123, 103.8K), respectively. It must be made clear that the density and temperature are de\ufb01ned on the local gas cell of scale of a few pc that a GRB sits. The appearance of GRB afterglow spectra depends, in most case, more strongly on the properties of gas along the line of sight rather than the gas immediately embedding them, as will be shown later. It is seen that most of the GRBs reside in the low density, high temperature peak, contributing to 87% of GRBs. It is easy to identify two corresponding concentrations in the Z \u2212n plane in the right panel: the low density, high temperature peak corresponds the high metallicity peak in the range [\u22121.5, 0.5] in solar units, while the high density, low temperature peak corresponds the low metallicity peak in the range [\u22122, \u22121]. We note that super solar metallicities in hot winds driven by type II supernova explosions in starburst galaxies in conditions similar to those of our simulated galaxies are locally observed. For example, Konami et al. (2011) observe metallicity of hot X-ray gas of 2 \u22123 times the solar value in M82. Martin et al. (2002) \ufb01nd that the best \ufb01t model for the hot X-ray gas metallicity in dwarf starburst galaxy NGC 1569 is solar, although a metallicity as high as 5 times solar still gives \u03c72 value only about 0.1% larger than the best \ufb01t model; on the contrary, the model with 0.25 times solar has much worse \u03c72 value. The extant observations of GRB afterglow spectra do not have the capability to detect the metallicity of the X-ray absorbing medium of relatively low column density. Metallicities of lower temperature gas phases are observed and are predicted to be substantially sub solar, as will be shown in Figure 3 later. The (nH, T) = (104.0cm\u22123, 103.8K) peak coincides with the cores of dense gas clouds in the simulation, where star formation is centered. The temperature of 103.8K is mostly produced by atomic hydrogen cooling and the lower temperature extension due to metal cooling included in the simulation. As a numerical example, for a gas parcel of density 104cm\u22123, temperature of 104K and metallicity of 1% of solar value, the cooling time is about 400 yrs, which is much shorter than relevant dynamic time scales. It is thus clear that the cold density phase seen our simulation is easily understandable. However, due to lack of treatment for molecular hydrogen cooling, gas is unable to cool signi\ufb01cantly below \u223c104K. Had we included molecular hydrogen cooling and low temperature metal cooling, we expect the gas to cool approximately isobarically to about 20K. Thus, we prefer not to infer any observable properties of GRBs that would depend strongly on the nature of this cold gas phase, such as molecular clouds. 
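A rough check, under stated assumptions, that the dense phase quoted above (n_H ~ 1e4 cm^-3, T ~ 1e4 K) cools much faster than it evolves dynamically, which is the point of the ~400 yr cooling-time estimate. Lambda_eff below stands in for the metallicity-dependent cooling function that the simulation takes from a cooling table; the scanned values are illustrative placeholders, not numbers from the paper.

```python
# Order-of-magnitude comparison of cooling time and free-fall time for the dense phase.
import numpy as np

k_B, G, m_H = 1.381e-16, 6.674e-8, 1.673e-24   # cgs

def cooling_time_yr(n_H, T, Lambda_eff, particles_per_H=2.3):
    """t_cool ~ (3/2) n_tot k T / (n_H^2 Lambda_eff), in years."""
    e_th = 1.5 * particles_per_H * n_H * k_B * T      # thermal energy density
    return e_th / (n_H**2 * Lambda_eff) / 3.156e7

def free_fall_time_yr(n_H, mu=1.36):
    rho = mu * m_H * n_H
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / 3.156e7

print(f"t_ff   ~ {free_fall_time_yr(1e4):.1e} yr")
for lam in (1e-27, 1e-25, 1e-23):     # purely illustrative cooling-function values
    print(f"Lambda_eff = {lam:.0e}: t_cool ~ {cooling_time_yr(1e4, 1e4, lam):.1e} yr")
# even the smallest Lambda_eff here gives t_cool far below t_ff (~4e5 yr); a value of
# a few 1e-26 erg cm^3/s reproduces the ~400 yr quoted in the text
```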
It is more appropriate to treat (nH, T) = (104.0cm\u22123, 103.8K) as bounds: (nH, T) = (> 104.0cm\u22123, < 103.8K). This is also why we present our results for the highdensity low-temperature regions as bounds in the abstract and conclusions sections as well as places where clari\ufb01cation is helpful. The noticeable sub-dominance of GRBs residing in very high density (n \u223c104cm\u22123), star-forming regions suggests that a large number of stars are displaced from their birth clouds. This may be achieved by a substantial relative motions between stars and their birth clouds due to hydrodynamic interactions of the later or dynamical e\ufb00ects of stars. As a numerical illustration for the former possibility, a relative motion of 10 km/s between the birth cloud and the \f\u2013 4 \u2013 -6 -4 -2 0 2 4 6 log nH [cm-3] 0 2 4 6 8 10 log T [K] log NHI>19 log NHI<19 50% 90% 99% -6 -4 -2 0 2 4 6 log nH [cm-3] -3 -2 -1 0 1 log Z [ZO \u2022 ] log NHI>19 log NHI<19 50% 90% 99% Fig. 1.\u2014 Left panel: shows the distribution of GRB rate in the density-temperature (n \u2212T) parameter space. Note that the density and temperature are de\ufb01ned on the local gas cell of scale of a few pc that a GRB sits in and it will be made clear later that the appearance of GRB afterglows is in most cases more dependent on the properties of gas along the line of sight. We have further divided the GRBs into two groups with respect to intervening neutral hydrogen column density: NHI > 1019cm\u22122 (red) and NHI < 1019cm\u22122 (blue), details of which will be given in subsequent \ufb01gures. The contour levels speci\ufb01ed indicate the fraction of GRBs enclosed. Right panel: shows the distribution of GRB rate in the density-metallicity (n \u2212Z) parameter space. star would yield a displacement of 100pc in a lifetime of 10Myr. We note that the runaway OB stars in our simulation have typical velocities relative to the birth clouds of 20 \u221240 km/s. Thus the runaway OB stars have contributed signi\ufb01cantly to the displacing GRBs from their birth clouds. The GRBs being in hot low density environment is also a result of supernova heating by earlier supernovae exploding in the birth clouds. We estimate that these two e\ufb00ects are responsible about equally for placing most of the GRBs in low-density high temperature regions. While it is not possible to locate GRBs within the host galaxies at high redshift at this time, observations of low redshift GRBs may still be instructive. Le Floc\u2019h et al. (2012) show that GRB 980425 occurring in a nearby (z = 0.0085) SBc-type dwarf galaxy appears to be displaced from the nearest H II region by 0.9kpc, which is in fact signi\ufb01cantly larger than the displacement distances for the vast majority of our simulated GRBs in high redshift galaxies. Interestingly, the optical afterglow luminosity has a bimodal distribution at 12 hours after trigger (Nardini et al. 2008). The bimodal distribution of volumetric density seen in the left panel of Figure 1 alone should produce a bimodal distribution of the afterglows with respect to break frequencies, luminosities, and break times, etc (e.g., Sari et al. 1998). We cannot make detailed comparisons, because the circumburst density of the high nH GRB subset is underestimated due to our limited resolution and because it remains uncertain if the appearance of GRB afterglows \f\u2013 5 \u2013 would also depend strongly on intervening material (dust obscuration, etc). 
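A one-line check of the displacement arithmetic used earlier in this paragraph (1 km/s sustained over 1 Myr is about 1 pc), extended to the 20-40 km/s runaway OB star velocities quoted above:

```python
# 1 km/s over 1 Myr is ~1.02 pc, so 10 km/s over 10 Myr gives the ~100 pc quoted above.
KM_S_MYR_IN_PC = 1.0e5 * 3.156e13 / 3.086e18   # ~1.02 pc per (km/s * Myr)

def displacement_pc(v_kms, t_myr):
    """Distance travelled (pc) at relative speed v_kms over t_myr."""
    return v_kms * t_myr * KM_S_MYR_IN_PC

print(displacement_pc(10, 10))                              # ~100 pc
print([round(displacement_pc(v, 10)) for v in (20, 40)])    # runaway OB stars: ~200-400 pc
```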
It is suggestive that the complex situations seen in the simulations may account for the observed bimodality of afterglows without having to invoke an intrinsic bimodality of GRBs. 3.2. Strong Variations of Intervening Gas and Dust along Different Sightlines One of the most important points that this paper hopes to highlight and convey is that the appearances of GRB afterglows are not solely determined by the circumburst medium in their immediate vicinity (e.g., the physical conditions shown in Figure 1 are on pc scales centered on GRBs). They also depend strongly on the line of sight beyond the immediate circumburst medium through the ISM of the host galaxy, which we now quantify. Let us first give the meaning of our chosen value of the intervening neutral hydrogen column density, NHI = 10^19 cm^-2, which is used in Figure 1 to separate GRBs into two groups. Figure 2 shows the distribution of neutral hydrogen column density integrated along the line of sight for all GRBs, separately for halos in five mass ranges. A bimodal distribution of NHI is seen, peaked at NHI ~ 10^21-22 cm^-2 and NHI ~ 10^16-17 cm^-2, respectively, and well separated at NHI ~ 10^19 cm^-2. It is clear that the bimodality exists for all halo masses surveyed. The low-NHI peak is rather broad, extending all the way down to NHI = 10^11 cm^-2, suggesting that some GRBs lie well within the diffuse hot ISM. There is a noticeable dip in the neutral hydrogen column density distribution at ~10^14 cm^-2 for the most massive galaxies of >= 10^10 M_sun. We attribute this to more significant shock heating in the most massive halos. Returning to Figure 1, it is now easy to see that the low-NHI (<= 10^19 cm^-2) peak in Figure 2 is composed of only one set of GRBs, situated in the low-density environment around (10^-2.5 cm^-3, 10^7.5 K) and seen as the red contours in Figure 1. The high-NHI (>= 10^19 cm^-2) peak in Figure 2, on the other hand, consists of a combination of two separate populations with distinctly different circumburst media, corresponding to the two separate loci of the blue contours at (10^-2.5 cm^-3, 10^7.5 K) and (10^4.0 cm^-3, 10^3.8 K) in Figure 1. The two apparently different groups of GRBs situated around (nH, T) = (10^-2.5 cm^-3, 10^7.5 K), one with low NHI <= 10^19 cm^-2 (red) and the other with high NHI >= 10^19 cm^-2 (blue), differ entirely because of the line of sight through the ISM of the host galaxy. Overall, we find that 38% of GRBs have NHI <= 10^17 cm^-2 (i.e., optically thin to the Lyman continuum), whereas 44% have NHI >= 10^20.3 cm^-2 (i.e., containing a damped Lyman-alpha system). It is clear that various properties of GRB afterglows, even for bursts sitting in the same very local environment on pc scales, may appear different owing to different intervening interstellar gas and dust along the line of sight through the host galaxy. In summary so far, three separate populations of GRB afterglows are expected, if our model is correct. One might classify them in the following simple way: (1) HnHN = (high volumetric density n ~ 10^4 cm^-3, high neutral column density NHI >= 10^19 cm^-2), (2) LnHN = (low volumetric density n ~ 10^-2.5 cm^-3, high neutral column density NHI >= 10^19 cm^-2), (3) LnLN = (low volumetric density n ~ 10^-2.5 cm^-3, low neutral column density NHI <= 10^19 cm^-2).
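A minimal sketch of the three-way labelling just defined. The N_HI split at 10^19 cm^-2 is the value used in the text; the volumetric-density split (taken here as 1 cm^-3, anywhere between the ~10^-2.5 and ~10^4 cm^-3 loci would do) is our illustrative choice.

```python
# Minimal classifier for the HnHN / LnHN / LnLN populations defined above.
def classify_grb(n_H, N_HI, n_split=1.0, N_split=1e19):
    """Label a GRB by circumburst density n_H [cm^-3] and intervening N_HI [cm^-2]."""
    high_n = n_H >= n_split
    high_N = N_HI >= N_split
    if high_n and high_N:
        return "HnHN"
    if (not high_n) and high_N:
        return "LnHN"
    if (not high_n) and (not high_N):
        return "LnLN"
    # the fourth combination is not among the three populations listed in the text
    return "HnLN"

print(classify_grb(1e4, 10**21.6))    # dense birth-cloud environment -> HnHN
print(classify_grb(10**-2.5, 1e20))   # hot bubble, cold gas along the sightline -> LnHN
print(classify_grb(10**-2.5, 1e16))   # hot bubble, ionized sightline -> LnLN
```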
Again, types (2) and (3) are a result of di\ufb00erent viewing angles, where type (2) is due to viewing angles through largely hot ionized gas and type (3) viewing angles going through cold and dense gas in addition. \f\u2013 6 \u2013 12 15 18 21 24 log NHI [cm-2] -2.5 -2.0 -1.5 -1.0 -0.5 log PDF log Mh=[ 8.0, 8.5) log Mh=[ 8.5, 9.0) log Mh=[ 9.0, 9.5) log Mh=[ 9.5,10.0) log Mh=[10.0,11.5) Fig. 2.\u2014 shows the probability distribution functions (PDF) of neutral hydrogen column density for all GRBs, separated according to the halo masses indicated in the legend. Laskar et al. (2014) analyze multi-wavelength observations of the afterglow of GRB 120521C (z \u223c6) and re-analyze two previous GRBs at z > 6 (GRB 050904 and 090423), and conclude that the circumburst medium has a volumetric density of nH \u22640.05cm\u22123 that is constant. The GRBs in the LnHN or LnLN group provide the right match to the observations. While the statistic is still small, it is expected that about 87% of GRBs should arise in in either the LnHN or LnLN group. Observations of GRB 050904 at z = 6.3 reveal that it contains a damped Lyman alpha systems (DLAs) system in the host galaxy of column density NHI = 1021.6cm\u22122 and metallicity of Z = \u22122.6 to \u22121 (Totani et al. 2006; Kawai et al. 2006). Based on X-ray observations, Campana et al. (2007) conclude that Z \u22650.03 Z\u2299for GRB 050904. The evidence thus suggests that GRB 050904 likely resides in a dense environment, although it cannot be completely sure because the metallicity range of the low-density peak (right panel of Figure 1) overlaps with the observed range. It is useful at this juncture to distinguish between the metallicity of the local environment of a GRB and that of absorbers in the GRB afterglow spectrum. Let us now turn to the expected metallicity of UV/optical absorbers in the GRB after\ufb02ow spectra. Figure 3 shows the PDFs of total hydrogen-column-density-weighted metallicity of gas along the line of sight, excluding gas hotter than 106K, for the three sub-populations of GRBs. We see that for all three GRB groups the metallicity of the absorbers in the GRB spectra peaks in the range \u22123 to \u22121. Thus, it is now clear that our model can easily account for the observed properties of GRB 050904. The additional evidence that, based on the analysis of the equivalent width ratio of the \ufb01ne structure transition lines Si II* \u03bb1264.7\u02da A and Si II \u03bb1260.4\u02da A, infers the electron density \f\u2013 7 \u2013 -3 -2 -1 0 1 log -3 -2 -1 0 log PDF log NHI * 19 (nH * 10 cm-3) log NHI * 19 (nH < 10 cm-3) log NHI < 19 (nH < 10 cm-3) Fig. 3.\u2014 shows the PDFs of total hydrogen column density weighted metallicity of gas along the line of sight, excluding gas hotter than 106K, for the three sub-populations of GRBs: GRBs in (nH, T) = (10\u22122.5cm\u22123, 107.5K) with NHI \u22641019cm\u22122 (red dashed, LnLN group), (nH, T) = (10\u22122.5cm\u22123, 107.5K) with NHI \u22651019cm\u22122 (green dotted, LnHN group), and (nH, T) = (104.0cm\u22123, 103.75K) with NHI \u22651019cm\u22122 (blue solid, HnHN group). of log ne = 2.3 \u00b1 0.7. Furthermore, the magnitude of the optical afterglow at 3.4 days after the burst favors a high density circumburst medium. In combination, it appears that GRB 050904 is likely in a dense environment being to the HnHN group. This appears to be at some minor odds with our model, since we only expect that 13% of GRBs to arise in the HnHN group. 
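A minimal sketch of the sightline average behind Figure 3: metallicity weighted by the hydrogen column contributed by each cell, with gas hotter than 10^6 K excluded. The per-cell arrays are hypothetical inputs standing in for cells gathered along a simulated line of sight.

```python
# Column-weighted metallicity of the absorbing gas along one sightline (T < 1e6 K).
import numpy as np

def columns_and_metallicity(n_H, dl, Z, T, T_max=1.0e6):
    """Return (total hydrogen column, N_H-weighted metallicity) for one sightline.

    n_H : hydrogen number density per cell [cm^-3]
    dl  : path length through each cell [cm]
    Z   : metallicity per cell [Z_sun]
    T   : temperature per cell [K]
    """
    keep = T < T_max
    dN = n_H[keep] * dl[keep]          # hydrogen column contributed by each cell
    N_H = dN.sum()
    Z_w = (Z[keep] * dN).sum() / N_H if N_H > 0 else np.nan
    return N_H, Z_w

# toy sightline: a dense, metal-poor cloud plus diffuse, enriched supernova-heated gas
n_H = np.array([1e4, 1e-2, 1e-2])
dl  = np.array([3.1e18, 3.1e20, 3.1e20])      # ~1 pc, ~100 pc, ~100 pc
Z   = np.array([0.01, 0.5, 1.0])
T   = np.array([8e3, 2e5, 3e7])               # the 3e7 K cell is excluded
print(columns_and_metallicity(n_H, dl, Z, T))  # the cold, metal-poor cloud dominates
```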
It would be highly desirable to obtain a larger sample of high-z GRBs to provide a statistically \ufb01rmer test. Analyses of observations of GRB 130606A at z = 5.9 indicate that it likely contains a sub-DLA system of NHI \u223c1019.8cm\u22122 in the host galaxy (Totani et al. 2013; Castro-Tirado et al. 2013). The inferred low metallicity of \u22121.8 to \u22120.8 in solar units (Castro-Tirado et al. 2013) and \u22121.3 to \u22120.5 (Chornock et al. 2013), in conjunction with the NHI, suggests that GRB 130606A may reside in a low density environment with a foreground sub-DLA system in the host galaxy. This proposal is consistent with the evidence of detection of highly ionized species (e.g., N V and Si IV) (Castro-Tirado et al. 2013). It seems likely that GRB 130606A belongs to the LnHN group. It is easy to see that in our model the metallicity distribution of UV/optical absorbers in GRB afterglow spectra is wide, which itself is due to the very inhomegeneous metallicity distributions in the ISM of the host galaxies. Thus, it would be a rather chancy practice trying to infer the metallicity of the host galaxy solely based on a small number of (typically one) GRB afterglow absorption spectra. \f\u2013 8 \u2013 The reader has already seen clearly that the distributions of all concerned physical quantities, including metallicity, density, total and neutral hydrogen column density, are wide. We will add yet one more quantity and show the cumulative distributions of the ratio of neutral hydrogen to total hydrogen column density for the three groups in the right panel of Figure 4. We see that for GRBs in the LnLN group the neutral hydrogen to total hydrogen column ratio is signi\ufb01cantly less than unity. Even for the HnHN and LnHN groups, (10%,14%) of GRBs have the ratio less than 0.1. In other words, it is generally a pretty bad assumption that the apparent absorbers in the GRB afterglow spectra are mainly neutral. This indicates that the so-called \u201cmissing gas problem\u201d (e.g., Schady et al. 2011) may be accommodated in this model. 16 18 20 22 24 26 log NH [cm-2] -3 -2 -1 0 log PDF log NHI * 19 (nH * 10 cm-3) log NHI * 19 (nH < 10 cm-3) log NHI < 19 (nH < 10 cm-3) -10 -8 -6 -4 -2 0 log NHI / NH -3 -2 -1 0 1 log cumulative fraction log NHI * 19 (nH * 10 cm-3) log NHI * 19 (nH < 10 cm-3) log NHI < 19 (nH < 10 cm-3) Fig. 4.\u2014 Left panel: shows the PDFs of total hydrogen column density for the three subpopulations of GRBs: GRBs in (nH, T) = (10\u22122.5cm\u22123, 107.5K) with NHI \u22641019cm\u22122 (red dashed, LnLN group), (nH, T) = (10\u22122.5cm\u22123, 107.5K) with NHI \u22651019cm\u22122 (green dotted, HnLN group), and (nH, T) = (104.0cm\u22123, 103.75K) with NHI \u22651019cm\u22122 (blue solid, HnHN group). Right panel: shows the cumulative PDFs of the ratio of NHI/NH. The left panel of Figure 4 shows the PDFs of the total hydrogen column density for the three sub-populations of GRBs, which is most relevant for probing GRB X-ray afterglows and hence a useful test of our model. One expectation from our model is that the vast majority of GRBs sitting in low density circumburst medium (LnHN + LnLN) do not have Compton thick obscuring gas. This prediction is veri\ufb01able with a combination of afterglow light curves and X-ray observations. On the other hand, one expects from our model that a signi\ufb01cant fraction of the GRBs sitting in high density circumburst medium (HnHN) have an extended high NH tail and dominate the GRBs with NH \u22651023cm\u22122. 
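For reference, the Compton-thickness statement above corresponds to an electron-scattering optical depth tau_T = N_H * sigma_T of order unity, i.e. N_H of roughly 1.5e24 cm^-2 (taking one electron per hydrogen nucleus, a simplification on our part):

```python
# Electron-scattering optical depth for a few of the columns discussed in the text.
SIGMA_T = 6.652e-25  # Thomson cross section, cm^2

for N_H in (1e21, 1e23, 1e24, 2e24):
    tau = N_H * SIGMA_T
    label = "  (Compton thick)" if tau >= 1 else ""
    print(f"N_H = {N_H:.0e} cm^-2 -> tau_T = {tau:.2f}{label}")
# columns of 1e23 - 1e24 cm^-2 remain Compton thin; thickness sets in near 1.5e24 cm^-2
```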
Quantitatively, we \ufb01nd that (45%, 3%) of GRBs have NH \u2265(1023, 1024)cm\u22122; it is noted that these two numbers are likely lower bounds due to possible numerical resolution e\ufb00ects. As already noted earlier, it is seen from the right panel that the GRBs in the LnLN group are intervened by highly ionized gas peaking at an average neutral fraction of \u223c10\u22124, with no cases having a neutral fraction exceeding 10\u22121. In contrast, for GRBs in both the HnLN and HnHN \f\u2013 9 \u2013 groups, more than 50% of them have an average neutral fraction greater than \u223c10\u22121. -6 -4 -2 0 2 log (NH /1021cm-2)(Z/ZO \u2022 ) -3 -2 -1 0 log PDF log NHI * 19 (nH * 10 cm-3) log NHI * 19 (nH < 10 cm-3) log NHI < 19 (nH < 10 cm-3) T < 106 K 3 4 5 6 log [K] -2 -1 0 1 log PDF log NHI * 19 (nH * 10 cm-3) log NHI * 19 (nH < 10 cm-3) log NHI < 19 (nH < 10 cm-3) Fig. 5.\u2014 Left panel: shows the PDFs of metallicity weighted total hydrogen column density, (NH/1021cm\u22122)(Z/ Z\u2299, excluding gas with temperature greater than 106K, for the three sub-populations of GRBs: GRBs in (nH, T) = (10\u22122.5cm\u22123, 107.5K) with NHI \u22641019cm\u22122 (red dashed), (nH, T) = (10\u22122.5cm\u22123, 107.5K) with NHI \u22651019cm\u22122 (green dotted), and (nH, T) = (104.0cm\u22123, 103.75K) with NHI \u22651019cm\u22122 (blue solid). The exclusion of \u2265106K gas is to intended for the situation that dust is e\ufb03ciently destroyed in hot gas. According to Draine (2003), AV \u2248(NH/1021cm\u22122)(Z/ Z\u2299. Right panel: shows the PDFs of gas temperature weighted by NHZ, excluding gas with temperature greater than 106K. We now turn to the issue of dust obscuration. The left panel of Figure 5 shows the PDFs of visual extinction AV . It is noted that the simulation does not follow dust formation explicitly. Thus, we have adopted the well known empirical relation between metal column density and visual extinction: AV = (NH/1021cm\u22122)(Z/ Z\u2299) (Draine 2003). While the applicability of the relation derived from local observations is uncertain, detailed analysis of galaxy colors at EoR suggest that the simulated galaxies based on this relations give rise to self-consistent results when comparing to observations (Kimm & Cen 2013; Cen & Kimm 2014). Moreover, direct observations of dust suggest that this relation holds well in other galaxies locally, and galaxies and damped Lyman alpha systems at moderate to high redshift (e.g., Draine et al. 2007; De Cia et al. 2013; Draine et al. 2014; Fisher et al. 2014). Nevertheless, it is possible that the normalization factor in front of the relation is probably uncertain to order of unity. It is evident that a signi\ufb01cant fraction of GRBs in the high HnHL group (blue solid curve) are heavily dust obscured, with (53%,16%) of GRBs in the HnHL group have AV \u2265(1, 10). At the other extreme, we see that the GRBs in the LnLN group (red dashed curve) have negligible dust columns with no case of AV \u22650.3; nevertheless, it is worth pointing out that, even for this set of GRBs 12% has an AV \u22650.03 due largely to dust in high temperature gas. The GRBs in the LnHN group (green dotted curve) situates inbetween the above \f\u2013 10 \u2013 two groups, with a small but non-negligible fraction (7%) at AV > 1. Observationally, the issue of dust in high-z GRB hosts is less than settled. Zafar et al. (2010), based on a re-analysis of the multiepoch data of the afterglow of GRB 050904 at z = 6.3, conclude that there is no evidence of dust. 
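The extinction proxy adopted above, A_V ~ (N_H/10^21 cm^-2)(Z/Z_sun), in minimal form, evaluated for the in-situ absorber of GRB 050904 discussed in the next paragraph (N_HI ~ 10^21.6 cm^-2, Z ~ 10^-2.6 to 10^-1 Z_sun):

```python
# The dust proxy used above, applied to gas below 1e6 K (hotter gas assumed dust-free).
def A_V(N_H, Z_sun_units):
    """Visual extinction from a hydrogen column [cm^-2] and a metallicity [Z_sun]."""
    return (N_H / 1.0e21) * Z_sun_units

# the in-situ absorber toward GRB 050904, as discussed in the following paragraph
for Z in (10**-2.6, 10**-1.0):
    print(f"Z = {Z:.3g} Z_sun -> A_V ~ {A_V(10**21.6, Z):.2f}")
# A_V ~ 0.01 at the metal-poor end and ~0.4 at the metal-rich end of the allowed range
```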
Given that the neutral column density of the in situ DLA for GRB 050904 is NHI = 1021.6cm\u22122 and low metallicity Z = \u22122.6 to \u22121 (Totani et al. 2006; Kawai et al. 2006), an AV \u223c0.01 is possible, if one adopts the lower metallicity value that is statistically allowed in our model (see the right panel of Figure 1). Thus, both the low extinction and a standard ratio of dust to metals are still consistent with the observations. Our model indicates that 9% of GRBs have AV \u22651. Thus, with a z \u22656 GRB sample of size 11, one expects to see one GRBs with nH \u223c104cm\u22123 that is signi\ufb01cantly obscured by dust with AV > 1. This may be testable with SWIFT data relatively soon. The right panel of Figure 5 shows the PDFs of average gas temperature weighted by NHZ, excluding gas with temperature greater than 106K (the exclusion is a crude way to say that dust in gas hotter than 106K is destroyed). The purpose of this plot is to provide an indication the diversity of intervening gas with dust. One notes that the lines of sight of HnHL GRBs contain dust in cold medium (T \u2264104K), whereas those of high LnHN GRBs are dominated by dust residing in gas T \u223c104\u22125K, and the LnLN GRBs are intervened by dust in hotter gas of T \u2265105K. Under the assumption that the hotter gas is presumably produced by shocks, which are more destructive to larger dust grains, one might suggest that dust becomes increasingly grayer from HnHL to LnHN to LnLN. One expectation is that some lines of sight, especially those for the HnLN and HnHN groups the total dust arise from multiple, di\ufb00erent temperature regions. This may provide a physical explanation for the observational indications of multiple dust components (e.g., Zafar et al. 2012). As a side note on the SFR of GRB hosts. Basa et al. (2012) place the star formation rate (SFR) of GRB 080913 at z = 6.7 to be less than 0.9 M\u2299/yr. Berger et al. (2007) obtain an upper bound on the star formation rate of GRB 050904 at z = 6.3 less than 5.7 M\u2299/yr. From the simulations we \ufb01nd that (42%, 57%, 66%) of GRBs occur in galaxies with SFR less than (0.3, 1.0, 3.0) M\u2299/yr. Thus, while simulations and observations are in good agreement, larger data sets are needed to place the comparisons on a solid statistical ground. Finally, we must stress that the analysis performed here has focused on the ISM embedding the GRBs at EoR. The exact details of the state of the IGM at EoR are uncertain both observationally and theoretically. The theoretical di\ufb03culty is in part computational, because we do not have the capability to simulate a large enough volume to capture of the reionization of the IGM selfconsistently, while still having enough resolution for the ISM. The goal of this work is to present the signatures of the ISM theoretically, which is lacking. It may be argued that the properties of ISM in galaxies are somewhat detached from the properties of the IGM on large scales; in other words, the observed spectra of GRB afterglows at EoR may be considered to be imprinted by both ISM and IGM as a linear superposition. Consequently, a proper understanding of the ISM will not only aid in the interpretation of the ISM of galaxies at EoR but also is highly needed for proper interpretation of the properties (neutral fraction, topology, etc) of the IGM at EoR. \f\u2013 11 \u2013 4." 
+ }, + { + "url": "http://arxiv.org/abs/1406.1467v1", + "title": "Gaussian Random Field: Physical Origin of Sersic Profiles", + "abstract": "While the Sersic profile family provide adequate fits for the surface\nbrightness profiles of observed galaxies, the physical origin is unknown. We\nshow that, if the cosmological density field are seeded by random gaussian\nfluctuations, as in the standard cold dark matter model, galaxies with steep\ncentral profiles have simultaneously extended envelopes of shallow profiles in\nthe outskirts, whereas galaxies with shallow central profiles are accompanied\nby steep density profiles in the outskirts. These properties are in accord with\nthose of the Sersic profile family. Moreover, galaxies with steep central\nprofiles form their central regions in smaller denser subunits that possibly\nmerge subsequently, which naturally leads to formation of bulges. In contrast,\ngalaxies with shallow central profiles form their central regions in a coherent\nfashion without significant substructure, a necessary condition for disk galaxy\nformation. Thus, the scenario is self-consistent with respect to the\ncorrelation between observed galaxy morphology and Sersic index. We predict\nfurther that clusters of galaxies should display a similar trend, which should\nbe verifiable observationally.", + "authors": "Renyue Cen", + "published": "2014-06-05", + "updated": "2014-06-05", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction The process of galaxy formation has likely imprinted useful information in the stellar structures. A great amount of e\ufb00ort has been invested in characterizing detailed stellar structures of galaxies of all types, dating back to Plummer (1911) for globular clusters and Reynolds (1913) for the Andromeda, and if one is so inclined, to Kant (1755) who might be the \ufb01rst contemplating the shape of the Milky Way and island universes. In modern times, among the best known examples, the de Vaucouleurs (1948) law surface brightness I(R) \u221de\u2212kR1/4 (where R is radius and k a normalization constant) describes giant elliptical galaxies well, whereas the King (1962) law appears to provide better \ufb01ts for fainter elliptical galaxies; disk galaxies are in most cases described by the exponential disk model (Hodge 1971): I(R) \u221de\u2212kR. The major advantage of Sersic (1968) pro\ufb01le family I(R) \u221de\u2212kR1/n is that they provide an encompassing set of pro\ufb01les with n from less than 1 to as large as 10, including the exponential disk (n = 1) and de Vaucouleurs (n = 4) model. Even at the age of sophisticated hydrodynamic simulations, the physical origin of the Sersic pro\ufb01le family that have well described all galaxies remains enigmatic. This author is of the opinion that the nature of galaxy formation process in the context of modern cosmological structure formation model is perhaps too complex to warrant any possibility of analytic \ufb01ts to be accurate beyond the zero-th order. While e\ufb00orts to characterize deviations from or additions to the standard \ufb01ts are 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1406.1467v1 [astro-ph.GA] 5 Jun 2014 \f\u2013 2 \u2013 not only necessary but also very important to account for rich galaxy data (e.g., Lauer et al. 
1995), it would also seem bene\ufb01cial to construe the basic trend displayed by the wide applicability of the Sersic pro\ufb01le family to enhance our physical understanding of the galaxy formation process. In this Letter we provide a basic physical understanding of the Sersic pro\ufb01le family in the context of the standard cosmological model with gaussian random density \ufb01eld. Our simple analysis provides, for the \ufb01rst time, a self-consistent physical origin for the Sersic pro\ufb01le family. This also opens up the possibilities to explore the physical links to other properties of galaxies, since, for example, it comes natural and apparently inevitable that the steep pro\ufb01led galaxies have a much higher fraction of substructures that form early and interactions/mergers among them would lead to formation of elliptical galaxies, enabling a self-consistent picture. This study is the \ufb01fth paper in the series \u201cOn the Origin of the Hubble Sequence\u201d. 2. Gaussian Random Field and Sersic Pro\ufb01les The standard cosmological constant dominated cold dark matter cosmological model has a number of distinct features. One of the most important is that the initial density \ufb02uctuations are gaussian and random. As a result, the statistical properties are fully determined by a vector quantity, namely, the linear power spectrum of the density \ufb02uctuations, Pk, which is well determined by observations from the microwave experiments and others (e.g., Komatsu et al. 2011). Observational evidence is that allowed deviations from gaussianity are at the level of 10\u22123 and less in the linear regime (Planck Collaboration et al. 2013). In a gaussian random \ufb01eld, di\ufb00erent waves are superimposed on one another in a random fashion, with the ensemble of waves at a given length following gaussian distribution and the square of the mean equal to the amplitude of the power spectrum at that wavelength. Here, a simple illustration is shown to contain rich physics and can already account for the basic trend of the Sersic pro\ufb01les, which, more importantly, are additionally in accord with properties of galaxies other than the pro\ufb01les. Figure 1 shows an example of the formation of a massive galaxy that contains small-scale \ufb02uctuations with large amplitude (left panel) and an example of the formation of a massive galaxy that contains small-scale \ufb02uctuations with small amplitude (right panel). In both panels peaks that are above the horizontal red dot-dashed line would have collapsed by z = 1. Our choice of redshift z = 1 has no material consequence and we expect the generic trends should not depend on that choice. In the left panel we see that, between the two points where the blue dashed curve intersect the horizontal red dot-dashed line, there are three separate density peaks with peak amplitude of 6\u22127. Thus, a signi\ufb01cant portion of the three peaks would have collapsed by redshift z = 4 \u22126 to form three separate galaxies. Note that structures formed at higher redshifts tend to be denser than structures formed at lower redshifts. Therefore, these earlier structures would settle to form the dense central region. 
Although it is probable that the galaxies formed at the three separate peaks subsequently merge to form a dense elliptical galaxy, our conclusion of forming a dense central region \f\u2013 3 \u2013 \u22123 \u22122 \u22121 0 1 2 3 \u22128 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 7 8 9 10 elliptical formation steep central profile with extended envelope spatial scale density fluctuation amplitude long wave fluctuations total fluctuations collapse amplitude \u22123 \u22122 \u22121 0 1 2 3 \u22128 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 7 8 9 10 spatial scale spiral formation shallow central profile with steep outer profile long wave fluctuations total fluctuations collapse amplitude Fig. 1.\u2014 Left panel: shows an example of the formation of a massive galaxy at z = 1 that is overall determined by the large waves indicated by the blue dashed curve. On top of the large linear wave, there is small-scale linear wave of length 1/8 of the large one with a \ufb02uctuation amplitude 1.5 times larger than the large wave. Both long and short waves are chosen to be sinusoidal for the illustration. The sum of the long and short waves is shown as the black solid curve. The horizontal red dot-dashed line of amplitude value 1.68 indicates the amplitude of the \ufb02uctuation that has collapsed by z = 1. Right panel: same as the left panel except the small-scale wave has a \ufb02uctuation amplitude 10 times smaller than the large wave. Note that the examples are shown in 1-d but meant to be in 3-d. in this case does not necessarily require all of them to merge. Moreover, there are two somewhat smaller peaks at x values of \u223c\u22121.5 and \u223c+1.5 of amplitude \u223c4.5, which would have collapsed by redshift z = 2 \u22124. In addition, there two still smaller peaks at x values of \u223c\u22122.5 and \u223c+2.5 of amplitude \u223c2.5, which would have collapsed by redshift z = 1 \u22122. It is reasonable to expect that the four outer small galaxies would accrete onto the central galaxy to form the outer envelope by z = 0. Thus, this con\ufb01guration would form a central dense structure with a steep pro\ufb01le due to the early formation of the central subunits and their subsequent descent to the center (and possible merging), and an extended envelope due to later infall of small galaxies that form in outer regions at some earlier times, resulting in a pro\ufb01le resembling a Sersic pro\ufb01le with n \u226b1. This overall picture seems to resemble two-phase formation scenario for elliptical galaxies from detailed cosmological hydrodynamic simulations (Oser et al. 2010; Lackner et al. 2011). \f\u2013 4 \u2013 In the right panel we see that, between the two points where the blue dashed curve intersect the horizontal red dot-dashed line, there is no signi\ufb01cant substructure. Therefore, the collapse of the central region will be rather coherent without signi\ufb01cant central condensation (i.e., without a stellar bulge). Furthermore, there is no signi\ufb01cant density peak outside the central region that has collapsed; as a result, there is little stellar envelope due to late infall of small galaxies. Thus, this con\ufb01guration would form a galaxy with a shallow central density slope and a very steep outer slope. We suggest that this con\ufb01guration would form a bulge-less spiral galaxy with a pro\ufb01le similar to a Sersic pro\ufb01le with n = 1. 
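A toy version of the two-wave construction behind Figure 1, assuming nothing beyond what the caption states: a long sinusoidal mode plus a mode of 1/8 its wavelength whose amplitude is either 1.5 times larger (left panel) or 10 times smaller (right panel), with collapse assigned to peaks above delta_c = 1.68. The absolute amplitude and wavelength below are our illustrative choices, tuned so the central peak of the left-panel case lands near the 6-7 quoted above.

```python
# Toy two-wave field: count contiguous regions above the collapse threshold.
import numpy as np

def count_collapsed_peaks(amp_long, ratio_short, delta_c=1.68, wavelength=8.0):
    x = np.linspace(-3.0, 3.0, 4001)
    delta = (amp_long * np.cos(2 * np.pi * x / wavelength)
             + amp_long * ratio_short * np.cos(2 * np.pi * x / (wavelength / 8.0)))
    above = delta > delta_c
    # number of contiguous stretches above threshold (rising edges plus left boundary)
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

print("left-panel-like  :", count_collapsed_peaks(2.6, 1.5))
# several sub-peaks clear the threshold: central clumps plus outer satellites
print("right-panel-like :", count_collapsed_peaks(2.6, 0.1))
# a single coherent region collapses, with no significant substructure
```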
A corollary is that the con\ufb01guration depicted in the right panel would occur in a \u201cquiet\u201d environment, which may be quantitatively described as having a small pair-wise velocity dispersion (Davis & Peebles 1983) or a high Mach number (Suto et al. 1992). Our local environment appears to belong to this category. Perhaps this explains why there is preponderence of giant bulge-less galaxies in our neighborhood (Kormendy et al. 2010). This does not necessarily suggest that the observed large fraction (\u223c50%) of large bulge-less galaxies in our local universe is representative for the universe as a whole. Our own expectation is that the fraction of large bulge-less galaxies, averaged over the entire universe, will be substantially lower than that seen in the very local neighborhood. Future surveys with resolutions as good as those for local galaxies now can check this. It is easy to imagine a variety of con\ufb01gurations that may fall in-between these two (nearly) bookend examples. Since the gaussian density \ufb02uctuation is \u201ccompensated\u201d in the sense that the large density peak tends to be sandwiched by a pair of troughs, the expected trend is this: a larger degree of central substructure is accompanied by a larger degree of substructure in the outskirts, whereas a lesser degree of central substructure is accompanied by a lesser degree of substructure in the outskirts. Since the total density \ufb02uctuations are linear combinations of each independent waves, one can generalize the con\ufb01gurations from two waves to an arbitrary number of waves but the trend seen in Figure 1 remains. In short, the generic trend obtained essentially hinges on two important features of the gaussian random \ufb01eld: each density wave is compensated and independent. 3. Discussion and" + }, + { + "url": "http://arxiv.org/abs/1405.0516v1", + "title": "Evolution of Cold Streams and Emergence of the Hubble Sequence", + "abstract": "A new physical framework for the emergence of the Hubble sequence is\noutlined, based on novel analyses performed to quantify the evolution of cold\nstreams of a large sample of galaxies from a state-of-the-art ultra-high\nresolution, large-scale adaptive mesh-refinement hydrodynamic simulation in a\nfully cosmological setting. It is found that the following three key physical\nvariables of galactic cold inflows crossing the virial sphere substantially\ndecrease with decreasing redshift: the number of streams N_{90} that make up\n90% of concurrent inflow mass flux, average inflow rate per stream dot M_{90}\nand mean (mass flux weighted) gas density in the streams n_{gas}. Another key\nvariable, the stream dimensionless angular momentum parameter lambda, instead\nis found to increase with decreasing redshift. Assimilating these trends and\nothers leads naturally to a physically coherent scenario for the emergence of\nthe Hubble sequence, including the following expectations: (1) the predominance\nof a mixture of disproportionately small irregular and complex disk galaxies at\nz>2 when most galaxies have multiple concurrent streams, (2) the beginning of\nthe appearance of flocculent spirals at z~1-2 when the number of concurrent\nstreams are about 2-3, (3) the grand-design spiral galaxies appear at z<1 when\ngalaxies with only one major cold stream significantly emerge. These expected\ngeneral trends are in good accord with observations. 
Early type galaxies are\nthose that have entered a perennial state of zero cold gas stream, with their\nabundance increasing with decreasing redshift.", + "authors": "Renyue Cen", + "published": "2014-05-02", + "updated": "2014-05-02", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction Despite the commendable successes, a systematic physical theory for the origin of the Hubble (1926) sequence the holy grail of galaxy formation remains elusive. While the greatly increased richness in observational data has prompted revisions in the classi\ufb01cation (van den Bergh 1976; Sandage & Binggeli 1984; Cappellari et al. 2011; Kormendy & Bender 2012) that have provided more coherence along each sequence and counterpart identi\ufb01cations across di\ufb00erent sequences (e.g., S0 versus S sequences), it has not, for the most part, signi\ufb01cantly improved the clarity of our physical understanding of the Hubble sequence. It seems that this perpetual state of perplexity does not stem from lack of freedom to parameterize input physics, such as in the semi-analytic and other 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1405.0516v1 [astro-ph.GA] 2 May 2014 \f\u2013 2 \u2013 phenomenological approaches. Rather, there are key physical ingredients that are not understood to even be parameterized. One such physical ingredient is the environment, which is a hard problem to address computationally, due to the twin requirements of capturing large-scale environment and small-scale structure, and di\ufb03cult to parameterize due to diversity. Our recent analysis has shown that the quenching and color migration for the vast majority of galaxies may be primarily due to environment e\ufb00ects (Cen 2014b), which has recently received signi\ufb01cant observational support (e.g., Lin et al. 2014; Carollo et al. 2014; Muzzin et al. 2014). There are enough both direct evidence and theoretical insights gained in galaxy interactions (e.g., Mihos & Hernquist 1996) or analytic analyses (e.g., Fall & Efstathiou 1980; Mo et al. 1998) to conclude that angular momentum is another key physical ingredient in galaxy formation theory. The possibility that the angular momentum dynamics of gas accretion is complex in a cosmological setting and di\ufb00erent from that of dark matter (Bullock et al. 2001) is under-appreciated. Recent studies begin to show that angular momentum dynamics of gas and stars in the inner regions are only loosely, at best, related to that of dark matter halos (e.g., Hahn et al. 2010; Cen 2014a). One could suggest that the diversity of galaxies and its evolution the Hubble sequence and its emergence may be substantially governed by the complexity and trends of dynamics of cold (T < 105K) gas streams with respect to their number, mass \ufb02ux, density and angular momentum, since they provide the main fuel for galaxy formation. Such suggestion is, so far, without formal proof. This study provides analyses on cold streams, utilizing a large sample of ultra-highly resolved galaxies from an ab initio Large-scale Adaptive-mesh-re\ufb01nement Omniscient Zoom-In cosmological hydrodynamic simulation (LAOZI) of the standard cold dark matter model. It is shown that the cold gas accretion \ufb02ows display physical trends that can provide a self-consistent account for the origin of the emergence of the Hubble sequence. This study is built on insights from recent innovative work (Kere\u02c7 s et al. 
2005; Dekel & Birnboim 2006; Nelson et al. 2013) that suggests, to varying degrees, a two-mode gas accretion onto galaxies, in contrast to the classic description of gas cooling following virialization heating (Rees & Ostriker 1977; Silk 1977; Binney 1977; White & Rees 1978). There are currently signi\ufb01cant quantitative di\ufb00erences concerning the cold streams from di\ufb00erent simulation groups (see references above), which may be, in part, due to di\ufb00erent tracking methods. The method for identifying cold gas streams described in \u00a72 may be used to enable a uniform comparison. 2. Cosmological Simulations and Identi\ufb01cation of Cold Streams The reader is referred to Cen (2014b) for detailed descriptions of our simulations. Brie\ufb02y, a zoom-in region of comoving size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 is embedded in a 120h\u22121Mpc periodic box and resolved at 114h\u22121pc physical. Cosmological parameters are from WMAP7 (Komatsu et al. 2011): \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100h km s\u22121Mpc\u22121 = 70 km s\u22121Mpc\u22121 and n = 0.96. The zoom-in region is centered on a cluster of mass of \u223c3 \u00d7 1014 M\u2299at z = 0 hence represents a 1.8\u03c3 \ufb02uctuation for the volume. As a result, the development of structure formation is somewhat more advanced compared to that of the cosmic mean, and we take that into account when drawing conclusions with respect to the universe as a whole. Equations governing motions of dark matter, gas and stars, and thermodynamic state of gas are followed, using the adaptive mesh \f\u2013 3 \u2013 re\ufb01nement cosmological hydrodynamic code Enzo (Bryan et al. 2014). The simulations include a metagalactic UV background (Haardt & Madau 2012) with self-shielding (Cen et al. 2005), a metallicity-dependent radiative cooling (Cen et al. 1995). Star particles are created in cells that satisfy a set of criteria (Cen & Ostriker 1992), essentially equivalent to the Kennicutt (1998) law. Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c106 M\u2299. Supernova feedback from star formation is modeled following Cen et al. (2005). At any epoch stellar particles are grouped using HOP (Eisenstein & Hut 1998) to create galaxy catalogs. For each galaxy we have its exact star formation history, given its member stellar particles formation times. None of the galaxies used in the analysis contains more than 1% in mass, within the virial radius, of dark matter particles other than the \ufb01nest particles. Galaxy catalogs are constructed from z = 0.62 to z = 1.40 at a redshift increment of \u2206z = 0.02 and from z = 1.40 to z = 6 at a redshift increment of \u2206z = 0.05. Thus, when we say, for example, galaxies of stellar masses 1010\u221211 M\u2299in the redshift range z = 2 \u22123, it means that we include galaxies with stellar masses from 1010 to 1011 M\u2299from 21 snapshots (z = 2, 2.05, ..., 2.95, 3). For the four redshift ranges analyzed, z = (0.62 \u22121, 1 \u22122, 2 \u22123, 3 \u22124), there are (5754, 9395, 4522, 1507) galaxies of stellar mass in the range 1010\u221211 M\u2299, and (628, 964, 232, 28) galaxies of stellar mass in the range 1011\u221212 M\u2299. Proper identi\ufb01cation of gas streams has not been demonstrated so far. Visual inspection may be able to pick out prominent ones, although it lacks the ability to separate out multi streams and becomes impractical for large samples. 
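A quick check of the catalog bookkeeping described a few sentences above (snapshots every dz = 0.02 over 0.62 <= z <= 1.40 and every dz = 0.05 over 1.40 <= z <= 6), confirming that a quoted range such as z = 2-3 pools 21 catalogs:

```python
# Rebuild the snapshot redshift grid quoted in the text and count the z = 2-3 catalogs.
import numpy as np

z_low  = np.round(np.arange(0.62, 1.40 + 1e-9, 0.02), 2)
z_high = np.round(np.arange(1.40, 6.00 + 1e-9, 0.05), 2)
z_all  = np.unique(np.concatenate([z_low, z_high]))

in_2_3 = z_all[(z_all >= 2.0) & (z_all <= 3.0)]
print(len(in_2_3), in_2_3[:3], in_2_3[-3:])   # 21 snapshots: 2.0, 2.05, ..., 2.95, 3.0
```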
Real-space search for \ufb01lamentary structures for large-scale structure have enjoyed some successes (e.g., Bond et al. 2010) but the following two issues make them less usable for gas streams. First, non-radial streams could easily bend during travel, maybe resembling things that may look like spirals; even radial streams will bend due to \ufb02uid drag. Second, gas along a stream is generally broken up (due to thermal and gravitational instabilities as well as other interactions) to look more like a pearl necklace than a creek. We have explored using some constants of motion to devise an automated scheme and \ufb01nally focused on the angular momentum vector. We \ufb01nd that the following two variables the amplitude of the total speci\ufb01c angular momentum (J) and the cosine of the angle [cos(\u03b8)] between the total speci\ufb01c angular momentum and a \ufb01xed vector (say, z direction) de\ufb01ne a parameter space for identifying and separating out co-eval, distinct streams. Operationally, for a galaxy we accumulate in\ufb02ow gas \ufb02ux in the radial range (1 \u22121.3)rv (rv=virial radius) in the J \u2212cos(\u03b8) plane with 50 \u00d7 50 grid points, spanning uniformly the J range [0, 20] \u00d7 104 km/s kpc and cos(\u03b8) range [\u22121, 1]. In\ufb02ow gas is de\ufb01ned to be gas with the radial component of its velocity pointing to the center. Only gas with T \u2264105K is included. The mass \ufb02ux of each \ufb02uid cell i in the radial range (1 \u22121.3)rv is computed as 4\u03c0\u03c1ivi\u2206x2 i /\u03a3 \u2206x2 i r2 i , where \u03c1i is gas density, vi the radial velocity, \u2206xi the cell size, ri radial distance from the center, and the sum is performed over all cells in the radial range. The 4\u03c0 and the sum term serve to make sure that \ufb02uxes are properly normalized, when all the gas cells in the radial shell are collected, regardless of the thickness of the radial shell. Once mass \ufb02uxes are accumulated in the 2-d parameter plane, smoothing is applied to smooth out \ufb02uctuations among adjacent entries in the 2-d phase plane. The choice of the smoothing window size does not alter results in material way, as long as over smoothing is avoided; a 3-point boxcar smoothing is used. With the smoothed \ufb02ux map, we \f\u2013 4 \u2013 J (104 km/s kpc) cos(e) 1 2 3 0 1 2 3 4 5 6 7 8 9 \u22121 \u22120.5 0 0.5 1 log cold gas inflow flux (Msun/yr) \u22124 \u22123 \u22122 \u22121 0 1 2 Fig. 1.\u2014 Top panel: shows a 3-d visualization of a galaxy of stellar mass 4.7 \u00d7 1011 M\u2299at z = 3. The box has a width of 2.6 times the virial radius. The (yellow,purple) isodensity surfaces have values (\u223c10\u22122, \u223c10\u22121)cm\u22123. Bottom panel: shows the gas in\ufb02ow \ufb02ux in the radial range (1 \u22121.3)rv, in the two-dimensional J-cos(\u03b8) phase space. The total gas in\ufb02ow rate of the galaxy is 388 M\u2299yr\u22121 (and star formation rate of 254 M\u2299yr\u22121) and nine signi\ufb01cant streams are identi\ufb01ed. The top three streams make up 90% of the total in\ufb02ow rate and are labelled with numbers (1, 2, 3), with their respective in\ufb02ow rates being (211, 79, 61) M\u2299yr\u22121. \f\u2013 5 \u2013 employ a procedure analogous to the DENMAX scheme used to identify dark matter halos (Gelb & Bertschinger 1994). Each entry is propagated along the steepest uphill gradient until it reaches a local maximum of \ufb02ux and is said to belong to that local maximum. 
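A sketch of the phase-space identification just described: cold-inflow flux accumulated on a 50 x 50 grid in (J, cos theta), with J in [0, 20] x 10^4 km/s kpc and cos theta in [-1, 1], a 3-point boxcar smoothing, and a DENMAX-like walk of every occupied cell uphill to its local maximum. The per-cell flux weights passed in are assumed precomputed (the paper's normalized estimator is simplified away here), and the input arrays are hypothetical.

```python
# Sketch of the J - cos(theta) stream finder described in the text.
import numpy as np
from scipy.ndimage import uniform_filter

def stream_map(J, cos_theta, flux, n_bins=50, J_max=20.0):
    """2-D inflow-flux histogram over J in [0, J_max] (1e4 km/s kpc), cos theta in [-1, 1]."""
    H, _, _ = np.histogram2d(J, cos_theta, bins=n_bins,
                             range=[[0.0, J_max], [-1.0, 1.0]], weights=flux)
    return uniform_filter(H, size=3, mode="nearest")   # 3-point boxcar, for robust peaks

def assign_to_peaks(H):
    """Walk every occupied cell uphill to its local maximum; return an integer label map."""
    n, m = H.shape
    labels = -np.ones((n, m), dtype=int)
    peaks = {}

    def climb(i, j):
        path = []
        while True:
            path.append((i, j))
            window = H[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            di, dj = np.unravel_index(np.argmax(window), window.shape)
            ni, nj = max(i - 1, 0) + di, max(j - 1, 0) + dj
            if (ni, nj) == (i, j):                      # reached a local maximum
                lab = peaks.setdefault((i, j), len(peaks))
                for p in path:
                    labels[p] = lab
                return
            i, j = ni, nj                               # exact ties are ignored here

    for i in range(n):
        for j in range(m):
            if H[i, j] > 0 and labels[i, j] < 0:
                climb(i, j)
    return labels

def stream_fluxes(H, labels):
    """Total inflow rate per identified stream, largest first."""
    totals = [H[labels == lab].sum() for lab in range(labels.max() + 1)]
    return np.sort(np.array(totals))[::-1]
```

Rank-ordering the per-stream totals returned by stream_fluxes is the step that, for the Figure 1 galaxy, yields nine significant streams with the top three carrying (211, 79, 61) M_sun/yr; the N90 bookkeeping built on top of this is sketched after the next paragraph.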
All entries in the 2-d phase plane belonging to a same maximum are collected together to de\ufb01ne one distinct stream, with a number of attributes, including mean location in the parameter plane, total \ufb02ux, \ufb02ux-weighted gas density, temperature. We rank order the streams according to their \ufb02uxes, and de\ufb01ne N90 to be the top number of streams that make up 90% of total concurrent cold gas in\ufb02ow rate. If the 90% falls between two streams, we linearly interpolate to \ufb01nd N90, which hence could be non-integer. When there is no signi\ufb01cant stream, N90 = 0. Figure 1 demonstrates how well this phase-space identi\ufb01cation scheme works. For this galaxy at z = 3 the identi\ufb01cation scheme \ufb01nds nine signi\ufb01cant streams with the top three streams making up 90% of the cold in\ufb02ux. Even through it is not easy to discern visually all streams, it appears that nine is consistent with the 3-d rendering in the top panel. The fact that the scheme picks out nine streams in this complex setting is a convincing demonstration of its e\ufb03cacy. It is evident that there are indeed three major streams around the virial sphere, seen as three prominent yellow tubes with purple spines. Part of the motivation of this paper is to demonstrate this method of cold stream identi\ufb01cation in a complex cosmological setting that may be used by other authors. 3. Results Figure 2 shows the PDF of the number of streams (N90). Three most important trends are immediately visible. First, larger galaxies tend to have more streams at z > 2, although for the two mass ranges considered the di\ufb00erences become insigni\ufb01cant at z \u22642. Second, N90 steadily and signi\ufb01cantly decreases with decreasing redshift. The median N90 is (2.3, 4.0) for (1010 \u22121011, 1011 \u2212 1012) M\u2299galaxies at z = 3 \u22124, which becomes (2.0, 3.5) at z = 2 \u22123, (1.6, 1.9) at z = 1 \u22122 and (1.0, 1.6) at z = 0.62 \u22121. Third, the rate of decrease of N90 with decreasing redshift appears to be faster for the higher mass galaxies. Figure 3 shows the PDF of cold in\ufb02ow rate per stream, \u02d9 M90, de\ufb01ned to be 90% of the total cold accretion rate divided by N90. There is a signi\ufb01cant decline with decreasing redshift, with the median \u02d9 M90 being (33, 52) M\u2299/yr for (1010 \u22121011, 1011 \u22121012) M\u2299galaxies at z = 3 \u22124, declining to (21, 50) M\u2299/yr at z = 2 \u22123, (13, 20) M\u2299/yr at z = 1 \u22122 and (5, 8) M\u2299/yr at z = 0.62 \u22121. The rapid decrease of both \u02d9 M90 and N90 (seen in Figure 2) with decreasing redshift makes it clear that the total cold gas in\ufb02ow rate has experienced a very dramatic decline with decreasing redshift a factor of \u223c10 from z = 3 \u22124 to z = 0.62 \u22121 at a given galaxy mass. This is consistent with the decline of the global evolution of star formation rate density seen in our simulations (Cen 2011) and observations (Hopkins & Beacom 2006). Analysis by Conselice et al. (2013) suggests that 66 \u00b1 20 of star formation be due to cold accretion, which would be further increased considering inevitable signi\ufb01cant out\ufb02ows, fully consistent with our predictions, although it is noted that they can not di\ufb00erentiate between accretion of cold streams or gas cooling from the hot halo. Simulations indicate, not shown here, the cold gas in\ufb02ow rate is on the order of and on average exceeds the star formation rate. 
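A minimal sketch of the N90 bookkeeping defined above, with the linear interpolation applied when the 90% mark falls between two rank-ordered streams (our reading of that rule; a single stream carrying all of the inflow gives 0.9 under this convention):

```python
# N90: number of top-ranked streams that carry 90% of the total cold inflow.
import numpy as np

def n90(stream_fluxes, frac=0.9):
    f = np.sort(np.asarray(stream_fluxes, dtype=float))[::-1]
    if f.size == 0 or f.sum() <= 0:
        return 0.0                                  # no significant stream
    cum = np.cumsum(f) / f.sum()
    k = int(np.searchsorted(cum, frac))             # first index with cum >= frac
    prev = cum[k - 1] if k > 0 else 0.0
    return k + (frac - prev) / (cum[k] - prev)      # interpolate between k and k+1 streams

# the Figure 1 example: total inflow 388 Msun/yr, top three streams (211, 79, 61);
# the six minor fluxes below are invented only to fill out the nine streams
fluxes = [211, 79, 61, 12, 8, 6, 5, 3, 3]
print(round(n90(fluxes), 2))   # ~2.97, i.e. the top three streams carry ~90% of the inflow
```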
Fig. 2.— shows the probability distribution functions (PDFs) of N90 in four separate redshift ranges, z = 3−4 (top-left panel), z = 2−3 (top-right panel), z = 1−2 (bottom-left panel) and z = 0.62−1 (bottom-right panel). In each panel, two different stellar mass ranges are shown, 10^10−10^11 M⊙ (blue histograms) and 10^11−10^12 M⊙ (red histograms). The vertical dashed lines indicate the median of the PDF of the same color. Figure 4 shows the PDF of the dimensionless spin parameter λ (≡ $j/\sqrt{2GM_v r_v}$) for the individual streams in the top N90, where j is the mass flux-weighted mean specific angular momentum of a stream, and M_v and r_v are the virial mass and radius of the galaxy. One feature stands out: lower mass galaxies (blue histograms) tend to have streams with higher λ than more massive galaxies (red histograms). Whether this trend has some bearing on the dominance of elliptical galaxies at the high mass end should be clarified with further studies. For the higher-mass galaxies of 10^11−10^12 M⊙ (red histograms), λ evolves little over the entire redshift range, whereas the less massive subset (10^10−10^11 M⊙, blue histograms) displays a steady increase of λ from z = 4 to z = 0.62. It is intriguing that the fraction of λ exceeding 1 is substantial at z ≤ 2, which is likely instrumental to the emergence of large-scale spiral structures below z = 2. Figure 5 shows the PDF of the mean density of inflowing cold gas. Three trends are noted. First, the stream density depends strongly on redshift, with the median being ∼10^−2 cm^−3 at z = 2−4, ∼10^−3−10^−2.5 cm^−3 at z = 1−2 and 10^−3.5−10^−3 cm^−3 at z = 0.62−1. Second, while the more massive galaxies, on average, tend to have somewhat higher stream gas density than less massive galaxies at lower redshift (z = 0.6−1), the difference gradually diminishes towards higher redshift. This particular trend, while slightly puzzling, can be reconciled if there is a natural selection effect whereby strong streams can survive in the midst of a gravitationally heated environment. Third, at z = 0.62−1 there is a dramatic increase of galaxies with very low density streams, which likely reflects the increased importance of hot accretion at low redshift. Fig. 3.— shows the PDF of the cold inflow rate per stream, Ṁ90, defined to be 90% of the total cold accretion rate divided by N90, for four separate redshift ranges, z = 3−4 (top-left panel), z = 2−3 (top-right panel), z = 1−2 (bottom-left panel) and z = 0.62−1 (bottom-right panel).
In each panel, two di\ufb00erent stellar mass ranges are shown, 1010 \u22121011 M\u2299(blue histograms) and 1011 \u22121012 M\u2299 (red histograms). The vertical dashed lines indicate the median of the PDF of the same color. 4. A New Physical Scenario for the Emergence of the Hubble Sequence The quantitative, new characterizations and trends presented in \u00a73 on cold gas streams their number, mass \ufb02ux, density and angular momentum provide the physical basis to construct a working framework. Rather than detailed quantitative descriptions, which are beyond the scope of this Letter and will be carried out separately, we provide a set of three key physical elements as a useful guide to investigating, in the context of the standard cold dark matter model, the general morphological trends of galaxies with redshift the emergence of the Hubble sequence. A consequential but necessary ansatz is that the formation of prominent spiral structures as well as star formation in galaxies have cosmological origins and are primarily fed by cold streams. \u2022 Origin of Small, Clumpy Galaxies at z > 2 While galaxy mergers and interactions may play varying roles, ultimately, the morphological traits of galaxy formation are expected to be largely governed by the nature of gas supply and dynamics, with feedback perhaps playing a role of regulation of the quantity of star formation. Given that most galaxies at z > 2 have N90 \u22652 cold gas streams of high gas density (ngas) that is more conducive to fragmentations (e.g., Dekel et al. 2009b), the expectation is that feeding of and interactions between multiple concurrent streams at high redshift would result in a population of galaxies with fragmented, clumpy and frequently multiple (gaseous and stellar) disks. This is in line with the observed increasing dominance of a mixture of disk-like, irregular and clumpy galaxies towards high redshift (e.g., F\u00a8 orster Schreiber et al. 2009; \f\u2013 8 \u2013 0 1 2 3 4 0 0.1 0.2 PDF z=3\u22124 1010\u221211 1011\u221212 0 1 2 3 4 0 0.1 0.2 z=2\u22123 1010\u221211 1011\u221212 0 1 2 3 4 0 0.1 0.2 h PDF z=1\u22122 1010\u221211 1011\u221212 0 1 2 3 4 0 0.1 0.2 h z=0.62\u22121 1010\u221211 1011\u221212 Fig. 4.\u2014 shows the PDF of \u03bb (\u2261j/\u221a2GMvrv) for individual streams in the top N90, for four separate redshift ranges, z = 3 \u22124 (top-left panel), z = 2 \u22123 (top-right panel), z = 1 \u22122 (bottomleft panel) and z = 0.62 \u22121 (bottom-right panel). In each panel, two di\ufb00erent stellar mass ranges are shown, 1010 \u22121011 M\u2299(blue histograms) and 1011 \u22121012 M\u2299(red histograms). The vertical dashed lines indicate the median of the PDF of the same color. Chevance et al. 2012; Murata et al. 2014). Interactions of streams are e\ufb00ective at producing low angular momentum gas, with the expectation that galaxies at high redshift are disproportionately small in size compared to their low redshift counterparts, a trend that is observed (e.g., Trujillo et al. 2006) and seen in simulations (e.g., Joung et al. 2009). \u2022 Emergence of Spiral Structures at z \u22642 Galaxies have multiple concurrent cold streams (N90) of high accretion rates ( \u02d9 M90), lower angular momenta (\u03bb) and high gas densities (ngas) at z > 2, each of which is detrimental to the formation of grand spiral structures. It appears that nature has arranged against grand design spiral formation at high redshift with plenty of insurance. 
While one can not come up with a set of su\ufb03cient conditions for the emergence of grand design spirals, it seems physically reasonable to assume that not having more than one concurrent major cold streams is requisite for the emergence of grand design spirals. Our analysis indicates that this condition is expected to occur at z \u22641, suggesting that major spiral galaxies begin to emerge at z \u22641. The signi\ufb01cantly larger \u03bb for galaxies in the stellar mass range 1010\u221211 M\u2299than 1011\u221212 M\u2299(see Figure 4) is interesting, implying that the largest galaxies in the universe at any redshift possess less favorable conditions to form large spirals. Between z = 1 \u22122, about one half of the galaxies have one or two concurrent streams, which we suggest give rise to \ufb02occulent spirals stemming from a collection of disjoint but relatively frequent in\ufb02ow streams. These expectations are in agreement with extant observational indications (e.g., Elmegreen & Elmegreen 2014). \f\u2013 9 \u2013 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 0 0.1 PDF z=3\u22124 1010\u221211 1011\u221212 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 0 0.1 z=2\u22123 1010\u221211 1011\u221212 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 0 0.1 0.2 0.3 log ngas (cm\u22123) PDF z=1\u22122 1010\u221211 1011\u221212 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 0 0.1 0.2 0.3 log ngas (cm\u22123) z=0.62\u22121 1010\u221211 1011\u221212 Fig. 5.\u2014 shows the PDF of mean density of in\ufb02ow cold gas in four separate redshift ranges, z = 3\u22124 (top-left panel), z = 2\u22123 (top-right panel), z = 1\u22122 (bottom-left panel) and z = 0.62\u22121 (bottomright panel). In each panel, two di\ufb00erent stellar mass ranges are shown, 1010 \u22121011 M\u2299(blue histograms) and 1011 \u22121012 M\u2299(red histograms). The mean density is averaged over all streams for each individual galaxy, weighted by in\ufb02ow mass \ufb02uxes of individual streams. The vertical dashed lines indicate the median of the PDF of the same color. \u2022 Conditions for Early Type Galaxy Formation The physical conditions for the emergence of early type galaxies are naturally diametrically opposed to those of irregular galaxies. For early type galaxies there is no cold gas stream with no recurrence. This condition is physically more natural than the proposed transition to hot accretion based on halo mass threshold (Kere\u02c7 s et al. 2005; Dekel & Birnboim 2006; Nelson et al. 2013), which would be inconsistent with signi\ufb01cant star formation in massive galaxies at high redshift (Dekel et al. 2009a). High density environment is shown to be a good proxy for the emergence of early type galaxies (Cen 2014b). Because of the association of massive halos with high overdensities of large-scale structure, more massive early type galaxies are expected to have emerged earlier, consistent with observations (e.g., Mortlock et al. 2013). For the same reason, early type galaxies are expected to be somewhat older in clusters than in \ufb01eld, in agreement with observations (e.g., Thomas et al. 2005). One also expects that, while early type galaxies occur at all redshifts, their abundance is expected to increase with decreasing redshift as more regions become dynamically hot, in agreement with observations (e.g., Renzini 2006). 
However, below z \u223c1 the rate of increase of the abundance of giant ellipticals is expected to drop o\ufb00, as the nonlinear Mnl starts to signi\ufb01cantly exceed the mass scales of giant ellipticals, in agreement with observations (e.g., Borch et al. 2006). \f\u2013 10 \u2013 The analysis program yt (Turk et al. 2011) is used to perform some of the analysis. Computing resources were in part provided by the NASA HighEnd Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. This work is supported in part by grant NASA NNX11AI23G." + }, + { + "url": "http://arxiv.org/abs/1403.5274v1", + "title": "Frequent Spin Reorientation of Galaxies due to Local Interactions", + "abstract": "We study the evolution of angular momenta of ($M_*=10^{10}-10^{12}\\msun$)\ngalaxies utilizing large-scale ultra-high resolution cosmological hydrodynamic\nsimulations and find that spin of the stellar component changes direction\nfrequently, caused by major mergers, minor mergers, significant gas inflows and\ntorques by nearby systems. The rate and nature of change of spin direction can\nnot be accounted for by large-scale tidal torques, because the latter fall\nshort in rates by orders of magnitude and because the apparent random swings of\nthe spin direction are inconsistent with alignment by linear density field. The\nimplications for galaxy formation as well as intrinsic alignment of galaxies\nare profound. Assuming the large-scale tidal field is the sole alignment agent,\na new picture emerging is that intrinsic alignment of galaxies would be a\nbalance between slow large-scale coherent torquing and fast spin reorientation\nby local interactions. What is still open is whether other processes, such as\nfeeding galaxies with gas and stars along filaments or sheets, introduce\ncoherence for spin directions of galaxies along the respective structures.", + "authors": "Renyue Cen", + "published": "2014-03-20", + "updated": "2014-03-20", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.GA" + ], + "main_content": "Introduction The angular momentum or spin of galaxies is a physical quantity that is far from being fully understood but is of fundamental importance to galaxy formation and cosmological applications. While N-body simulations have shed useful light on spin properties of dark matter halos (e.g., Vitvitska et al. 2002), it is expected that, given the vastly di\ufb00erent scales between the stellar component and dark matter halo component and di\ufb00erent physical processes governing stellar, gas and dark matter components, the angular momentum dynamics of galaxies may be quite di\ufb00erent and not necessarily inferable from N-body simulations with any reasonable accuracy. We herewith perform a detailed analysis of the dynamics of spin of galaxies in a full cosmological context, utilizing ab initio LAOZI cosmological hydrodynamic simulations of the standard cold dark matter model (Cen 2014) with an unprecedented 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1403.5274v1 [astro-ph.CO] 20 Mar 2014 \f\u2013 2 \u2013 galaxy sample size and ultra-high numerical resolution. This paper is the second in the series \u201cOn the Origin of the Hubble Sequence\u201d. 2. Method The reader is referred to Cen (2014) for detailed descriptions of our simulations and validations. Brie\ufb02y, we perform cosmological simulations with the adaptive mesh re\ufb01nement hydrocode, Enzo (The Enzo Collaboration et al. 
2013). The periodic box has a size of 120 h^−1 Mpc, within which a zoom-in box of a comoving size of 21 × 24 × 20 h^−3 Mpc^3 is embedded. The resolution is better than 114 h^−1 pc (physical). The cosmological parameters are the same as the WMAP7-normalized (Komatsu et al. 2010) ΛCDM model. We identify galaxies using the HOP algorithm (Eisenstein & Hut 1998) operating on the stellar particles. A sample of ≥300 galaxies with stellar masses greater than 10^10 M⊙ is used. For each galaxy at z = 0.62 a genealogical line is constructed from z = 0.62 to z = 6 by connecting galaxy catalogs at a series of redshifts. Galaxy catalogs are constructed from z = 0.62 to z = 1.40 at a redshift increment of Δz = 0.02 (corresponding to Δt = 81 Myr at z = 1) and from z = 1.40 to z = 6 at a redshift increment of Δz = 0.05 (corresponding to Δt = 80 Myr at z = 2). The parent of each galaxy is identified with the one in the next higher redshift catalog that has the most overlap in stellar mass. We compute the specific angular momentum vector $\vec{j}_i$ for the stars of each galaxy within a radius r at each output snapshot i. The time derivative of $\vec{j}_i$ is computed as $|d\vec{j}_i/dt| \equiv |\vec{j}_{i+1} - \vec{j}_i|/(t_{i+1} - t_i)$ (1). One notes that, due to the finite number of outputs for our simulation data, $d\vec{j}_*/dt$ is somewhat underestimated in cases of rapid changes of angular momentum on time scales shorter than our snapshot intervals. A similar definition for gas is also used. We denote t_1 as the time required to change the spin vector by 1 degree of arc at each snapshot for each galaxy, defined as $t_1 \equiv \frac{\pi}{180}(t_{i+1} - t_i)\left[\arccos(\hat{j}_{i+1} \cdot \hat{j}_i)\right]^{-1}$ (2), where $\hat{j}_i$ is the unit vector of $\vec{j}_i$. For the first time, we address the evolution of the spin of galaxies statistically in a cosmological setting. All length units below will be physical. 3. Results Figure 1 shows the dot product of the unit vector of the specific angular momentum of the central 3 kpc radius stellar region and an arbitrary fixed unit vector as a function of redshift, in blue. Fig. 1.— shows in blue the dot product of the unit vector of the specific angular momentum of the central 3 kpc stellar region and an arbitrary fixed (in time) unit vector as a function of redshift. Each panel shows a random galaxy with its final stellar mass at z = 0.62 as indicated at the top of the panel. Also shown in each panel as a red dashed line is the logarithm of the stellar mass with an arbitrary vertical offset. It is visible that a significant increase in stellar mass within a short period of time (i.e., mergers) is often accompanied by dramatic changes in the angular momentum vector. We note that the angular momentum vector of a galaxy over its history displays a substantial amount of change even in "quiet" times without major mergers. Ensuing analysis provides some physical insight into this.
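For concreteness, Eqs. (1) and (2) amount to the following per-snapshot bookkeeping. This is a minimal sketch, assuming particle positions and velocities in kpc and km/s and masses in M⊙; the particle selection (stars within the central 3 kpc), the array layout and the function names are my own choices, not a description of the analysis pipeline actually used.

```python
# Minimal sketch of Eqs. (1)-(2); numpy only.
import numpy as np

def specific_j(pos, vel, mass, center, vcen, rmax=3.0):
    """Mass-weighted specific angular momentum (kpc km/s) of particles within rmax (kpc)."""
    dx, dv = pos - center, vel - vcen
    inside = np.linalg.norm(dx, axis=1) < rmax
    L = np.cross(dx[inside], dv[inside]) * mass[inside, None]
    return L.sum(axis=0) / mass[inside].sum()

def spin_change(j_a, j_b, t_a, t_b):
    """Return |dj/dt| (Eq. 1) and t1, the time to swing the spin by one degree (Eq. 2)."""
    dt = t_b - t_a
    dj_dt = np.linalg.norm(j_b - j_a) / dt
    cos_ang = np.dot(j_a, j_b) / (np.linalg.norm(j_a) * np.linalg.norm(j_b))
    angle = np.arccos(np.clip(cos_ang, -1.0, 1.0))   # radians swept between the two snapshots
    t1 = (np.pi / 180.0) * dt / angle                # Eq. (2): dt rescaled to a 1-degree swing
    return dj_dt, t1
```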
The top panel of Figure 2 shows the PDF of the time derivative of speci\ufb01c angular momentum of the central 3kpc radius stellar regions for galaxies of stellar mass in the range 1011 \u22121012 M\u2299. The middle panel shows the same as in the top panel, except it is for the central 7kpc radius stellar regions. We see that the overall rate of change of angular momenta is signi\ufb01cantly higher at z = 1 \u22123 compared to that at z = 0.6 \u22121. The distribution of |d\u20d7 j\u2217/dt| has an extended tail at the high-end, due to major mergers; due to our \ufb01nite time sampling these rates are capped by the frequency of our snapshots. \f\u2013 4 \u2013 1 2 3 4 0 0.1 0.2 log|d\u20d7 j\u2217(3kpc )/dt| (kpc km/s/Gyr) PDF z=0.62\u22121 z=1\u22122 z=2\u22123 1 2 3 4 0 0.1 0.2 log|d\u20d7 jg as(3kpc )/dt| (kpc km/s/Gyr) PDF Ms = 1011 \u22121012M\u2299 1 2 3 4 0 0.1 0.2 log|d\u20d7 j\u2217(7kpc )/dt| (kpc km/s/Gyr) PDF Fig. 2.\u2014 Top panel: the probability distribution function (PDF) of the amplitude of the time derivative of speci\ufb01c angular momentum of the central 3kpc radius stellar regions (see Eq 1) for galaxies of total stellar mass in the range 1011 \u22121012 M\u2299in three di\ufb00erent redshift ranges, z = 0.62 \u22121 (black histograms), z = 1 \u22122 (red histograms), z = 2 \u22123 (green histograms), respectively. As an intuitive example, if a Milky Way-like galaxy of size 10kpc and rotation velocity of 200km/s changes its spin direction by 90 degress in one current Hubble time, it would correspond to a value log |dj/dt| equal to 2.3 in the x-axis. Middle panel: same as the top panel but for the central 7kpc radius stellar region. Bottom panel: same as the top panel but for gas in the central 3kpc radius region. Consistent with the expected decline of major merger rate below z \u223c1, the high |d\u20d7 j\u2217/dt| tail of the distribution at z = 0.6\u22121 is signi\ufb01cantly less pronounced. No major di\ufb00erence is seen between 3kpc and 7kpc cases, suggesting that angular momentum changes within the two radii are approximately in tandem and our analysis is robust using 3kpc. The choice of 3\u22127 proper kpc is appropriate by noting that a (spiral, elliptical) galaxy of stellar mass 1012 M\u2299 is observed to have a size of (10.8, 15.1)kpc (Shen et al. 2003) for low redshift galaxies. The size roughly scales with the root of the stellar mass and decreases with increasing redshift (e.g., Trujillo et al. 2006). \f\u2013 5 \u2013 \u22123 \u22122 \u22121 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 log CPDF mass and redshift dependence 1011\u221212, z=0.62\u22121 1011\u221212, z=1\u22122 1011\u221212, z=2\u22123 1010\u221211, z=0.62\u22121 1010\u221211, z=1\u22122 1010\u221211, z=2\u22123 \u22123 \u22122 \u22121 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 log CPDF all Ms=1010\u221211Msun environment dependence b=1\u221210,z=0.62\u22121 b=102\u2212103,z=0.62\u22121 b=1\u221210,z=1\u22122 b=102\u2212103,z=1\u22122 \u22123 \u22122 \u22121 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 log t1 (Gyr) log CPDF all Ms=1010\u221211Msun galaxy type dependence blue,z=0.62\u22121 blue,z=1\u22122 blue,z=2\u22123 red,z=0.62\u22121 red,z=1\u22122 red,z=2\u22123 Fig. 
3.\u2014 Top panel: shows the cumulative PDF (CPDF) of the time taken to change the direction of spin of the central 3kpc radius stellar region by 1 degree of arc, t1 (Eq 2) for galaxies of total stellar mass in the range 1010 \u22121011 M\u2299(blue curves) and 1011 \u22121012 M\u2299 (red curves) in three di\ufb00erent redshift ranges, z = 0.62 \u22121 (solid curves), z = 1 \u22122 (dotted curves), z = 2 \u22123 (dashed curves), respectively. Middle panel: shows the of t1 for galaxies of total stellar mass in the range 1010 \u22121011 M\u2299in low-density (\u03b40.5 = 1 \u221210); solid curves) and high-density environment (\u03b40.5 = 102 \u2212103); dotted curves), in two redshift rangess, z = 0.62 \u22121 (black curves) and z = 1 \u22122 (magenta curves), respectively. The environment overdensity \u03b40.5 is de\ufb01ned to be the overdensity of total matter in a sphere of radius 0.5h\u22121Mpc comoving. Bottom panel: shows the CPDF of t1 for blue (g \u2212r < 0.6; blue curves) and red (g \u2212r > 0.6, red curves) galaxies of total stellar mass in the range 1010 \u22121011 M\u2299in three di\ufb00erent redshift ranges, z = 0.62 \u22121 (solid curves), z = 1 \u22122 (dotted curves), z = 2 \u22123 (dashed curves), respectively. \f\u2013 6 \u2013 The bottom panel of Figure 2 shows the PDF for gas in the central 3kpc radius region. We see that the speci\ufb01c angular momenta of the gas within central 3kpc change at rates 5 \u221210 times higher than that of stars (top panel of Figure 2). There is no doubt that gas in\ufb02ows contribute signi\ufb01cantly to the change of the stellar angular momentum in two ways. First, signi\ufb01cant gas in\ufb02ows at inclined angles to the stellar mid-plane may torque the stars (and vice versa). Second, new gas that reaches there will form new stars that have di\ufb00erent angular momentum vector and cause the overall angular momentum to change in both direction and magnitude. At high redshift the orientation of the gas in\ufb02ows on large scales are not well correlated with that of the stars or gas that is already there. Since the amount of gas tend to be smaller than that of stars, it is easier to alter the angular momentum of the gas than that of the stars. In the absence of major mergers, we expect minor stellar mergers could also alter the angular momentum vector. Figure 3 shows the CPDF of the time to change the direction of spin of the central 3kpc radius stellar region by 1 degree of arc, for dependence on mass and redshift (top panel), environment (middle panel) and galaxy type (bottom panel). Consistent with Figure 2 we see that the frequency of spin direction change increases with redshift; the median t1 decreases by 60\u221280% from z = 0.62\u22121 to z = 2\u22123 with the higher mass group corresponding to the high end of the range of change. The median t1 decreases by 10\u221220% from Ms = 1011\u221212 M\u2299 to Ms = 1010\u221211 M\u2299with the dependence on mass somewhat stronger at low redshift than at high redshift. That less massive galaxies tend to experience more rapid changes of speci\ufb01c angular momenta is anecdotally apparent in Figure 3. We also \ufb01nd large mis-alignment between inner stellar (and gas) regions with outer halos (not presented here), in broad agreement with the conclusions of Hahn et al. (2010). 
A dependence on environment is seen in the middle panel, with the median t1 decreasing by a factor of 1.9−2.7 from δ_0.5 = 1−10 to δ_0.5 = 10^2−10^3; the environment dependence weakens at higher redshift. This finding that the spin direction of galaxies changes more frequently in dense environments can be attributed to enhanced local interactions there. In the bottom panel the dependence on galaxy type gives mixed trends. For blue (g−r < 0.60) galaxies the median t1 decreases steadily from z = 2−3 to z = 0.62−1 by a factor of ∼2.3, whereas for red (g−r > 0.60) galaxies the median t1 hardly changes from z = 2−3 to z = 0.62−1. The median t1 for red galaxies is comparable to that of blue galaxies at z = 2−3; at lower redshift the median t1 for red galaxies becomes progressively lower than that of blue galaxies, mainly due to the latter increasing with decreasing redshift. In Cen (2014) we show that the vast majority of red galaxies do not gain significant stellar mass in the red sequence. Thus we conclude that the rapid change of spin direction for red galaxies is due to torques by nearby galaxies, whereas blue galaxies are subject to all three local interactions: gas accretion, stellar accretion and torques. It is instructive to put the frequency of spin direction change into some perspective. For a point mass M at a distance d, the torque of M on a galaxy with quadrupole moment Q and angular momentum $\vec{J}$ is $\tau = |d\vec{J}/dt| = \frac{3}{4}\frac{GMQ}{d^3}\sin(2\theta)$ (e.g., Peebles 1969), where θ is the angle between the separation vector and the symmetry axis of the galaxy. Expressing τ in terms of the overdensity δ of the region centered on mass M, $\tau = \pi G\rho_0(1+z)^3\delta\,Q\sin(2\theta)$, where z is redshift and ρ_0 is the mean mass density at z = 0. We approximate spirals as flat axisymmetric uniform disks (a = b, with thickness much smaller than a), giving the quadrupole moment Q = 2ma^2/5, and assume full rotation support. This allows us to express the torquing time t_q, defined to be the time taken to change the spin direction by 1 degree of arc, as $t_q = \frac{\pi}{180}\frac{|\vec{j}_*|\,m}{\tau(z,m,T)}$ (3), giving t_q = 2.5 Gyr for δ = 200, z = 1 and sin(2θ) = 1 for spiral galaxies. Comparing to the median t1 ∼ 10^−3−10^−2 Gyr seen in Figure 3, it is evident that the rapid spin reorientation of galaxies cannot possibly be due to tidal torques by large-scale structure. It is noted that intrinsic alignment sourced by the primordial large-scale gravitational field is inconsistent with the frequent directional changes shown in Figure 1. Under the (unproven) assumption that the large-scale tidal field is the sole alignment agent, any alignment between galaxies on large scales would result from a balance between the fast reorientation rate due to local processes and slow coherent torques by large-scale structure, expressed as the ratio of t1 to t_q, denoted t1/(t1+t_q). If the quadrupole of the galaxy is, in this case, produced by local interactions, independent of the large-scale tidal field, the alignment in this simplified model would be linear [instead of quadratic, see Hirata & Seljak (2004)] in the large-scale gravitational tidal field.
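For orientation, the torquing time of Eq. (3) can be evaluated in a few lines. The disk quadrupole Q = 2ma^2/5 and the overall form of τ follow the text; the specific angular momentum of a uniform disk with a flat rotation curve, j = (2/3) a v_rot, and the illustrative input numbers are my assumptions, so the output should be read as an order-of-magnitude check rather than a reproduction of the quoted 2.5 Gyr.

```python
# Hedged sketch of Eq. (3); inputs are illustrative, not the paper's exact choices.
import numpy as np

G = 4.301e-6                # gravitational constant in kpc (km/s)^2 / Msun
RHO_CRIT0 = 277.5 * 0.7**2  # critical density today in Msun / kpc^3 (h = 0.7)
OMEGA_M = 0.28

def torquing_time_gyr(a_kpc, v_rot, z, delta, sin2theta=1.0):
    """Time (Gyr) for an overdensity delta to swing a flat-disk spin by 1 degree."""
    rho0 = OMEGA_M * RHO_CRIT0                 # mean matter density at z = 0
    q_over_m = 2.0 * a_kpc**2 / 5.0            # Q/m for a flat uniform disk (from the text)
    j = (2.0 / 3.0) * a_kpc * v_rot            # assumed: uniform disk, flat rotation curve
    tau_over_m = np.pi * G * rho0 * (1.0 + z)**3 * delta * q_over_m * sin2theta
    t_q = (np.pi / 180.0) * j / tau_over_m     # in kpc / (km/s)
    return t_q * 0.9778                        # 1 kpc/(km/s) is about 0.9778 Gyr

# e.g. torquing_time_gyr(10.0, 200.0, z=1.0, delta=200) comes out at the Gyr level,
# orders of magnitude longer than the median t1 of ~1e-3 to 1e-2 Gyr quoted above.
```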
We obtain \ufb01nally the expression for the mean value of t1/(t1 + tq) weighted by the distribution of t1 [P(t1), shown in Figure 3], denoted as \u03b7(z, m, T): \u03b7(z, m, T) \u2261 Z \u0012 t1 t1 + tq(z, m, T) \u0013 Pz,m,T(t1)dt1, (4) at redshift z for galaxies of mass m and type T (spiral or elliptical). We approximate elliptical galaxies as oblate axisymmetric spheroids with a = b = 2c and vrot/\u03c3 = 0.2, resulting in the quadrupole moment of Q = 3ma2/10. The sizes of galaxies are adopted from observations by Shen et al. (2003). The bias factor is from Tegmark et al. (2004) adjusted to \u03c38 = 0.8. The stellar mass to light ratio as a function of absolute magnitude is taken from from Kau\ufb00mann et al. (2003). We incorporate these into \u03c4 to get \u03c4(z, m, T) = \u03c0G\u03c10(1 + z)3D+(z)b(m)Q(m, T)[\u03b4 sin(2\u03b8)] (5) as well as into j in Equations (3) for di\ufb00erent galaxy types, where D+(z) is the linear density growth factor normalized to be unity at z = 0, b(m) is the bias factor of galaxies of mass m, \f\u2013 8 \u2013 10 11 12 \u22125 \u22124 \u22123 \u22122 logM\u2217(M\u2299) \u03b7(z, m, T ) ellipticals z=0.62\u22120.78 spirals z=0.62\u22120.78 ellipticals z=0.8\u22121.2 spirals z=0.8\u22121.2 ellipticals z=1.5\u22122.5 spirals z=1.5\u22122.5 Fig. 4.\u2014 shows \u03b7 (see Equation 4 for de\ufb01nition) as a function of galaxy mass and type at three di\ufb00erent redshift ranges. Spiral and elliptical galaxies are shown are shown in blue and red, respectively, for z = 0.62 \u22120.78 (solid dots), z = 0.8 \u22121.2 (open squares) and z = 1.5 \u22122.5 (stars). The model results are obtained using [\u03b4 sin(2\u03b8)] = 1 (see Equation 5). The errorbars in the x-axis indicate the mass bin sizes. The results using Equation (4), in conjunction with Equations (3, 5), are shown in Figure 4. As in the linear alignment model (e.g., Catelan et al. 2001), the di\ufb03culty is to de\ufb01ne a demarcating scale between local and linear large-scale structures. We tentatively have left the scalings to be relative, absorbed into [\u03b4 sin(2\u03b8)]. If compelled to give an estimate relevant to weak lensing, one might choose [\u03b4 sin(2\u03b8)] to be in the range 1\u221210. In this case, we get a tangential sheer \u03b3T that is 1\u221210 times \u03b7 in Figure 4, resulting in \u03b3T of \u2212(0.2\u22122)% for the most massive elliptical galaxies (i.e., luminous red galaxies, LRGs, red dots in Figure 4), which, coincidentally, falls in the range of observed GI (galaxy-gravitational tidal \ufb01eld) signal for LRGs (e.g., Mandelbaum et al. 2006; Hirata et al. 2007; Joachimi et al. 2011). The negative sign comes about, because the galaxies, under the torque of a central mass, have a tendency to align their disks in the radial direction that is dynamically stable. Three separate trends with respect to z, m and T are seen: the alignment (1) decreases with increasing redshift, (2) decreases with decreasing stellar mass, and (3) is larger for \f\u2013 9 \u2013 elliptical galaxies than for spiral galaxies. The \ufb01rst two trends are accounted for by trends of t1 seen in Figure 3. The last trend requires some discussion. The bottom panel of Figure 3 shows that ellipticals have shorter t1 than spirals, due to a large part to their residing in overdense environments and in addition to their having a lower overall speci\ufb01c angular momentum amplitude. 
However, also because the speci\ufb01c angular momentum of ellipticals is a factor of 5 lower than that of spiral galaxies, elliptical galaxies are easier to slew. It is argubly a relatively more straight-forward comparison to observations of (radial) alignments of satellite galaxies with respect to the central galaxies of groups and clusters. But this is in fact complicated by (at least) four issues. First, the observed detection and non-detection of radial alignment of galaxies around groups and clusters of galaxies concern radial ranges that are already mostly in the nonlinear regime (i.e., overdensity \u03b4 \u226b1). Second, most of the observed galaxy samples analyzed contain of order 100-10000 galaxies, hence statistical uncertainties are in the range of 1 \u221210%. Third, observed samples likely contain a large number of projected galaxies with physical separations that are much larger than their lateral distance from the cluster/group center; the degree of projection e\ufb00ects is strongly dependent on the orientation of the line of sight (e.g., viewing a cluster along a \ufb01lament) and signi\ufb01cantly complicates interpretation of results. Fourth, on some very small scales, binary interactions between a satellite and the central galaxy may play the dominant role. A combination of these factors may explain the current confused state with con\ufb02icting observational results (e.g., Bernstein & Norberg 2002; Pereira & Kuhn 2005; Agustsson & Brainerd 2006; Torlina et al. 2007; Faltenbacher et al. 2007; Hao et al. 2011). Nonetheless, we expect that the radial alignment, if exists, is expected to decrease with increasing redshift, perhaps already hinted by some observations (e.g., Hung & Ebeling 2012), and with decreasing cluster mass at a \ufb01xed radius. The simple model presented has two notable caveats. First, it assumes that the only alignment mechanism is gravitational torque by some large-scale structure. So far we have presented only the relative scalings among di\ufb00erent galaxies under this assumption, but not the absolute magnitude. We cannot justify this rather critical assumption with con\ufb01dence at this time. Second, one notes that a signi\ufb01cant portion of the galaxy spin direction reorientation is likely due to gas feeding and substructure merging. Thus, it is not unreasonable to expect that the gas feeding and substructure merging have some preferred directions, such as along the \ufb01laments and sheets. In this case, while galaxy spin direction changes frequenctly as shown here, it may do so with some degree of coherence over some scales (such as the scale of \ufb01laments), either temporaneously or through long-term memory of large-scale structure (e.g., Libeskind et al. 2012). If this were true, it then suggests that intrinsic alignments may be a result of balance between high-frequency random re-orientation at short time scales and some sort of large-scale \u201cmean\u201d feeding pattern on long time scales. There is some empirical evidence for galaxies to be aligned with large-scale structures in a sense that is consistent with this \u201cfeeding\u201d picture (e.g., Zhang et al. 2013; Li et al. 2013). It should be a priority \f\u2013 10 \u2013 to understand this issue systematically. 4." 
+ }, + { + "url": "http://arxiv.org/abs/1403.5265v1", + "title": "Temporal Self-Organization in Galaxy Formation", + "abstract": "We report on the discovery of a relation between the number of star formation\n(SF) peaks per unit time, $\\nu_{\\rm peak}$, and the size of the temporal\nsmoothing window function, $\\Delta t$, used to define the peaks: $\\nu_{\\rm\npeak}\\propto\\Delta t^{1-\\phi}$ ($\\phi\\sim 1.618$). This relation holds over the\nrange of $\\Delta t=10$ to $1000$Myr that can be reliably computed, using a\nlarge sample of galaxies obtained from a state-of-the-art cosmological\nhydrodynamic simulation. This means that the temporal distribution of SF peaks\nin galaxies as a population is fractal with a Hausdorff fractal dimension equal\nto $\\phi-1$. This finding reveals, for the first time, that the superficially\nchaotic process of galaxy formation is underlined by a temporal\nself-organization up to at least one gigayear. It is tempting to suggest that,\ngiven the known existence of spatial fractals (such as the power-law two-point\nfunction of galaxies), there is a joint spatio-temporal self-organization in\ngalaxy formation. From an observational perspective, it will be urgent to\ndevise diagnostics to probe SF histories of galaxies with good temporal\nresolution to facilitate a test of this prediction. If confirmed, it would\nprovide unambiguous evidence for a new picture of galaxy formation that is\ninteraction driven, cooperative and coherent in and between time and space.\nUnravelling its origin may hold the key to understanding galaxy formation.", + "authors": "Renyue Cen", + "published": "2014-03-20", + "updated": "2014-03-20", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "Introduction Galaxy formation involves a large set of physical processes cosmological expansion, gravity, hydrodynamics, atomic physics and feedback from star formation, stellar evolution and black hole growth and spans large dynamic ranges in time (at least 0.1Myr to 10Gyr) and space (at least 1pc to 100Mpc). Some of the most interesting results on galaxy formation are thus obtained using large-scale simulations, providing fundamental insights on a variety of di\ufb00erent aspects (e.g., Frenk et al. 1988; Cen et al. 1994; Gnedin 1998; Klypin et al. 1999; Moore et al. 1999; Cen & Ostriker 1999; Wechsler et al. 2002; Abel et al. 2002; Bromm et al. 2002; Springel et al. 2005; Kere\u02c7 s et al. 2005; Hopkins et al. 2006; Croton et al. 2006; Naab et al. 2006; Bournaud et al. 2007; Diemand et al. 2008; Dekel et al. 2009; Schaye et al. 2010). The spatial distributions of galaxies have been extensively studied observationally, primarily 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1403.5265v1 [astro-ph.GA] 20 Mar 2014 \f\u2013 2 \u2013 at low redshift. Among the most striking is the nature\u2019s ability to maintain a powerlaw galaxy-galay two-point correlation function over a signi\ufb01cant range (\u223c0.1\u221210h\u22121Mpc) (e.g., Groth & Peebles 1977), although there is evidence of a slight in\ufb02ection at \u223c1 \u22122h\u22121Mpc in recent analysis (e.g., Zehavi et al. 2004). This spatial regularity is not inherited from the linear power spectrum but must be a result of cooperation between nonlinear evolution and galaxy formation. In self-gravitating systems, such as galaxies, the temporal and spatial structures may be related. This may be seen by two examples. 
First, for an isolated (nondissipative) spherical system, the collapse time of each shell (assuming no shell crossings) is uniquely determined by the interior mass and speci\ufb01c energy of the shell that in turn is determined by the density structures. Second, during the growth of a typical galaxy, in addition to direct acquisition of stars via mergers and accretion (along with dark matter), signi\ufb01cant spatial interactions may induce signi\ufb01cant star formation activities hence leave temporal imprints in its star formation history. Taking these indications together suggests that one should bene\ufb01t by tackling the problem of galaxy formation combining the spatial and temporal information. Here, as a step in that direction, we perform a novel analysis, utilizing the ab initio LAOZI adaptive mesh re\ufb01nement cosmological hydrodynamic simulation, to understand the statistical properties of star formation episodes in galaxies. 2. Simulations The reader is referred to Cen (2014) for detailed descriptions of our simulations and the list of its empirical validations therein. Brie\ufb02y, a zoom-in region of comoving size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 is embedded in a 120h\u22121Mpc periodic box and resolved to better than 114h\u22121pc (physical). We use the following cosmological parameters that are consistent with the WMAP7-normalized (Komatsu et al. 2011) \u039bCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100h km s\u22121Mpc\u22121 = 70 km s\u22121Mpc\u22121 and n = 0.96. Equations governing motions of dark matter, gas and stars, and thermodynamic state of gas are followed forward in time from redshift 100 to 0.62, using the adaptive mesh re\ufb01nement cosmological hydrodynamic code Enzo (The Enzo Collaboration et al. 2013), which includes all important microphysics and major feedback processes that are well measured. Stellar particles (equivalent to coeval stellar cluster of mass \u223c105 M\u2299) are created from gas clouds meeting certain physical conditions over time, based on the empirical Kennicutt-Schmidt law (Kennicutt 1998). Stellar particles at any time may be grouped together spatially using the HOP algorithm (Eisenstein & Hut 1998) to create galaxy catalogs, which are tested to be robust and insensitive to speci\ufb01c choices of concerned parameters within reasonable ranges. For each galaxy we have its exact star formation history, given its member stellar particles formation times. A total of (2090, 965, 296, 94, 32, 10) galaxies are found with stellar masses greater than (109.5, 1010, 1010.5, 1011, 1011.5, 1012) M\u2299at z = 0.62. \f\u2013 3 \u2013 For each galaxy we create an uniform time grid of star formation rate at a time resolution of 3Myr from redshift 20 to 0.62, which we call the \u201cunsmoothed\u201d SF history, denoted as S(t). We then smooth S(t) using a square window of full width equal to ts to create a locally-averaged version, denoted as \u00af S(t), which is de\ufb01ned to be \u00af S(t) \u2261 1 ts R t+ts/2 t\u2212ts/2 S(t\u2032)dt\u2032. Another variable is then de\ufb01ned from \u00af S(t): \u03b4(t) \u2261S(t) \u2212\u00af S(t). We smooth \u03b4(t) with a gaussian window of radius tg to yield \u00af \u03b4(t). We obtain \ufb01nally Ss(t) \u2261\u00af S(t)+ \u00af \u03b4(t). We identify SF peaks in Ss(t) as follows. Each SF peak is de\ufb01ned as a contiguous region between two consecutive local minima in Ss(t), say, at time t1 and t2. 
We sum up S(t) in the same temporal region [t1, t2] to get the total stellar mass for the peak. For each galaxy, we catalog and rank order a complete list of peaks each containing the following information: the total stellar mass, the point in time of maximum SFR and the rank. The number of top SF peaks that make up 50% and 90% of total amount of stellar mass of a galaxy at z = 0.62 is denoted, n50 and n90, respectively. We note that the main purpose of smoothing \u03b4(t) with the gaussian window is to make the automated peak identi\ufb01cation method umambiguous. Thus, it is ts that serves as a time \u201cruler\u201d. We use tg = ts/2 and \ufb01nd the slope of the scaling relation found does not depend on ts/tg within the concerned accuracies. 0 1 2 3 4 5 6 7 8 0 1 2 3 log M*=11.12 log SFR (Msun/yr) original smoothed 0 1 2 3 4 5 6 7 8 0 1 2 3 log M*=10.73 0 1 2 3 4 5 6 7 8 0 1 2 log M*=10.52 0 1 2 3 4 5 6 7 8 0 1 2 log M*=10.31 t (Gyr) log SFR (Msun/yr) 3 4 5 0 1 2 log M*=10.31 t (Gyr) 4 4.1 4.2 4.3 4.4 4.5 0 1 2 log M*=10.31 t (Gyr) Fig. 1.\u2014 shows the star formation histories for four galaxies (the top row plus the bottomleft panel) selected semi-randomly covering mass range of interest at z = 0.62. The time starts at the big bang as zero. The red curves are for unsmoothed SF histories S(t). The blue curves are for the corresponding smoothed SF histories Ss(t), with ts = 200 Myr. In each panel, the galaxy stellar mass at z = 0.62 is indicated at the top. The bottom-middle and -right panels are zoom-in views of the same galaxy shown in the bottom-left panel. \f\u2013 4 \u2013 3. Results We start by showing the star formation histories for four galaxies in Figure 1. We see that our adaptive smoothing scheme appropriately retains major SF peaks but smooths out high-frequency peaks on scales smaller than the ruler size ts, exactly serving the purpose. We also see that there are temporal structures from \u223c1Myr to \u223c1Gyr. Although it is di\ufb03cult to quantify visually the nature of the temporal structures, there is a hint that a signi\ufb01cant SF peak is often sandwiched by periods of diminished SF activities or less signi\ufb01cant SF peaks. It is evident that the histories of individual galaxies vary substantially with respect to both the trend on long time scales and \ufb02uctuations on short time scales. Anectodal evidence that is consistent with the global evolution of SFR density (Hopkins & Beacom 2006) is that, for the galaxy population as a whole, the majority of galaxies are on a downward trend of SFR with increasing time (decreasing redshift) from t \u223c2 \u22123Gyr (corresponding to z = 2 to 3). It is seen that SF in galaxies is usually not monolithic. A typical galaxy is found to have a polylithic temporal structure of star formation, consisting of a series of quasi-monoliths occurring in time in an apparently chaotic fashion. Not only is there no evidence that a typical galaxy forms most of its stars in a single burst, but also the SF history over any scale does not display a form that may be represented by any simple analytic functions (such as an exponential). A qualitatively similar appearance of oscillatory star formation rates are seen 1 2 3 4 5 6 7 8 910 15 20 25 30 35 40 0 0.1 0.2 n50 (red) & n90 (blue) PDF n50 n90 median of n50 median of n90 Fig. 
2.\u2014 shows the probability distribution function (PDF) of the number of top SF peaks contributing to 50% (n50) and 90% (n90), respectively, of total stellar mass at z = 0.62 for all galaxies more massive than 1010 M\u2299. The vertical red and and blue dashed lines indicate the median of the respective historgrams. The peaks are identi\ufb01ed with ts = 200Myr. \f\u2013 5 \u2013 in Hopkins et al. (2013), although detailed quantitative comparisons are not available at this time. One take-away message is this: galaxy formation is a chaotic process and conclusions about the galaxy population as a whole based on an unrepresentative sample of galaxies should be taken cautiously. Another is that the often adopted simple temporal pro\ufb01les for star formation (such as exponential decay or delta function) in interpreting observational results should be reconsidered. We now turn to quantitative results. Figure 2 shows the PDFs of n50 and n90 with ts = 200Myr. We see that the number of peaking containing 50% of stellar mass (n50) falls in the range of \u223c1 \u221210 peaks, whereas the number of peaks containing 90% of stellar mass (n90) displays a much broader range of \u223c5\u221240. We note that, had we restricted the galaxy stellar mass range to 1010\u221211 or 1011\u221212 M\u2299, the results do not change signi\ufb01cantly. It is clear that there are large variations from galaxy to galaxy with respect to individual SF histories, as was already hinted in in Figure 1. Behind this chaos, however, collectively, an order is found, as will be shown in Figure 4. 1 2 3 0 1 2 log n50 & n90 log M*=11.12 90% q90=\u22120.585 50% q50=\u22120.49 1 2 3 0 1 2 log M*=10.73 q90=\u22120.515 q50=\u22120.39 1 2 3 0 1 2 log ts (Myr) log n50 & n90 log M*=10.52 q90=\u22120.63 q50=\u22120.55 1 2 3 0 1 2 log ts (Myr) log M*=10.31 q90=\u22120.535 q50=\u22120.55 Fig. 3.\u2014 shows n50 (red dots) and n90 (blue squares) as a function of temporal smoothing window ts for the four galaxies shown in Figure 1. Linear \ufb01ts to the log ts log n50 and log ts log n90 are shown as dashed lines with the respective colors. Figure 3 shows n50 (red dots) and n90 (blue squares) as a function of temporal smoothing window ts for the four galaxies shown in Figure 1. We see that powerlaw \ufb01ts n50 \u221dt\u03c650 s and n90 \u221dt\u03c690 s provide reasonable approximations. Collecting all galaxies with stellar masses greater than 1010 M\u2299at z = 0.62 the results are shown in Figure 4. The top panel of Figure 4 shows the PDF of \u03c650 (red histogram) and \u03c690 (blue historgram). We see that there \f\u2013 6 \u2013 are substantial variations among galaxies, which is expected. The most signi\ufb01cant point is that a typical galaxy has \u03c650 and \u03c690 around \u22120.6. In other words, the galaxy population, collectively taken as a whole, displays signi\ufb01cant orderliness. This point is re-enforced in the bottom panel of Figure 4, which is similar to Figure 3. But here, instead of showing powerlaw \ufb01ts for individual galaxies, we compute the median of n50 (red dots) and n90 (blue squares) for all galaxies \ufb01rst as a function of ts and then show the \ufb01ts to the medians. It is intriguing that a slope about \u22120.618 (= 1 \u2212\u03c6) provides a quite good \ufb01t, where \u03c6 = 1.618 is often called the golden ratio. 
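The bookkeeping behind Figures 2-4 can be summarized in a short sketch: smooth S(t) with a square window of full width ts, add back the Gaussian-smoothed residual to form Ss(t), cut Ss(t) at consecutive local minima to define peaks, count the top peaks holding 50% and 90% of the stellar mass, and fit the slope of log n50 and log n90 against log ts. Interpreting the Gaussian "radius" tg as the filter sigma, the specific scipy/numpy calls, and the sample ts grid are assumptions of this sketch, not a description of the authors' code.

```python
# Minimal sketch, assuming S(t) is tabulated on a uniform 3 Myr grid in Msun/yr.
import numpy as np
from scipy.ndimage import uniform_filter1d, gaussian_filter1d

def peak_counts(S, dt_myr=3.0, ts_myr=200.0):
    """Return n50 and n90 for one galaxy at smoothing window ts (Myr)."""
    w = max(int(round(ts_myr / dt_myr)), 1)
    S_bar = uniform_filter1d(S, size=w)                        # square window of full width ts
    Ss = S_bar + gaussian_filter1d(S - S_bar, sigma=0.5 * w)   # tg = ts/2 (sigma assumed)

    # peaks are bounded by consecutive local minima of the smoothed history
    mins = np.where((Ss[1:-1] <= Ss[:-2]) & (Ss[1:-1] < Ss[2:]))[0] + 1
    edges = np.concatenate(([0], mins, [len(Ss)]))

    # stellar mass of each peak from the unsmoothed history, rank-ordered
    m = np.array([S[a:b].sum() * dt_myr * 1.0e6 for a, b in zip(edges[:-1], edges[1:])])
    m = np.sort(m[m > 0])[::-1]
    cum = np.cumsum(m) / m.sum()
    n50 = int(np.searchsorted(cum, 0.50)) + 1                  # top peaks holding 50% of the mass
    n90 = int(np.searchsorted(cum, 0.90)) + 1                  # ... and 90%
    return n50, n90

def slope_fit(S, ts_grid_myr=(10, 30, 100, 300, 1000)):
    """Least-squares slopes of log n50 and log n90 against log ts (cf. phi50, phi90)."""
    counts = np.array([peak_counts(S, ts_myr=t) for t in ts_grid_myr], dtype=float)
    x = np.log10(ts_grid_myr)
    phi50 = np.polyfit(x, np.log10(counts[:, 0]), 1)[0]
    phi90 = np.polyfit(x, np.log10(counts[:, 1]), 1)[0]
    return phi50, phi90
```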
\u22121 \u22120.9 \u22120.8 \u22120.7 \u22120.6 \u22120.5 \u22120.4 \u22120.3 \u22120.2 \u22120.1 0 0 0.05 0.1 q50 (red) & q90 (blue) PDF q50 q90 median(q50)=\u22120.612 median(q90)=\u22120.574 1 2 3 0 1 2 log ts (Myr) log median n50 & n90 median n50 median n90 n50=1.38(ts/1Gyr)\u22120.618 n90=5.83(ts/1Gyr)\u22120.618 Fig. 4.\u2014 Top panel shows the PDF of \u03c650 (red histogram) and \u03c690 (blue historgram) in the \ufb01t n50 \u221dt\u03c650 s and n90 \u221dt\u03c690 s for all galaxies with stellar masses greater than 1010 M\u2299at z = 0.62. The vertical red and and blue dashed lines indicate the median of the red and blue historgrams, respectively. Bottom panel shows the median of n50 (red dots) and n90 (blue squares), respectively, for all galaxies with stellar masses greater than 1010 M\u2299at z = 0.62, as a function of temporal smoothing window ts. The vertical errorbars indicate the 25%-75% range. The red and and blue dashed lines indicate \ufb01ts with a slope \u22120.618. 4. Discussion and" + }, + { + "url": "http://arxiv.org/abs/1311.5916v2", + "title": "On the Origin of the Hubble Sequence: I. Insights on Galaxy Color Migration from Cosmological Simulations", + "abstract": "An analysis of more than 3000 galaxies resolved at better than 114 pc/h at\nz=0.62 in a LAOZI cosmological adaptive mesh refinement hydrodynamic simulation\nis performed and insights gained on star formation quenching and color\nmigration. The vast majority of red galaxies are found to be within three\nvirial radii of a larger galaxy, at the onset of quenching when the specific\nstar formation rate experiences the sharpest decline to fall below\n~10^{-2}-10^{-1}/Gyr (depending on the redshift). We shall thus call this\nmechanism \"environment quenching\", which encompasses satellite quenching. Two\nphysical processes are largely responsible: ram-pressure stripping first\ndisconnects the galaxy from the cold gas supply on large scales, followed by a\nlonger period of cold gas starvation taking place in high velocity dispersion\nenvironment, during the early part of which the existing dense cold gas in the\ncentral region (<10kpc) is consumed by in situ star formation. Quenching is\nfound to be more efficient, but not faster, on average, in denser environment.\nThroughout this quenching period and the ensuing one in the red sequence\ngalaxies follow nearly vertical tracks in the color-stellar-mass diagram. In\ncontrast, individual galaxies of all masses grow most of their stellar masses\nin the blue cloud, prior to the onset of quenching, and progressively more\nmassive blue galaxies with already relatively older mean stellar ages continue\nto enter the red sequence. Consequently, correlations among observables of red\ngalaxies - such as the age-mass relation - are largely inherited from their\nblue progenitors at the onset of quenching. While the color makeup of the\nentire galaxy population strongly depends on environment, which is a direct\nresult of environment quenching, physical properties of blue galaxies as a\nsub-population show little dependence on environment.", + "authors": "Renyue Cen", + "published": "2013-11-22", + "updated": "2014-01-15", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction The bimodal distribution of galaxy colors at low redshift is well established (e.g., Strateva et al. 2001; Blanton et al. 2003a; Kau\ufb00mann et al. 2003; Baldry et al. 2004). 
The \u201cblue cloud\u201d, 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1311.5916v2 [astro-ph.CO] 15 Jan 2014 \f\u2013 2 \u2013 sometimes referred to as the \u201cstar formation sequence\u201d (Salim et al. 2007), is occupied by starforming galaxies, while the \u201cred sequence\u201d galaxies appear to have little ongoing star formation (SF). It has been argued that this bimodality suggests that SF of the blue cloud galaxies en route to the red sequence must be turned o\ufb00promptly to prevent them from lingering in the green valley between the blue and red peaks. A number of physical mechanisms have been proposed to cause this apparent \u201cquenching\u201d of SF. Galaxy mergers have been suggested to trigger strong and rapid SF that subsequently drives gas away and shuts down further SF activities in a sudden fashion. However, recent observations show that galaxies in the green valley do not show merger signatures, perhaps disfavoring the merger scenario (e.g., Mendez et al. 2011). Feedback from active galactic nuclei (AGN) has also been suggested to provide quenching, but observational evidence for this scenario has been either inconclusive or at best circumstantial (e.g., Bundy et al. 2008; Santini et al. 2012; Bongiorno et al. 2012; Rosario et al. 2013). Some recent studies based on large data sets do not \ufb01nd evidence for AGN feedback playing a role in galaxy color migration (e.g., Zheng et al. 2007; Xue et al. 2010; Aird et al. 2012; Harrison et al. 2012; Swinbank et al. 2012; Mendel et al. 2013). External, environmental e\ufb00ects may have played an important role in shaping galaxy colors. High density environments are observed to be occupied primarily by early type (elliptical and S0) red galaxies the \u201cdensity-morphology relation\u201d (e.g., Oemler 1974; Dressler 1980; Postman & Geller 1984) with giant elliptical galaxies anchoring the centers of rich clusters of galaxies (e.g., Kormendy et al. 2009). This relation is consistent with the larger trend of the galaxy population appearing bluer in more underdense regions in the local universe (e.g., Goto et al. 2003; G\u00b4 omez et al. 2003; Tanaka et al. 2004; Rojas et al. 2004). Di\ufb00erent types of galaxies are seen to cluster di\ufb00erently and have di\ufb00erent environment-dependencies, in the same sense as the density-morphology relation (e.g., Davis & Geller 1976; Hogg et al. 2003; Balogh et al. 2004; Kau\ufb00mann et al. 2004; Park et al. 2007; Coil et al. 2008; Zehavi et al. 2011). Recent quantitative studies have yielded richer details on SF dependence on halo mass and environment, probing their relationships at higher redshifts. For example, using a large group catalog from the Sloan Digital Sky Survey (SDSS) Data Release 2, Weinmann et al. (2006) \ufb01nd that at \ufb01xed luminosity the fraction of early-type galaxies increases with increasing halo mass and this mass dependence is smooth and persists over the entire mass range probed without any break or feature at any mass-scale. From a spectral analysis of galaxies at z = 0.4\u22120.8 based on the ESO Distant Cluster Survey, Poggianti et al. (2009) \ufb01nd that the incidence of K+A galaxies increases strongly with increasing velocity dispersion of the environment from groups to clusters. McGee et al. 
(2011), examining the SF properties of group and \ufb01eld galaxies from SDSS at z \u223c0.08 and from ultraviolet imaging with GALEX at z \u223c0.4, \ufb01nd that the fraction of passive galaxies is higher in groups than the \ufb01eld at both redshifts, with the di\ufb00erence between the group and \ufb01eld growing with time and larger at low masses. With the NOAO Extremely Wide-Field Infrared Imager (NEWFIRM) Survey of the All-wavelength Extended Groth strip International Survey (AEGIS) and Cosmic Evolution Survey (COSMOS) \ufb01elds, Whitaker et al. (2011) show evidence for a bimodal color distribution between quiescent and star-forming galaxies that persists to z \u223c3. Presotto et al. (2012) study the evolution of galaxies located within groups using the group catalog obtained from zCOSMOS spectroscopic data and the complementary photometric data from the COSMOS \f\u2013 3 \u2013 survey at z = 0.2 \u22120.8 and \ufb01nd the rate of SF quenching to be faster in groups than in the \ufb01eld. Muzzin et al. (2012) analyze galaxy properties at z = 0.85 \u22121.20 using a spectroscopic sample of 797 cluster and \ufb01eld galaxies drawn from the Gemini Cluster Astrophysics Spectroscopic Survey, \ufb01nding that post starburst galaxies with M \u2217= 109.3\u221210.7 M\u2299are three times more common in highdensity regions compared to low-density regions. Based on data from the zCOSMOS survey Tanaka et al. (2012) perform an environment study and \ufb01nd that quiescent galaxies prefer more massive systems at z = 0.5 \u22121. Rasmussen et al. (2012), analyzing GALEX imaging of a statistically representative sample of 23 galaxy groups at z \u223c0.06, suggest an average quenching timescale of \u22652Gyr. Mok et al. (2013), with deep GMOS-S spectroscopy for 11 galaxy groups at z = 0.8 \u22121, show that the strongest environmental dependence is observed in the fraction of passive galaxies, which make up only \u223c20 per cent of the \ufb01eld in the mass range Mstar = 1010.3\u22121011.0 M\u2299but are the dominant component of groups. Using SDSS (z \u223c0.1) and the All-Wavelength Extended Groth Strip International Survey (AEGIS; z \u223c1) data, Woo et al. (2013) \ufb01nd a strong environmental dependence of quenching in terms of halo mass and distance to the centrals at both redshifts. The widespread observational evidence of environment quenching is unsurprising theoretically. In regions of overdensity, whether around a large collapsed halo or unvirialized structure (e.g., a Zel\u2019dovich pancake or a \ufb01lament), gas is gravitationally shock heated when converging \ufb02ows meet. In regions \ufb01lled with hot shock-heated gas, multiple gasdynamical processes would occur. One of the most important gasdynamical processes is ram-pressure stripping of gas, when a galaxy moves through the ambient hot gas at a signi\ufb01cant speed, which includes, but is not limited to, the infall velocity of a satellite galaxy. The theoretical basis for the ram-pressure stripping process is laid down in the seminal work of Gunn & Gott (1972). Recent works with detailed simulations of this e\ufb00ect on galaxies (in non-cosmological settings) include those of Mori & Burkert (2000), Quilis et al. (2000), Kronberger et al. (2008), Bekki (2009) and Tonnesen & Bryan (2009). 
Even in the absence of ram-pressure stripping, ubiquitous supersonic and transonic motions of galaxies with complex acceleration patterns through the ambient medium (intergalactic or circumgalactic medium) subject them to the Rayleigh-Taylor and Richtmyer-Meshkov instabilities. Large shear velocities at the interfaces between galaxies and the ambient medium allow the Kelvin-Helmholtz (KH) instability to play an important role. When these processes work in tandem with ram-pressure displacements, the disruptive effects are amplified. For example, the KH instability time scale is substantially shorter for a non-self-gravitating gas cloud (e.g., Murray et al. 1993) than for one sitting inside a virialized dark matter halo (e.g., Cen & Riquelme 2008). Another important process in hot environments is starvation of the cold gas that is fuel for SF (e.g., Larson et al. 1980; Balogh et al. 2000; Dekel & Birnboim 2006). In regions with high temperature and high entropy, cooling of hot gas is an inefficient process for fueling SF, an important point noted long ago to account for the basic properties (mass, size) of galaxies (e.g., Binney 1977; Rees & Ostriker 1977; Silk 1977). This phenomenon may be understood by considering the dependence of cooling time on the entropy of the gas: the gas cooling time can be written as $t_{\rm cool}(T,S) = S^{3/2}\left[\frac{3}{2}\left(\frac{\mu_e}{\mu}\right)^2 \frac{k_B}{T^{1/2}\Lambda(T)}\right]$ (Scannapieco & Oh 2004)^1. It follows that the minimum cooling time of a gas parcel just scales with $S^{3/2}$. As a numerical example, for a gas parcel of entropy S = 10^9 K cm^2 (say, for temperature 10^7 K and density 10^-3 cm^-3) and metallicity 0.1 Z⊙, its cooling time is no shorter than the Hubble time at z = 1; hence the gas can no longer cool efficiently to fuel SF. It may be that the combination of cold gas removal and dispersal by ram-pressure stripping, hydrodynamic instabilities, and cold gas starvation, all of which are expected to become increasingly important in more massive environments, plays a primary role in driving the color migration from the blue cloud to the red sequence. In dense environments, gravitational tidal (stripping and shock) effects and relatively close fly-bys between galaxies (e.g., Moore et al. 1996) also become important. To understand the overall effect on SF quenching by these external processes in the context of the standard cold dark matter model, a realistic cosmological setting is imperative, in order to capture complex external processes that are likely intertwined with large variations in the internal properties of galaxies. In this paper we perform ab initio Large-scale Adaptive-mesh-refinement Omniscient Zoom-In cosmological hydrodynamic simulations, called LAOZI Simulations, to obtain a large sample of galaxies and, for the first time, perform a chronological and statistical investigation on a very large scale. The large simulated galaxy sample size and very high resolution of the LAOZI simulations provide an unprecedented opportunity to undertake the study presented. Our study shares the spirit of the work by Feldmann et al. (2011), who examine the evolution of a dozen galaxies falling onto a forming group of galaxies, with a substantial improvement in the statistical treatment, the simulation resolution, the range of environment probed, and the analysis scope.
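As a minimal numerical check of the starvation argument above, the sketch below evaluates the quoted cooling-time expression for the example gas parcel. The cooling-function value (Lambda ~ 1e-23 erg cm^3 s^-1 at T ~ 1e7 K and 0.1 Z⊙) and the ~5.9 Gyr age of the universe at z = 1 used for comparison are assumed round numbers for illustration, not values taken from this paper.

# Check of the entropy-cooling argument: t_cool = S^(3/2) *
# (3/2)*(mu_e/mu)^2 * k_B / (T^(1/2) * Lambda(T)), with S = T / n^(2/3).
k_B = 1.381e-16          # erg/K
mu, mu_e = 0.62, 1.18    # mean molecular weights quoted in the text
GYR = 3.156e16           # s

def t_cool_gyr(T, n, Lambda=1e-23):   # Lambda: assumed illustrative value
    S = T / n ** (2.0 / 3.0)          # entropy in K cm^2
    bracket = 1.5 * (mu_e / mu) ** 2 * k_B / (T ** 0.5 * Lambda)
    return S ** 1.5 * bracket / GYR

# Gas parcel with S = 1e9 K cm^2 (T = 1e7 K, n = 1e-3 cm^-3):
print(t_cool_gyr(1e7, 1e-3))   # ~2e1 Gyr, far longer than ~5.9 Gyr at z = 1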
Feedback from AGN is not included in this simulation, partly because of its large uncertainties and present lack of de\ufb01nitive driving sources and primarily due to our intention to focus on external e\ufb00ects. Internal e\ufb00ects due to SF are automatically included and we \ufb01nd no evidence that SF or merger triggered SF plays a primary role in quenching from our study. The outline of this paper is as follows. In \u00a72 we detail our simulations (\u00a72.1), method of making galaxy catalogs (\u00a72.2), construction of histories of galaxies (\u00a72.3), tests and validation of simulations (\u00a72.4) and the produced bimodal distribution of galaxies colors (\u00a72.5). Results are presented in \u00a73, organized in an approximately chronological order, starting with the ram-pressure stripping e\ufb00ects in \u00a73.1, followed by the ensuing period of gas starvation in hot environment in \u00a73.2. In \u00a73.3 we discuss stellar mass growth, evolution of stellar mass function of red galaxies and present galaxy color migration tracks. \u00a73.4 gives an example of consequences of the found color migration picture galaxy age-mass relation. We present observable environmental dependence of galaxy makeup at z = 0.62 in \u00a73.5. Conclusions are given in \u00a74. 1 where kB is Boltzmann\u2019s constant, T temperature and \u039b cooling function, \u00b5 = 0.62 and \u00b5e = 1.18 for ionized gas that we are concerned with, S is the gas entropy de\ufb01ned as S \u2261 T n2/3 in units of K cm2 (n is gas number density). If one conservatively adopts the lowest value of the term inside the bracket at the cooling peak at temperature Tmin \u223c105.3K, it follows that the minimum cooling time of a gas parcel just scales with S3/2. \f\u2013 5 \u2013 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the AMR Eulerian hydro code, Enzo (Bryan & Norman 1999; Joung et al. 2009). First we run a low resolution simulation with a periodic box of 120 h\u22121Mpc (comoving) on a side. We identify a region centered on a cluster of mass of \u223c 3 \u00d7 1014 M\u2299at z = 0. We then resimulate with high resolution of the chosen region embedded in the outer 120h\u22121Mpc box to properly take into account the large-scale tidal \ufb01eld and appropriate boundary conditions at the surface of a re\ufb01ned region. The re\ufb01ned region has a comoving size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 and represents a +1.8\u03c3 matter density \ufb02uctuation on that volume. The dark matter particle mass in the re\ufb01ned region is 1.3 \u00d7 107h\u22121 M\u2299. The re\ufb01ned region is surrounded by three layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle mass 84 times that in the re\ufb01ned region. We choose the mesh re\ufb01nement criterion such that the resolution is always smaller than 111h\u22121pc (physical), corresponding to a maximum mesh re\ufb01nement level of 13 at z = 0. An identical comparison run that has four times better resolution of 29pc/h was also run down to z = 3 and some relevant comparisons between the two simulations are made to understand e\ufb00ects of limited resolution on our results. The simulations include a metagalactic UV background (Haardt & Madau 1996), and a model for self-shielding of UV radiation (Cen et al. 2005). 
They include metallicity-dependent radiative cooling (Cen et al. 1995). Our simulations also solve relevant gas chemistry chains for molecular hydrogen formation (Abel et al. 1997), molecular formation on dust grains (Joung et al. 2009), and metal cooling extended down to 10 K (Dalgarno & McCray 1972). Star particles are created in cells that satisfy a set of criteria for SF proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c106 M\u2299. Supernova feedback from SF is modeled following Cen et al. (2005). Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the speci\ufb01c volume of each cell, which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). The primary advantages of this supernova energy based feedback mechanism are three-fold. First, nature does drive winds in this way and energy input is realistic. Second, it has only one free parameter eSN, namely, the fraction of the rest mass energy of stars formed that is deposited as thermal energy on the cell scale at the location of supernovae. Third, the processes are treated physically, obeying their respective conservation laws (where they apply), allowing transport of metals, mass, energy and momentum to be treated selfconsistently and taking into account relevant heating/cooling processes at all times. We allow the entire feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating. The total amount of explosion kinetic energy from Type II supernovae with a Chabrier initial mass function (IMF) is 6.6 \u00d7 10\u22126M\u2217c2 (where c is the speed of light), for an amount M\u2217of star formed. Taking into account the contribution of prompt Type I supernovae, we use eSN = 1\u00d710\u22125 in our simulations. Observations of local starburst galaxies indicate that nearly all of the SF produced kinetic energy is used to power galactic superwinds (e.g., \f\u2013 6 \u2013 Heckman 2001). Supernova feedback is important primarily for regulating SF and for transporting energy and metals into the intergalactic medium. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported in a physically sound (albeit still approximate at the current resolution) way. We use the following cosmological parameters that are consistent with the WMAP7-normalized (Komatsu et al. 2010) \u039bCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100h km s\u22121Mpc\u22121 = 70 km s\u22121Mpc\u22121 and n = 0.96. These parameters are consistent with those from Planck \ufb01rst-year data (Planck Collaboration et al. 2013) if we average Planck derived H0 with SN Ia and HST based H0. We note that the size of the re\ufb01ned region, 21 \u00d7 24 \u00d7 20h\u22123Mpc3, is still relatively small and the region biased. This is, of course, designed on purpose. Because of that, however, we are not able to cover all possible environment, such as the center of a void. 
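A quick arithmetic check of the feedback normalization described above, eSN = 1e-5 of the rest-mass energy of stars formed, is sketched below; the conversion to an equivalent number of 1e51 erg supernovae is a standard illustrative assumption, not a figure quoted in the paper.

# Thermal feedback energy deposited per star particle, given eSN = 1e-5.
c = 2.998e10               # cm/s
MSUN = 1.989e33            # g
e_SN = 1e-5                # adopted in the simulations (Type II + prompt Ia)

def feedback_energy_erg(mstar_msun):
    # Energy deposited on the cell scale per mstar_msun of stars formed.
    return e_SN * mstar_msun * MSUN * c ** 2

E = feedback_energy_erg(1e6)   # a typical ~1e6 Msun star particle
print(E)                       # ~1.8e55 erg
print(E / 1e51)                # ~1.8e4 supernovae of 1e51 erg each,
                               # i.e. roughly one per ~55 Msun formed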
Also because of that, we have avoided addressing any measures that requires a precise characterization of the abundance of any large galaxy systems, such as the mass function or luminosity function of massive galaxies or groups. Despite that, measures that are characterized as a function of environment/system masses should still be valid. Our environment coverage is substantially larger than probed in, for example, Feldmann et al. (2011). In Tonnesen & Cen (2012) we show that the present simulation box (C box) (run to z = 0 with a lower resolution previously) spans a wide range in environment from rich clusters to the \ufb01eld, and there is a substantial overlap in the \ufb01eld environment with another simulation centered on a void (V box). It is the density peaks higher than we model here (i.e., more massive clusters of galaxies) that we fail to probe. As it should be clear later, this shortcoming should not a\ufb00ect any of our conclusions, which may be appropriately extrapolated. 2.2. Simulated Galaxy Catalogs We identify galaxies in our high resolution simulations using the HOP algorithm (Eisenstein & Hu 1999) operating on the stellar particles, which is tested to be robust and insensitive to speci\ufb01c choices of concerned parameters within reasonable ranges. Satellites within a galaxy down to mass of \u223c109 M\u2299are clearly identi\ufb01ed separately in most cases. The luminosity of each stellar particle in each of the Sloan Digital Sky Survey (SDSS) \ufb01ve bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. Collecting luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, SFR, luminosities in \ufb01ve SDSS bands (and various colors) and others. At a spatial resolution of 159pc (physical) with thousands of well resolved galaxies at z \u223c0.6\u22126, the simulated galaxy catalogs present an excellent (by far, the best available) tool to study galaxy formation and evolution. \f\u2013 7 \u2013 2.3. Construction of Histories of Simulated Galaxies When we start the analysis for this paper, the simulation has reached z = 0.62. For each galaxy at z = 0.62 a genealogical line is constructed from z = 0.62 to z = 6 by connecting galaxy catalogs at a series of redshifts. Galaxy catalogs are constructed from z = 0.62 to z = 1.40 at a redshift increment of \u2206z = 0.02 and from z = 1.40 to z = 6 at a redshift increment of \u2206z = 0.05. The parent of each galaxy is identi\ufb01ed with the one at the next higher redshift catalog that has the most overlap in stellar mass. We call galaxies with g \u2212r < 0.55 \u201cblue\u201d, those with g \u2212r = 0.55 \u22120.65 \u201cgreen\u201d and those with g \u2212r > 0.65 \u201cred\u201d, in accord with the bimodal color distribution that we will show below and with that of observed galaxies (e.g., Blanton et al. 2003b), where g and r are magnitudes of SDSS g and r bands. In subsequent analysis, we will examine gasdynamic processes, e.g., cold gas loss or lack of cold gas accretion, under the working hypothesis that ram-pressure stripping and gas starvation are the primary detrimental processes to star formation. 
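A minimal sketch of the bookkeeping described above, the g-r color classes and the linking of each galaxy to its parent at the next higher redshift, is given below; the data structures and the particle-ID-overlap proxy for "most overlap in stellar mass" are hypothetical placeholders, not the actual catalog format or matching code used here.

# Color classification (g-r < 0.55 blue, 0.55-0.65 green, > 0.65 red) and
# parent identification by maximum stellar-mass overlap, approximated here
# by the largest shared set of star-particle IDs (an assumption).
def color_class(g_minus_r):
    if g_minus_r < 0.55:
        return "blue"
    elif g_minus_r <= 0.65:
        return "green"
    return "red"

def find_parent(star_ids, catalog_higher_z):
    # catalog_higher_z: {galaxy_id: set of star-particle IDs} (hypothetical)
    best, best_overlap = None, 0
    for gal_id, ids in catalog_higher_z.items():
        overlap = len(star_ids & ids)
        if overlap > best_overlap:
            best, best_overlap = gal_id, overlap
    return best

print(color_class(0.42), color_class(0.60), color_class(0.72))  # blue green red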
We should assume that other processes, such as hydrodynamical instabilities (e.g., RT, KH, tidal shocks, etc), may either be \u201clumped together\u201d with ram-pressure stripping or play some role to enhance cold gas destruction that is initiated by ram-pressure stripping. A word on tidal stripping may be instructive. It is noted that while tidal stripping would a\ufb00ect both stars and gas, ram-pressure operates only on the latter. As one will see later, in some cases the stellar masses of galaxies decrease with time, which are likely due to tidal e\ufb00ects. A simple argument suggests that ram-pressure e\ufb00ects are likely to be more far-reaching spatially and are more consistent with the environment e\ufb00ects becoming e\ufb00ective at 2-3 virial radii than tidal stripping that we will show later. Let us take a speci\ufb01c example to illustrate this. Let us assume that the primary and infalling galaxies have a velocity dispersion of \u03c31 and \u03c32, respectively, and that they both have isothermal sphere density pro\ufb01les for both dark matter and baryons. The virial radius is proportional to its velocity dispersion in each case. Under such a con\ufb01guration, we \ufb01nd that the tidal radius for the satellite galaxy at its virial radius is equal to the virial radius of the primary galaxy. On the other hand, the ram-pressure force on the gas in the satellite at its virial radius is already equal to the gravitational restoring by the satellite, when the satellite is (\u03c31/\u03c32) virial radii away from the primary galaxy. In reality, of course, the density pro\ufb01les for dark matter and baryons are di\ufb00erent and neither is isothermal, and the gas may display a varying degree of non-sphericity. But the relative importance of ram-pressure and tidal strippings is likely to remain the same for relatively di\ufb00use gas. The relative situation is unchanged, if one allows the gas to cool and condense. As an example, if the gas within the virial radius of the satellite and the primary galaxies in the above example is allowed to shrink spherically by a factor of 10 in radius (we will continue to assume that the velocity dispersion or rotation velocity remains \ufb02at and at the same amplitude), we \ufb01nd that the tidal stripping radius is now a factor of 10 smaller than before (equal to 0.1 times the virial radius of the primary galaxy), while the new ram-pressure stripping radius is \u03c31/\u03c32 times the new tidal stripping radius. As a third example, if the gas within the virial radius of the satellite galaxy in the above example is allowed to shrink spherically by a factor of 10 in radius but the gas in the primary galaxy does not shrink in size, it can be shown that in this case the tidal stripping radius is equal to 0.1 times the virial radius of the primary \f\u2013 8 \u2013 galaxy, while the new ram-pressure stripping radius is now 0.1 times (sigma1/sigma2) times the virial radius of the primary galaxy. As the last example, if the gas within the virial radius of the satellite galaxy in the above example is allowed to shrink by a factor of 10 in radius to become a disk but the gas in the primary galaxy does not shrink in size, it can be shown that in this case the tidal stripping radius is equal to 0.1 times the virial radius of the primary galaxy. 
The new ram-pressure stripping radius depends on the orientation of the motion vector and the normal of the disk: if the motion vector is normal to the disk, the tidal stripping radius is 0.1 times \u03c31/\u03c32 times the virial radius of the primary galaxy; if the motion vector is in the plane to the disk, the tidal stripping radius is zero. 3 4 5 6 7 8 \u22123 \u22122 \u22121 0 1 2 3 4 log variable quantities 11.5 3 4 5 6 7 8 \u22123 \u22122 \u22121 0 1 2 3 4 11.2 log ram pressure\u22122 log sSFR (Gyr\u22121) log halo mass\u221210 3 4 5 6 7 8 \u22123 \u22122 \u22121 0 1 2 3 4 t (Gyr) log variable quantities 11.2 log SFR (Msun/yr) log cold gas (<10kpc) \u2212 10 log cold gas (<100kpc) \u2212 10 3 4 5 6 7 8 \u22123 \u22122 \u22121 0 1 2 3 4 t (Gyr) 11.1 Fig. 1.\u2014 four panels show the histories of six variables for four randomly selected red galaxies with stellar mass of \u223c1011 M\u2299at z = 0.62: log SFR (in M\u2299/yr) (solid dots), log ram pressure (in Kelvin cm\u22123) 2 (stars), log cold gas within 10kpc (in M\u2299) -10 (open circles), log cold gas within 100kpc (in M\u2299) -10 (open squares), log sSFR (in Gyr\u22121) (down-pointing triangles) and log halo mass (in M\u2299) -10 (solid diamonds); The color of symbols at any given time corresponds the color of the galaxy at that time. The logarithm of the stellar mass at z = 0.62 is indicated in the upper-right corner in each panel. The vertical dashed line in each panel indicates the location of tq. We denote a point in time when the galaxy turns from blue to green as tg, a point in time when the galaxy turns from green to red as tr. Convention for time is that the Big Bang occurs at t = 0. We identify a point in time, searched over the range tg\u22122Gyr to tg+1Gyr, when the derivative of SFR with respect to time, dSFR/dt, is most negative, as tq (q stands for quenching); in practice, to reduce uncertainties due to temporal \ufb02uctuations in SFR, tq is set to equal to t(n+1) \f\u2013 9 \u2013 3 4 5 6 7 8 \u22123 \u22122 \u22121 0 1 2 3 4 log variable quantities 10.2 3 4 5 6 7 8 \u22123 \u22122 \u22121 0 1 2 3 4 10.1 log ram pressure\u22122 log sSFR (Gyr\u22121) log halo mass\u221210 3 4 5 6 7 8 \u22124 \u22123 \u22122 \u22121 0 1 2 3 t (Gyr) log variable quantities 10.1 log SFR (Msun/yr) log cold gas (<10kpc) \u2212 10 log cold gas (<100kpc) \u2212 10 3 4 5 6 7 8 \u22124 \u22123 \u22122 \u22121 0 1 2 3 t (Gyr) 10.2 Fig. 2.\u2014 four panels show the histories of six variables for four randomly selected red galaxies with stellar mass of \u223c1010 M\u2299at z = 0.62: log SFR (in M\u2299/yr) (solid dots), log ram pressure (in Kelvin cm\u22123) 2 (stars), log cold gas within 10kpc (in M\u2299) -10 (open circles), log cold gas within 100kpc (in M\u2299) -10 (open squares), log sSFR (in Gyr\u22121) (down-pointing triangles) and log halo mass (in M\u2299) -10 (solid diamonds); The color of symbols at any given time corresponds the color of the galaxy at that time. The logarithm of the stellar mass at z = 0.62 is indicated in the upper-right corner in each panel. The vertical dashed line in each panel indicates the location of tq. when the sliding-window di\ufb00erence (SFR(n + 3) \u2212SFR(n)/(t(n + 3) \u2212t(n)) is most negative, where t(1), t(2), ..., t(n), ... are the times of our data outputs, as noted earlier. Galaxies at tq are collectively called SFQs for star formation quenching galaxies. 
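A minimal sketch of this quenching-time estimator, assuming catalog outputs stored as plain arrays of times and SFRs along one genealogical line (array names are hypothetical placeholders), is:

import numpy as np

def quenching_time(t, sfr, t_g, window=3):
    # t, sfr: output times (Gyr) and SFRs along one genealogical line;
    # t_g: time the galaxy turns green (g-r = 0.55).
    sel = np.where((t >= t_g - 2.0) & (t <= t_g + 1.0))[0]
    valid = sel[sel + window < len(t)]
    # Sliding-window slope (SFR(n+3) - SFR(n)) / (t(n+3) - t(n)).
    slopes = (sfr[valid + window] - sfr[valid]) / (t[valid + window] - t[valid])
    n = valid[np.argmin(slopes)]
    t_q = t[n + 1]                      # convention in the text: t_q = t(n+1)
    # Exponential decay time tau_q = (d ln SFR / dt)^-1 at t_q, with the sign
    # chosen so that a declining SFR gives a positive timescale.
    tau_q = -1.0 / np.gradient(np.log(sfr), t)[n + 1]
    return t_q, tau_q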
To demonstrate the reliability and accuracy of identi\ufb01cation of tq we show in Figure 1 the histories for a set of four randomly selected red galaxies at z = 0.62 of stellar mass \u223c1011 M\u2299. The vertical dashed line in each panel shows tq, which is the location of steepest drop of SFR (solid dots). In all four cases, our method identi\ufb01es the location accurately. Figure 2 is similar to Figure 1 but for galaxies of stellar mass \u223c1010 M\u2299, where we see our method identi\ufb01es tq with a similar accuracy. Similarly, we identify a point in time in the range tg\u22122Gyr to tg+1Gyr, when the derivative of the amount of cold gas, (M10, M30, M100) within radial ranges (0 \u221210, 0 \u221230, 0 \u2212100)kpc, with respect to time is most negative as (t30, t100, t300), respectively. We de\ufb01ne cold gas as gas with temperature less than 105K. The exponential decay time scale of SFR at tq is de\ufb01ned by \u03c4q \u2261 (d ln SFR/dt)\u22121. The exponential decay time scale of (M10, M30, M100) at (t30, t100, t300) are de\ufb01ned \f\u2013 10 \u2013 by [\u03c410 \u2261(d ln M10/dt)\u22121, \u03c430 \u2261(d ln M30/dt)\u22121, \u03c4100 \u2261(d ln M100/dt)\u22121]. The time interval between tq and tr is denoted as tqr, The time duration that the galaxy spends in the green valley before turning red is called tgreen. The time duration the galaxy has spent in the red sequence by z = 0.62 is denoted as tred. We make a needed simpli\ufb01cation by approximating the ram-pressure, denoted as p300, by p300 \u2261\u03c16(300)T(300), where \u03c16(300) and T(300), respectively, are the mean density of gas with temperature \u2265106K and T(300) the mean mass-weighted gas temperature within a proper radius of 300kpc centered on the galaxy in question. This tradeo\ufb00is made thanks chie\ufb02y to the di\ufb03culty of de\ufb01ning precisely the motion of a galaxy relative to its ambient gas environment, where the latter often has complex density and velocity structures, and the former has complex, generally nonspherical gas distribution geometry. In a gravitationally shock heated medium, this approximation should be reasonably good, because the ram-pressure is approximately equal to thermal pressure in post-shock regions. We de\ufb01ne a point in time searched over the time interval between tq\u22122Gyr and tq+1Gyrs, when the derivative of p300 with respect to time is maximum as tram, intended to serve as the point in time when ram-pressure has the steepest rise, As stated in the introduction, it is convenient to express gas cooling time that is proportional to gas entropy to the power 3/2, S3/2. Thus, we approximate gas starvation from large scales by the value of environmental entropy S300, de\ufb01ned to be the average gas entropy within a top-hat sphere of proper radius 300kpc. For convenience, frequently used symbols and their de\ufb01nitions are given in Table 1. 2.4. Tests and Validation of Simulation The galaxy formation simulation in a cosmological setting used here includes sophisticated physical treatment, ultra-high resolution and a very large galaxy sample to statistically address cosmological and astrophysical questions. While this simulation represents the current state-of-theart in these respects, feedback from SF is still far from being treated from \ufb01rst principles. Thus, it is necessary that we validate the feedback prescription empirically. 
In Cen (2012b) we presented an examination of the damped Lyman alpha systems (DLAs) and found that the simulations, for the \ufb01rst time, are able to match all observed properties of DLAs, including abundance, size, metallicity and kinematics. In particular, the metal distribution in and around galaxies over a wide range of redshift (z = 0 \u22125) is shown to be in excellent agreement with observations (Rafelski et al. 2012). The scales probed by DLAs range from stellar disks at low redshift to about one half of the virial radius at high redshift. In Cen (2012a) we further show that the properties of O VI absorption lines at low redshift, including their abundance, Doppler-column density distribution, temperature range, metallicity and coincidence between O VII and O VI lines, are all in good agreement with observations (Danforth & Shull 2008; Tripp et al. 2008; Yao et al. 2009). The agreement between simulations and observations with respect to O VI lines is recently shown to extend to the correlation between galaxies and O VI lines, the relative incidence ratio of O VI around red to blue galaxies, the amount of oxygen mass around red and blue galaxies as well \f\u2013 11 \u2013 Table 1. De\ufb01nitions of symbols and names symbol/name de\ufb01nition/meaning tg a point in time when galaxy has g \u2212r = 0.55 tr a point in time when galaxy has g \u2212r = 0.65 M10 amount of cold gas within a radius of 10kpc M30 amount of cold gas within a radius of 30kpc M100 amount of cold gas within a radius of 100kpc tq a point in time of quenching for SFR t10 a point time of quenching for M10 t30 a point time of quenching for M30 t100 a point time of quenching for M100 tram a point in time of largest \ufb01rst derivative of ram-pressure w.r.t time \u03c4q exponential decay time of SFR at tq \u03c410 exponential decay time of M10 at t10 \u03c430 exponential decay time of M30 at t30 \u03c4100 exponential decay time of M100 at t100 \u2206M\u2217 stellar mass change between tq and tr \u2206M10 M10 mass change between tq and tr \u2206M30 M30 mass change between tq and tr \u2206M100 M100 mass between tq and tr rSFR e e\ufb00ective radius of young stars formed within the past 100Myr T300 environmental temperature within physical radius of 300kpc S300 environmental entropy within physical radius of 300kpc p300 environmental pressure within physical radius of 300kpc \u03b42 environmental overdensity within comoving radius of 2h\u22121Mpc d/rc v distance to primary galaxy in units of virial radius of primary galaxy tqr time duration from tq to tr tgreen time spent in green valley tred time spent in red sequence Mc h halo mass of primary galaxy Ms \u2217/M c \u2217 stellar mass ratio of satellite to primary galaxy \f\u2013 12 \u2013 as cold gas around red galaxies (Cen 2013). In addition to agreements with observations with respect to circumgalactic and intergalactic medium, we \ufb01nd that our simulations are able to match the global SFR history (the Madau plot) and galaxy evolution (Cen 2011a), the luminosity function of galaxies at high (Cen 2011b) and low redshift (Cen 2011a), and the galaxy color distribution (Cen 2011a; Tonnesen & Cen 2012), within observational uncertainties. In Cen (2011a) we show that our simulations reproduce many trends in the global evolution of galaxies and various manifestations of the cosmic downsizing phenomenon. 
Speci\ufb01cally, our simulations show that, at any redshift, the speci\ufb01c star formation rate of galaxies, on average, correlates negatively with galaxy stellar mass, which seems to be the primary physical process for driving the cosmic downsizing phenomena observed. Smoothed particle hydrodynamic (SPH) simulations and semi-analytic methods, in comparison, appear to produce a positive correlation between the speci\ufb01c star formation rate of galaxies and galaxy stellar mass, which is opposite to what we \ufb01nd (e.g., Weinmann et al. 2012). These broad agreements between our simulations and observations indicate that, among others, our treatment of feedback processes from SF, i.e., the transport of metals and energy from galaxies, from SF sites to megaparsec scale (i.e., from interstellar to intergalactic medium) are realistically modeled as a function of distance and environment, at least in a statistical sense, and it is meaningful to employ our simulated galaxies, circumgalactic and intergalactic medium for understanding physical processes and for confrontations with other, independent observations. In order to determine what galaxies in our simulations to use in our subsequent analysis, we make an empirical numerical convergence test. Top-right panel in Figure 3 shows comparisons between galaxies of two simulations at z = 3 with di\ufb00erent resolutions for the luminosity functions in rest-frame g and r bands. The \ufb01ducial simulation has a resolution of 114pc/h and an identical comparison run has four times better resolution of 29pc/h. We are not able to make comparisons at redshift substantially lower than z = 3 at this time. In any case, we expect that the comparison at z = 3 is a more stringent test, because the resolution e\ufb00ect is likely more severe at higher redshift than at lower redshift in a hierarchical growth model where galaxies become increasingly larger with time. The comparisons are best done statistically, because not all individual galaxies can be identi\ufb01ed at a one-to-one basis due to resolution-dependent star formation and merging histories. Comparisons with respect to other measures, such as stellar mass function, SFR, etc, give comparable convergence. Based on results shown, we decide to place a lower stellar mass limit of 109.5 M\u2299, which is more than 75% complete for almost all relevant quantities, to the extent that we are able to make statistical comparisons between these two runs with respect to the global properties of galaxies (stellar mass, luminosity, SFR, sSFR, etc). In terms of checking the validity and applicability of the simulations, we also make comparisons for the galaxy cumulative mass function at z = 1 with observations in the top-left panel of Figure 3. We see that the simulated galaxies have a higher abundance than observed by a factor of 4 \u22125 in the low mass end and the di\ufb00erence increases towards higher mass end. This di\ufb00erence is expected, because the simulation volume is an overdense region that has a higher galaxy density overall and progressively higher densities for more rare, higher mass galaxies. This di\ufb00erence is also borne out in the comparisons between simulated and observed rest-frame g band galaxy luminosity \f\u2013 13 \u2013 9 10 11 \u22124 \u22123 \u22122 \u22121 log M* (Msun) log n(>M*) (h3Mpc\u22123) CMF@z=1 van der Burg 2013 \u221226 \u221225 \u221224 \u221223 \u221222 \u221221 \u221220 \u22124 \u22123 \u22122 \u22121 Mg & Mr log n( 3 \u00d7 1010 M\u2299 (magenta), at z = 0.62. 
The g-r color distributions show clear bimodalities for all three subsets of galaxies, with the red peak becoming more prominent for less luminous galaxies at z = 0.62, consistent with recent observations (e.g., Bell et al. 2004; Willmer et al. 2006; Bundy et al. 2006; \f\u2013 15 \u2013 Faber et al. 2007). We also caution that one should not overstate the success in this regard for two reasons. First, on the simulation side, since our simulation volume does not necessarily represent an \u201caverage\u201d volume of the universe, a direct comparison to observations would be di\ufb03cult. Second, observations at high redshift (i.e., z \u223c0.62) are perhaps less complete than at low redshift, and identi\ufb01cation of low mass (and especially low surface brightness) galaxies, in particular those that are satellite galaxies and red, may be challenged at present (e.g., Knobel et al. 2013). Our main purpose is to make a comparative study of galaxies of di\ufb00erent types in the simulation and to understand how blue galaxies turn red. It is intriguing to note that there is no lack of red dwarf galaxies. While a direct comparison to observations with respect to abundant red dwarf galaxies can not be made at z = 0.62, future observations may be able to check this. Since our simulation does not include AGN mechanical feedback, this suggests that the bimodal nature of galaxy colors does not necessarily require AGN feedback for galaxies in the mass ranges examined. This \ufb01nding is in agreement with Feldmann et al. (2011), who \ufb01nd that AGN feedback is not an essential ingredient for producing quiescent, red elliptical galaxies in galaxy groups. While SF feedback is included in our simulation, our subsequent analysis shows that environmental e\ufb00ects play the dominant role in driving galaxy color evolution and consequently color bimodality. Our results do not, however, exclude the possibility that AGN feedback may play an important role in regulating larger, central galaxies, such as cD galaxies at the centers of rich clusters of galaxies, for which we do not have a su\ufb03cient sample to make a statistical statement. Our earlier comparison between simulated luminosity functions of galaxies at z = 0 and SDSS observations indicates that some additional feedback, likely in the way of AGN, may be required to suppress star formation in the most massive galaxies (Cen 2011a). 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0 0.05 0.1 0.15 g\u2212r PDF(g\u2212r) 3\u00d7109\u22121\u00d71010Msun 1\u00d71010\u22123\u00d71010Msun >3\u00d71010Msun Fig. 4.\u2014 shows g \u2212r color distributions of simulated galaxies in three stellar mass ranges, 3\u00d7109 \u2212 1 \u00d7 1010 M\u2299(black), 1 \u00d7 1010 \u22123 \u00d7 1010 M\u2299(cyan) and > 3 \u00d7 1010 M\u2299(magenta), at z = 0.62. \f\u2013 16 \u2013 3. Results Most of our results shown are presented through a variety of comparisons of the dependencies of galaxies of di\ufb00erent types on a set of environmental variables, to learn how galaxies change color. We organize our analysis in an approximately chronological order. In \u00a73.1 we focus on processes around the \u201cquenching\u201d time, tq, followed by the ensuing period of gas starvation in hot environment in \u00a73.2. In \u00a73.3 we discuss stellar mass growth, evolution of stellar mass function of red galaxies and present galaxy color migration tracks. \u00a73.4 gives an example of consequences of the color migration picture galaxy age-mass relation. 
We present observable environmental dependence of galaxy makeup at z = 0.62 in \u00a73.5. 3.1. Ram-Pressure Stripping: Onset of Star Formation Quenching 0 1 2 3 1.5 2 2.5 3 log S300@tq (keV cm2) log oq (Myr) 0 1 2 3 4 1.5 2 2.5 3 log p300@tq (K cm\u22123) log oq (Myr) \u22121 0 1 2 1.5 2 2.5 3 log 1+b2@tq log oq (Myr) 1 2 3 4 5 1.5 2 2.5 3 d/rv c@tq log oq (Myr) red: satellites black: centrals M* = 3\u00d7 109 M* = 1010 M* = 1011 Fig. 5.\u2014 shows \u03c4q, the exponential decay time of SFR, against four environmental variables at tq: ram-pressure p300 on 300kpc proper scale, environmental entropy S300 on 300kpc proper scale, distance to primary galaxy d/rc v in units of the primary galaxy\u2019s virial radius and environmental overdensity \u03b42 on 2h\u22121Mpc comoving scale. The magenta solid dots with dispersions are the means. Figure 5 shows the quenching time scale \u03c4q (star formation rate exponential decay time) against four environmental variables at the quenching time tq: ram-pressure p300, environmental entropy S300, distance to primary galaxy d/rc v and environmental overdensity \u03b42. It is useful to make clear some nomenclature here. We have used the distance to the primary galaxy, d/rc v, as an environment \f\u2013 17 \u2013 variable, which runs from zero to values signi\ufb01cant above unity. This is merely saying that any galaxy (except the most massive galaxy in the simulation) can \ufb01nd a larger galaxy at some distance, not necessarily at d/rc v \u22641. The de\ufb01nition of \u201csatellite galaxies\u201d is reserved only for those galaxies with d/rc v \u22641, shown clearly as red circles in the low-left panel of Figure 5. The black circles, labeled as \u201ccentrals\u201d are galaxies with d/rc v > 1, i.e., those that are not not \u201csatellite galaxies\u201d. The observation that galaxies are being quenched at all radii d/rc v > 1 as well as d/rc v < 1 indicates that the most likey physical mechanism for the onset of quenching is ram-pressure. Tidal stripping is not expected to be e\ufb00ective at removing gas (or stars) at d/rc v > 1 (see \u00a72.3 for a discussion). The fact that \u03c4q decreases with increasing p300 is self-consistent with ram-pressure being responsible for the onset of quenching. The outcome that \u03c4q only very weakly anti-correlates with p300 indicates that the onset of quenching is some \u201cthreshold\u201d event, which presumably occurs when the ram-pressure exceeds gravitational restoring force (i.e., the threshold), thus strongly re-enforcing the observation that ram-pressure is largely responsible for the onset of quenching. A \u201cthreshold\u201d type mechanism \ufb01ts nicely with the fact that the dispersion of \u03c4q at a given p300 is substantially larger than the correlation trend, because galaxies that cross the \u201cthreshold\u201d are expected to depend on very inhomogeneous internal properties among galaxies (see Figure 6 below). The weak anticorrelation between \u03c4q and \u03b42 stems from a broad positive correlation between p300 and \u03b42. The fact that there is no discernible correlation between \u03c4q and S300 indicates that the onset of quenching is not initiated by gas starvation. The most noticeable contrast to the weak trends noted above is the di\ufb00erence between satellite galaxies (at d/rc v < 1) and central galaxies (at d/rc v > 1), in that \u03c4q of the former is lower than that of the latter by a factor of \u223c2. This is naturally explained as follows. 
First, at d/rc v < 1 ram-pressure stripping and tidal stripping operate in tandem to accelerate the gas removal process, whereas at d/rc v > 1 ram-pressure stripping operates \u201calone\u201d to remove gas on somewhat longer time scales. Second, at d/rc v > 1 ram-pressure stripping is, on average, less strong than at d/rc v < 1. Possible internal variables that a\ufb00ect the e\ufb00ectiveness of ram-pressure stripping include the relative orientation of the normal of the gas disk and the motion vector, rotation velocity of the gas disk, whether gas disk spiral arms are trailing or not at time of ram-pressure stripping, gas surface density amplitude and pro\ufb01le, dark matter halo density pro\ufb01le. As an obvious example, galaxies that have their motion vector and disk normal aligned are likely to have maximum rampressure stripping e\ufb00ect, everything else being equal. In the other extreme when the two vectors are perpendicular to each other, the ram-pressure stripping e\ufb00ect may be minimized. Needless to say, given many factors involved, the onset of ram-pressure stripping e\ufb00ect will be multi-variant. We elaborate on the multi-variant nature of ram-pressure stripping with one example. The top-panel of Figure 6 shows \u03c4q as a function of the stellar surface density \u03a3e within the e\ufb00ective stellar radius re. We see a signi\ufb01cant positive correlation between \u03c4q and \u03a3e in the sense that it takes longer to rampressure-remove cold gas with higher central surface density (hence higher gravitational restoring force) galaxies. While this positive correlation between \u03c4q and \u03a3e is consistent with observational indications (e.g., Cheung et al. 2012), the underlying physical origin is in a sense subtle. Since rampressure stripping is a \u201cthreshold\u201d event, as noted earlier, when ram-pressure force just exceeds the internal gravitational restoring force, one would have expected that a high surface density would \f\u2013 18 \u2013 yield a shorter dynamic time hence a shorter \u03c4q. This is in fact an incorrect interpretation. Rather, the gas in the central regions where \u03a3e is measured is immune to ram-pressure stripping in the vast majority of cases (see Figure (8) below). Instead, a higher \u03a3e translates, on average, to a larger scale where gas is removed, which has a longer dynamic time hence a longer \u03c4q. 8 9 10 2.5 3 3.5 log Ye (Msun/kpc2) log tqr (Myr) 3\u00d7 109 1010 1011 8 9 10 1.5 2 2.5 3 log oq (Myr) red: satellites black: centrals Fig. 6.\u2014 Top panel: the exponential decay time scale of SFR, \u03c4q, as a function of the stellar surface density \u03a3e within the stellar e\ufb00ective radius re. Bottom panel: the time interval between onset of quenching and the time the galaxy turns red, tqr as a function of \u03a3e. The magenta dots are the averages at a given x-axis value. Taken together, we conclude that, while a high ram-pressure provides the conditions for rampressure stripping to take e\ufb00ect, the e\ufb00ectiveness or timescale for gas removal by ram-pressure stripping also depend on the internal structure of galaxies. It is very interesting to note that, unlike between tq and \u03a3e, tqr (the time interval between the onset of quenching tq and the time when the galaxy turns red) and \u03a3e shown in the bottom-panel of Figure 6, if anything, is weakly anticorrelated. 
We attribute this outcome to the phenomenon that galaxies with higher central surface \f\u2013 19 \u2013 density have a shorter time scale for consuming the existing cold gas hence, once the overall cold gas reservoir is removed. This explanation will be elaborated more later. \u22121 \u22120.8 \u22120.6 \u22120.4 \u22120.2 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 tq\u2212tram (Gyr) PDF M*>3\u00d7 109 M*>3\u00d7 1010 Fig. 7.\u2014 the histograms of tq \u2212tram for red galaxies at z = 0.62, where tram is the point in time when the derivative of p300 with respect to time is maximum, and tq is the onset of quenching time for SFR. The vertical thick lines show the medians of the corresponding histograms of the same colors, and the vertical thin lines for each color are for 25% and 75% percentiles. We have made the case above that ram-pressure stripping is primarily responsible for the onset of quenching process based on evidence on the dependence of exponential decay time of SFR at the onset of quenching on environment variables. We now make a direct comparison between tq and tram. Figure (7) shows the histograms of tq \u2212tram. We see that the time di\ufb00erence between the two is centered around zero, indicating a casual connection between the onset of SFR quenching and the rapid rise of ram-pressure. The width of the distribution of a few hundred Myrs re\ufb02ects the fact that the exact strength of ram-pressure stripping required to dislodge the gas varies greatly, depending on many variables as discussed above. This is as yet the strongest supporting evidence for ram-pressure stripping being responsible for the onset of quenching, especially considering the conjunctional evidence that the onset of quenching could occur outside the virial radius of a larger neighboring galaxy where tidal stripping is expected to be less e\ufb00ective and yet ram-pressure is expected to become important. Evidence so far supports the notion that ram-pressure stripping is the initial driver for the decline of SFR in galaxies that are en route to the red sequence. The immediate question is then: What region in galaxies does ram-pressure stripping a\ufb00ect? To answer this question, we need to compare the amount of cold gas available at tq with the amount of star formation that ocurrs subsequently. We compute the following ratios: the ratios of the amount of stars formed during the time interval from tq to the time the galaxy turns red (tr) to the di\ufb00erence between the amount of \f\u2013 20 \u2013 \u22122 \u22121 0 1 0.1 0.2 0.3 0.4 M*>3x109 PDF \u22123 \u22122 \u22121 0 1 2 0 0.1 0.2 0.3 0.4 M*>3x1010 log\u22126 M*/6 MX PDF X=10 X=30 X=100 Fig. 8.\u2014 shows the distribution of \u2212\u2206M\u2217/\u2206MX at three radial ranges, X = (10, 30, 100). \u2206M\u2217is the amount of stars formed during the time interval from the onset of quenching tq to the time the galaxy turns red tr, and \u2206MX is the di\ufb00erence of the amount of cold (T < 105K) gas within a radius X kpc between tr and tq. The vertical dashed lines show the medians of the corresponding histograms of the same colors. cold (T < 105K) gas at tq and tr, denoted as (\u2212\u2206M\u2217/\u2206M10, \u2212\u2206M\u2217/\u2206M30, \u2212\u2206M\u2217/\u2206M100) within three radii (10, 30, 100)kpc. The minus signs are intended to make the ratios positive, since stellar mass typically increase with time, whereas the cold gas mass for galaxies being quenched decreases. Note that \u2206M\u2217could be negative. 
We set a \ufb02oor value to the above ratios at 10\u22123. Figure (8) shows the distributions of \u2212\u2206M\u2217/\u2206MX, where X = (10, 30, 100). We see that for r \u226410kpc the peak of the distribution (red histograms) for all stellar masses is at \u2212\u2206M\u2217/\u2206M10 > 1, typically in the range of 2 \u221210, with the vast majority of cases at > 1. For a larger radius r \u226430kpc the distribution (green histograms) for all stellar masses is now peaked at \u2212\u2206M\u2217/\u2206M30 \u223c1 with about 50% at < 1, while for a still larger radius r \u2264100kpc the distribution (green histograms) it is now shifted to \u2212\u2206M\u2217/\u2206M100 \u22641 and more than 50% at less than (0.01, 0.4) for galaxies of stellar mass > (3 \u00d7 109, 3 \u00d7 1010) M\u2299, respectively. This is unambiguous evidence that ram-pressure stripping removes the majority of cold gas on scales \u226530kpc, while the cold gas within 10kpc is una\ufb00ected by ram-pressure stripping and consumed by subsequent in situ star formation. It is noted that the lack of e\ufb00ect on the cold gas at r < 10kpc by ram-pressure stripping seems universal, the gas removal by ram-pressure stripping at larger radii r > 30kpc varies substantially from galaxy to galaxy, which we argue is consistent with the large variations of \u03c4q seen in Figure 5. We thus conclude \f\u2013 21 \u2013 \u22121 \u22120.5 0 0.5 1 0 0.2 0.4 0.6 log re (SFR) (kpc) PDF M*=3\u00d7 109 M*>3\u00d7 1010 \u22120.6 \u22120.4 \u22120.2 0 0.2 0 0.2 0.4 0.6 dre(SFR)/dlnSFR (kpc) PDF red M*>3\u00d7 109 M*>3\u00d7 1010 Fig. 9.\u2014 Left panel shows the distribution of the e\ufb00ective radius of stars formed in the last 100Myrs, re(SFR), at tq for galaxies in two stellar mass ranges. The vertical dot-dashed line indicates the e\ufb00ective resolution of the simulation, taken from the bottom-right panel in Figure 3. Right panel shows the distribution of the ratio of the decline of re(SFR) with respect to the decline of SFR, d re(SFR)/d ln SFR at tq. In each panel, we di\ufb00erentiate between galaxies in three separate stellar mass ranges. The vertical lines show the medians of the corresponding histograms of the same colors. that there is continued nuclear SF in the quenching phase. Feldmann et al. (2011), based on a smaller sample of simulated galaxies that form a group of galaxies with a spatial resolution of 300pc (compared to 160pc here), \ufb01nd that in situ star formation is responsible for consuming a substantial fraction of the residual gas on small scales after gas accretion is stopped subsequent to the infall, consistent with our results. This outside-in ram-pressure stripping picture and continuous SF in the inner region that emerges from the above analysis has important implications and observable consequences, consistent with the latest observations (e.g., Gavazzi et al. 2013). We quantify how centrally concentrated the star formation is at the outset of SF quenching in Figure (9), in part to assess our ability to resolve SF during the quenching phase. The left panel shows the distribution of the e\ufb00ective radius of stars formed in the last 100Myrs prior to tq, denoted as rSFR e , for galaxies in two stellar mass ranges. The right panel shows the distribution of the ratio of the decline of rSFR e with respect to the decline of SFR, drSFR e /d ln SFR at tq. It is evident from Figure (9) that more massive galaxies tend to have larger rSFR e , as expected. 
It is also evident that the recent formation for the vast majority of galaxies occurs within a radius of a few kiloparsecs. It is noted that ongoing SF in a signi\ufb01cant fraction of galaxies with stellar masses \u22643 \u00d7 109 M\u2299is under-resolved, as indicated by the vertical dot-dashed line in the left panel. However, none of our subsequent conclusions would be much altered by this numerical e\ufb00ect, because (1) all of our conclusions appear to be universal across the stellar mass ranges and (2) the inner region of 10kpc is not much a\ufb00ected by ram-pressure stripping anyway (thus underresolving a small central fraction within 10kpc does not a\ufb00ect the overall ram-pressure stripping e\ufb00ects). What is interesting is that more than 50% of galaxies in both stellar mass ranges have negative values of drSFR e /d ln SFR at tq, indicating that, when the SFR decreases in the quenching \f\u2013 22 \u2013 phase, star formation proceeds at progressively larger radii in the central region. This result, while maybe somewhat counter-intuitive, is physically understandable. We attribute this inside-out star formation picture to the star formation rate surface density being a superlinear function of gas surface density in the Kennicutt-Schmidt (Schmidt 1959; Kennicutt 1998) law. The picture goes as follows: when gas supply from large scales (\u223c100kpc) is cut o\ufb00and under the assumption that gas in the central region does not re-distribute radially, the SFR diminishes faster with decreasing radius in the central region where SF occurs, causing the e\ufb00ective SF radius to increase with time and star formation rate to decline faster than cold gas content, while the overall SFR is declining. In summary, ram-pressure stripping is ine\ufb00ective in removing cold gas that is already present on scales of \u226410kpc but most e\ufb00ective in removing less dense gas on larger scales of \u226530kpc. The chief role played by ram-pressure stripping appears to disconnect galaxies from their cold reservoir on scales that are much larger the typical stellar radii. The time scale in question is then on the order of the dynamical time of galaxies at close to the virial radius. 3.2. Starving Galaxies to the Red Sequence and Environmental Sphere of In\ufb02uence The previous subsection details some of the e\ufb00ects on galaxies being quenched due to gas removal by ram-pressure stripping (in conjunction with other hydrodynamical processes) along with consumption by concurrent SF. Our attention is now turned to the subsequent evolution. Figure 10 plots tqr against four environmental variables at tq. From all panels we consistently see the expected trends: the time interval tqr from onset of quenching tq to turning red tr, on average, decreases with increasing environmental pressure, increasing environmental entropy, increasing environmental overdensity and decreasing distance to the primary galaxy. While there is a discernible di\ufb00erence in tqr between satellite galaxies and central galaxies, the di\ufb00erence is substantially smaller than that in the initial exponential decay time scale of SFR \u03c4q (see Figure 5). This observation makes it clear that the onset of quenching initiated by ram-pressure stripping does not determine the overall duration of quenching. 
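For concreteness, a minimal sketch of how the environmental proxies used in Figures 5 and 10 (S300, p300 and the mass-weighted T300) might be computed from gas cells around a galaxy is given below; the cell arrays, the mass weighting of S300 and the omitted unit conversions are assumptions of this sketch rather than the paper's actual analysis pipeline.

import numpy as np

def environment_proxies(cell_mass, cell_T, cell_n, cell_r_kpc, r_max=300.0):
    # Select gas cells within a proper 300 kpc sphere of the galaxy.
    inside = cell_r_kpc < r_max
    m, T, n = cell_mass[inside], cell_T[inside], cell_n[inside]
    # Mass-weighted temperature T300 and (assumed mass-weighted) entropy
    # S300, with S = T / n^(2/3) in K cm^2 as defined in the text.
    T300 = np.average(T, weights=m)
    S300 = np.average(T / n ** (2.0 / 3.0), weights=m)
    # Ram-pressure proxy p300 = (mean density of gas with T >= 1e6 K) * T300.
    # Conversions from mass to number density and kpc^3 to cm^3 are omitted.
    hot = T >= 1.0e6
    rho_hot = m[hot].sum() / ((4.0 / 3.0) * np.pi * r_max ** 3)
    p300 = rho_hot * T300
    return S300, p300, T300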
Since all the environment variables used tend to broadly correlate with one another higher density regions tend to have higher temperatures, higher gas entropy and higher pressure it is not surprising that we see tqr are correlated with all of them in the expected sense. Earlier we have shown that tqr is weakly anti-correlated with the stellar surface density at re, \u03a3e (see bottom-panel of Figure 6). This suggests that the overall duration from onset of quenching to turning red is not a matter of a galaxy\u2019s ability to hold on to its existing cold gas but rather the extent of the external gas supply condition, i.e., environment. This hypothesis is signi\ufb01cantly a\ufb03rmed by noticing that the strongest anti-correlation is found between tqr and S300, among all environment variables examined. Thus, we conclude, given available evidence, that the eventual \u201cpush\u201d of galaxies into the red sequence is not as a spectacular event as the initial onset of quenching that is triggered by a cuto\ufb00of large-scale gas supply due to ram-pressure stripping, and is essentially the process of gas starvation, when the galaxy has entered a low cold gas density and/or high temperature and/or high velocity dispersion environment. We present distributions of tqr in Figure 11. The top-left panel shows the distribution of tqr for satellite galaxies (those with d/rc h \u22641), grouped into three primary halo mass ranges: \f\u2013 23 \u2013 0 1 2 2.5 3 3.5 log S300@tq (keV cm2) log tqr (Myr) 0 1 2 3 4 5 2.5 3 3.5 log p300@tq (K cm\u22123) log tqr (Myr) M* = 3\u00d7 109 M* = 1010 M* = 1011 \u22121 0 1 2 2.5 3 3.5 log 1+b2@tq log tqr (Myr) red: satellites black: centrals 1 2 3 4 5 2.5 3 3.5 d/rv c@tq log tqr (Myr) Fig. 10.\u2014 shows tqr (time interval from the onset of quenching to the time the galaxy turns red) against four environmental variables at tq: ram-pressure p300 on 300kpc proper scale, environmental entropy S300 on 300kpc proper scale, distance to primary galaxy d/rc v in units of the primary galaxy\u2019s virial radius and environmental overdensity \u03b42 on 2h\u22121Mpc comoving scale. The red dash line in the upper-right panel is intended to indicate a visually noticeable trend. Red circles are satellite galaxies at tq, i.e., within the virial radius of a larger galaxy, and black circles are for non-satellite galaxies. The size of each circle indicates the stellar mass of a galaxy, as shown in the legend in the lower-left panel. M c h = 1011 \u22121012 M\u2299(black), M c h = 1012 \u22121013 M\u2299(green), M c h > 1013 M\u2299(red); the medians of the distributions are (1.2,1.3,1.2)Gyr, respectively. The top-right panel shows the distribution of tqr for satellite galaxies grouped into three ranges of the ratio of satellite to cental stellar mass: M s \u2217/M c \u2217= 0.1 \u22121 (black), M s \u2217/M c \u2217= 0.01 \u22120.1 (green), M s \u2217/M c \u2217= 0.001 \u22120.01 (red); the medians of the distributions of the three groups are nearly identical at \u223c1.3Gyr. The bottom-left panel shows the distribution of tqr for primary galaxies (those with d/rc h > 1), grouped into two halo mass ranges: M c h = 1010 \u22121011 M\u2299(black), and M c h = 1011 \u22121012 M\u2299(green). We see that the medians of the distributions are 1.2Gyr for both mass ranges. The bottom-right panel plots the distribution of all satellite galaxies and all central galaxies, along with a simple gaussian \ufb01t to the combined set. 
A look of the bottom-right panel of Figure 11 suggests that there is practically no di\ufb00erence between the two distributions. At \ufb01rst sight, this may seem incomprehensible. A closer examination reveals the underlying physics. \f\u2013 24 \u2013 \u22120.4 \u22120.2 0 0.2 0.4 0.6 0 0.1 0.2 0.3 0.4 log tqr (Gyr) PDF satellites Mh c=1011\u221212 Mh c=1012\u221213 Mh c>1013 \u22120.4 \u22120.2 0 0.2 0.4 0.6 0 0.1 0.2 0.3 0.4 log tqr (Gyr) PDF satellites M* s/M* c=0.1\u22121 M* s/M* c=10\u22122\u221210\u22121 M* s/M* c=10\u22123\u221210\u22122 \u22120.4 \u22120.2 0 0.2 0.4 0.6 0 0.1 0.2 0.3 0.4 log tqr (Gyr) PDF centrals Mh c=1010\u221211 Mh c=1011\u221212 \u22120.4 \u22120.2 0 0.2 0.4 0.6 0 0.1 0.2 0.3 0.4 log tqr (Gyr) PDF all satellites all centrals all galaxies Fig. 11.\u2014 Top-left panel: shows the distribution of tqr for satellite galaxies at z = 0.62, separated into three primary halo mass ranges: M c h = 1011 \u22121012 M\u2299(black), M c h = 1012 \u22121013 M\u2299(green), M c h > 1013 M\u2299(red). Top-right panel: shows the distribution of tqr for satellite galaxies at z = 0.62, separated into three ranges of satellite stellar mass to primary stellar mass ratio: M s \u2217/M c \u2217= 0.1 \u22121 (black), M s \u2217/M c \u2217= 0.01 \u22120.1 (green), M s \u2217/M c \u2217= 0.001 \u22120.01 (red). Bottom left panel: shows the distribution of tqr for primary galaxies at z = 0.62, separated into three primary halo mass ranges: M c h = 1010 \u22121011 M\u2299(black), M c h = 1011 \u22121012 M\u2299(green), M c h > 1012 M\u2299(red). The three vertical dashed lines of order (thin, thick, thin) are the (25%, 50%, 75%) percentiles for the histograms of the same color. Bottom right panel: shows the distribution of tqr for all satellite galaxies (blue), all primary galaxies (red) and all galaxies (black) at z = 0.62. An eye-balling lognormal \ufb01t is shown as the magenta line (see Eq 2). Figure 12 shows the distributions of d/rc v at tq for satellite (red) and central (black) red galaxies at z = 0.62. While it is not a surprise that the vast majority of the satellite galaxies at z = 0.62 have their onset of quenching taking place at d/rc v \u22643 at tq, it is evident that the same appears to be true for the central galaxies at z = 0.62. This observation supports the picture that both satellite and central red galaxies at z = 0.62 have been subject to similar environment e\ufb00ects that turn them red. It is noted again that this statement that red central galaxies have been subject to similar processes as the red satellite galaxies has been quantitatively con\ufb01rmed in Figure 11. \f\u2013 25 \u2013 0 1 2 3 4 5 0 0.05 0.1 0.15 d/rv c@tq PDF M* > 3\u00d7 109 satellites at z=0.62 M* > 3\u00d7 109 centrals at z=0.62 Fig. 12.\u2014 shows the distribution of the relative distance d/rc v of progenitors at tq of red galaxies at z = 0.62 for two subsets of galaxies: the red histogram for those that are within the virial radius of a larger galaxy (i.e., satellite galaxies at z = 0.62) and the black histogram for those that are not within the virial radius of a larger galaxy at z = 0.62. The thick blue vertical dashed lines are 50% percentiles for all galaxies being quenched and the thin blue vertical dashed lines are 25% and 75% percentiles. The suggestion by Wetzel et al. (2013b) that some central galaxies are ejected satellite galaxies is consistent with our \ufb01ndings here. 
Our study thus clearly indicates that one should not assume that red central galaxies were quenched by processes other than environment. In fact, all available evidence suggests that it is environment quenching that plays the dominant role for the vast majority of galaxies that turn red, whether they are satellite galaxies at z = 0.62 or not. Feldmann et al. (2011), using a much smaller sample of simulated galaxies that form a group of galaxies, find that quenching of gas accretion starts at a few virial radii from the group center, in good agreement with our results. It is seen in Figure 12 that only about 20% of the onsets of galaxy quenching occur as satellites, i.e., within the virial radius of a larger galaxy, consistent with conclusions derived by others (e.g., van den Bosch et al. 2008).

In the bottom-right panel of Figure 11 we provide an approximate fit to the distribution of t_qr for all quenched galaxies, normalized to galaxies at z = 0.62:

f(log t_qr) = 1 / (2 log t_med √(2π)) exp[ -(log t_qr / log t_med - 1)^2 / 8 ],   (2)

where t_qr and t_med are in Gyr and log t_med = 0.08 - 1.5 × log((1 + z)/1.62). The adopted dependence of log t_med on z is merely an estimate of the time scale, had it scaled with redshift in proportion to the dynamical time of the universe; one is cautioned not to apply it literally. Nevertheless, it is likely that the median quenching time at lower redshift is longer than the ~1.2 Gyr found at z = 0.62, perhaps in the range of 2-3 Gyr. Incidentally, this estimated quenching time, if extrapolated to z = 0, is consistent with theoretical interpretations of observational data in semi-analytic modeling or N-body simulations (e.g., Taranu et al. 2012; Wetzel et al. 2013a). In semi-analytic modeling (e.g., Kimm et al. 2009), the quenching time is often taken to be a delta function; in other words, the satellite quenching process is assumed to be uniform, independent of the internal and external properties of the satellites. Our simulation results (see Eq. 2) indicate that such a simplistic approach is not well motivated physically. We suggest that introducing a spread in quenching time into the semi-analytic modeling may improve the agreement between its predictions and observations.

In summary, we find that, within the environmental sphere of influence, galaxies are disconnected from their large-scale cold gas supply by ram-pressure stripping, and the subsequent lack of gas cooling and/or accretion in a high velocity environment ensures a prolonged period of gas starvation that ultimately turns galaxies red. This applies to satellite galaxies as well as to the vast majority of "apparent" central red galaxies. The dominance of environment quenching found here in ab initio cosmological simulations is in accord with observations (e.g., van den Bosch et al. 2008; Peng et al. 2012; Kovac et al. 2013).

3.3. Color Migration Tracks

On its way to the red sequence, a galaxy has to pass through the green valley. Do all galaxies in the green valley migrate to the red sequence? We examine the entire population of green galaxies in the redshift range z = 1-1.5.
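For reference, Eq. (2) is a Gaussian in log t_qr (i.e., a lognormal in t_qr) with mean log t_med and an implied width of 2 log t_med, and the adopted redshift scaling of t_med is easy to evaluate. A minimal sketch (illustrative only; the function and variable names are ours):

    import numpy as np

    def log_tmed(z):
        # Adopted scaling: log t_med = 0.08 - 1.5 * log10((1 + z) / 1.62), with t_med in Gyr.
        return 0.08 - 1.5 * np.log10((1.0 + z) / 1.62)

    def f_log_tqr(log_tqr, z):
        # Eq. (2): a Gaussian in log t_qr with mean log t_med and sigma = 2 log t_med.
        lm = log_tmed(z)
        return np.exp(-(log_tqr / lm - 1.0) ** 2 / 8.0) / (2.0 * lm * np.sqrt(2.0 * np.pi))

    # Median quenching-to-red time at z = 0.62 (~1.2 Gyr) and extrapolated to z = 0 (~2.5 Gyr).
    for z in (0.62, 0.0):
        print(f"z = {z}: t_med ~ {10 ** log_tmed(z):.1f} Gyr")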
Tracing these green galaxies to z = 0.62, we find that, for galaxies with stellar masses greater than (10^9.5, 10^10, 10^10.5) M_sun, respectively, (40%, 40%, 48%) of galaxies in the green valley at z = 1-1.5 do not become red galaxies by z = 0.62. While this is an important prediction of our simulations, we do not provide more information on how one might tell apart these two different populations of galaxies in the green valley, except to point out that attempts to identify galaxies in the green valley as progenitors of red galaxies may generate some confusion.

We examine the distributions (not shown) of the time that red galaxies spent in the green valley, t_green, en route to the red sequence. The trends with respect to M_h and M_*^s/M_*^c are similar to those seen in Figure 11. No significant differentiation among halo masses of central galaxies is visible, once again supportive of environment quenching. Overall, one may summarize the results in three points. First, t_green is almost universal, independent of satellite or central status, of mass, or of the mass ratio. Second, the range t_green = 0.30 ± 0.15 Gyr encloses most of the galaxies, although there is a significant tail towards the high end for satellites in low mass central halos. Third, comparing t_green ~ 0.3 Gyr to the interval from the onset of quenching to the time the galaxy turns red, t_qr = 1.2-1.3 Gyr, indicates that typical galaxies spend about 25% of that time in the green valley. Let us now examine the migration tracks of galaxies that eventually enter the red sequence.

Fig. 13. The evolutionary tracks of 30 semi-randomly selected galaxies on the stellar mass M_*-(g-r) color plane. The 30 galaxies are selected to be clustered around three masses, M_* = (10^9.5, 10^10.1, 10^11) M_sun. Each track has a circle attached at the end of the green period to indicate the time spent in the green valley (the legend circles correspond to t_green = 200 and 500 Myr). We shall call this the "skyrockets" diagram of galaxy color migration.

Figure 13 shows the color-stellar mass diagram for 30 semi-randomly selected red galaxies. It is striking that the color evolution in the green valley and the red sequence is mostly vertical, i.e., not accompanied by significant change in stellar mass. This means that the stellar mass growth of most galaxies must occur in the blue cloud. One can easily see that the blue tracks mostly move from lower left to upper right with time for g-r ≤ 0.3, indicating that galaxies grow while in the blue cloud. In the blue cloud there are occasional horizontal tracks, representing mergers that maintain the overall color; these are mergers that do not result in red galaxies. Examples include the two most massive galaxies in the plot, with final stellar masses of ~10^11.6 M_sun, where there is a major binary merger of (10^11.25 + 10^11.25) M_sun at g-r = 0.26. There are also cases where the tracks temporarily go from north-west to south-east, indicating significant/major mergers that trigger starbursts and render the remnant galaxies bluer. This anecdotal evidence that galaxies do not significantly grow mass in the red sequence will be confirmed quantitatively below.
Feldmann et al. (2011), using a small sample of simulated galaxies that form a group of galaxies, find that mergers and significant mass growth in galaxies occur prior to their entering the group environment, consistent with the findings here. Thus, this "skyrockets" diagram of color-stellar mass evolution in Figure 13 turns out to be a fair representation of the typical tracks of galaxies that become red galaxies.

We address the stellar mass growth of red galaxies quantitatively in two different ways. The left panel of Figure 14 shows the histogram of the ratio of the stellar mass of red galaxies at z = 0.62 to their progenitor's stellar mass at the onset of quenching, t_q. We see that the overall stellar mass growth of red galaxies since the onset of quenching is relatively moderate, with the vast majority of galaxies gaining less than 30% of their stellar mass during this period, consistent with observations (e.g., Peng et al. 2010, 2012). There is a non-negligible fraction of galaxies that experience a decline of stellar mass, due to tidal interactions and collisions, and 5-10% of red galaxies gain 40% or more of their stellar mass during this period, possibly due to mergers and accretion of satellite galaxies. We do not address red galaxies more massive than 10^12 M_sun because of the lack of a statistically significant red sample. Since these larger galaxies tend to reside at the centers of groups and clusters, there is a larger probability that AGN feedback plays a significant role in them. Empirical evidence suggests that radio jets get extinguished in the near vicinity of the central galaxies in groups/clusters (e.g., McNamara & Nulsen 2007), in sharp contrast to AGNs in isolated galaxies, where jets, seen as large radio lobes, appear to deposit most of their energy on scales much larger than the star formation regions. Thus, AGN feedback in the central massive galaxies in clusters/groups may be energetically important enough to have a major effect on gas cooling and star formation in them (e.g., Omma & Binney 2004), and our neglect of AGN feedback in the simulation cautions us not to draw any definitive conclusion with respect to this special class of galaxies at this time.

Fig. 14. Left panel: the histogram of the ratio of the stellar mass of red galaxies at z = 0.62 to their progenitor's stellar mass at the onset of quenching t_q, for two stellar mass ranges of the red galaxies at z = 0.62. Right panel: cumulative stellar mass functions of red galaxies at z = 0.62 (blue) and z = 0.86 (magenta). For the red galaxies at z = 0.62 we find that the median value of t_q corresponds to redshift z ~ 0.86, hence the choice of z = 0.86.

The stellar mass growth of individual red galaxies shown in the left panel of Figure 14 contains very useful information. However, it does not address a related but separate question: how does the stellar mass function of red galaxies evolve with redshift? We address this question here. We compute the cumulative stellar mass functions of red galaxies at z = 0.62 and z = 0.86 separately and show them in the right panel of Figure 14.
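The abundance-matched comparison reported next follows from these two cumulative mass functions: at a fixed number density, one reads off the corresponding mass threshold at each epoch and takes the ratio. A minimal sketch of that bookkeeping (with hypothetical stellar-mass arrays and volume standing in for the actual red-galaxy catalogs):

    import numpy as np

    def cumulative_mass_function(masses, volume, grid):
        # n(>M*) in h^3 Mpc^-3 for a set of stellar masses within a comoving volume.
        return np.array([(masses > m).sum() / volume for m in grid])

    def mass_at_density(n_of_m, grid, n_target):
        # Invert n(>M*): the mass threshold at which the cumulative density equals n_target.
        return np.interp(n_target, n_of_m[::-1], grid[::-1])

    # Hypothetical red-galaxy stellar masses (M_sun) and comoving volume (h^-3 Mpc^3).
    m_z062 = 10 ** np.random.normal(10.6, 0.5, 3000)
    m_z086 = 10 ** np.random.normal(10.5, 0.5, 2400)
    volume = 2.0e4

    grid = np.logspace(9.5, 12, 60)
    n_z062 = cumulative_mass_function(m_z062, volume, grid)
    n_z086 = cumulative_mass_function(m_z086, volume, grid)

    # Abundance matching: compare the mass thresholds at a fixed number density.
    n_target = 1.0e-3  # h^3 Mpc^-3, illustrative
    growth = mass_at_density(n_z062, grid, n_target) / mass_at_density(n_z086, grid, n_target)
    print(f"abundance-matched stellar mass growth factor: {growth:.2f}")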
We see that for red galaxies with stellar masses greater than ~3 × 10^10 M_sun, when matched in abundance, the stellar masses grow by a factor of ~1.6 from z = 0.86 to z = 0.62, much larger than the 10% (for about 75% of galaxies) seen in the left panel of Figure 14. We refrain from making a direct comparison to observations in this case, because our limited simulation volume is highly biased with respect to the massive end of the mass function; we restrict ourselves to a comparative analysis of galaxies in our simulation volume and ask how red galaxies in our simulation volume grow with time. The most important point to note is that this apparent growth of the stellar mass of red galaxies based on abundance matching cannot be due to the growth of individual red galaxies in the red sequence, since the actual stellar mass increase since the onset of quenching is moderate, ≤10% typically, as seen in the left panel of Figure 14. Physically, this suggests that dry mergers do not play a major role in the "apparent" stellar mass growth of red galaxies, consistent with observations (e.g., Pozzetti et al. 2007). Rather, galaxies grow their stellar mass while they are still in the blue cloud, as illustrated in Figure 13.

A physical picture of galaxy color migration emerges based on our results. The migration from the blue cloud to the red sequence proceeds in a staggered fashion: stellar masses of individual galaxies continuously grow, predominantly in the blue cloud, and blue galaxies over the entire mass range continuously migrate into the red sequence over time. Galaxies migrate from the blue cloud to the red sequence almost vertically in the usual color-magnitude diagram (see Figure 13). For simplicity we will call this type of color migration "Vertical Tracks", which corresponds most closely to the "B tracks" proposed by Faber et al. (2007), with the growth since the onset of quenching being moderate (≤30%).

3.4. Galaxy Age-Mass and Age-Environment Relations

The vertical tracks found above have many implications for observables. The first question one asks is this: if galaxies follow the vertical tracks, is the galaxy age-mass relation consistent with observations? We address this question in this subsection. Figure 15 shows a scatter plot of red galaxies in the stellar mass M_*-mean galaxy formation time t_f plane at z = 0.62 (top) and z = 1 (bottom), where t_f is the stellar formation time, not a lookback time. The red galaxies are subdivided into two groups: centrals (black circles) and satellites (red circles). For the purpose of comparison to observations, we only show galaxies with high surface brightness, µ_B < 23 mag arcsec^-2 (e.g., Impey & Bothun 1997). Several interesting results can be learned. First, no systematic difference between satellite and central galaxies is visible, supporting the earlier finding that there is no appreciable difference between satellites and centrals with respect to the duration from quenching to turning red, t_qr (Figure 11). Second, at any given redshift, the brightest red galaxies are relatively "old" (but not necessarily the oldest), with ages of several billion years (age = t_H - t_f, with t_H = (7.85, 5.94) Gyr at z = (0.62, 1)), consistent with observations. Third, at stellar masses greater than 10^10.2-10^10.7 M_sun red galaxies have a nearly uniform mean age; the age spread of ~1 Gyr at a given stellar mass is consistent with observations (e.g., Demarco et al. 2010).
Fourth, fainter red galaxies are younger than brighter red galaxies in the mass range 10^9.5-10^10.5 M_sun; the age difference between the two ends of this mass range is ~2.5 Gyr and ~1.3 Gyr at z = 0.62 and z = 1, respectively, suggesting a steepening with decreasing redshift of the age difference between galaxies of different masses in the red sequence. Demarco et al. (2010) find an age difference between the faint and bright ends of the red sequence of ~2 Gyr at z = 0.84, in excellent agreement with our results. The physical origin of this steepening is traceable to the steepening of the specific SFR-stellar mass relation with decreasing redshift, which is, in a fundamental way, related to the cosmic downsizing phenomenon (Cen 2011a). It is interesting to note that, in Figure 15, scatters notwithstanding, there appears to be a critical stellar mass of ~10^10.2-10^10.7 M_sun, above which the age (or formation time) of red galaxies flattens out to a constant value. At least for the redshift range that we have examined, z = 0.62-1, this critical stellar mass appears to be redshift independent; at still higher redshift we do not have enough statistics to see if it remains the same. This critical mass is tantalizingly close to the division mass of ~10^10.5 M_sun discovered by Kauffmann et al. (2003) at low redshift, which appears to demarcate a number of interesting trends in galaxy properties. The physical origin of this mass is unclear and is deferred to a future study.

Fig. 15. A scatter plot of red galaxies in the stellar mass M_*-mean galaxy formation time t_f plane at z = 0.62 (top) and z = 1 (bottom), where t_f is the stellar formation time, not a lookback time. The red galaxies are subdivided into two groups: centrals (black circles) and satellites (red circles). For the purpose of comparison to observations, we only show galaxies with high surface brightness, µ_B < 23 mag arcsec^-2 (e.g., Impey & Bothun 1997). The green horizontal dashed lines indicate the mean formation time of the most luminous red galaxies at the two redshifts (t_f(high mass) = 3.2 Gyr at z = 0.62 and 2.8 Gyr at z = 1). The magenta dots are the averages of t_f in the stellar mass bins.

Given the "vertical tracks", i.e., the lack of significant stellar mass growth subsequent to quenching, one may ask: is the age-mass relation of red galaxies inherited from their blue progenitors? We now address this question. To select progenitors of red galaxies at z = 0.62, we note that the majority of galaxies that turn red by z = 0.62 have t_qr = 1-1.7 Gyr. Thus, we choose galaxies in the redshift range z = 0.80-0.94 (8 snapshots at z = (0.80, 0.82, 0.84, 0.86, 0.88, 0.90, 0.92, 0.94)), where the Hubble time differences between z = 0.62 and z = 0.80 and z = 0.94 are (1.0, 1.7) Gyr, respectively, enclosing the vast majority of blue progenitors of red galaxies at z = 0.62 near the onset of quenching. We separate the blue galaxies into two groups: one group contains the blue progenitors of z = 0.62 red galaxies, and the other contains blue galaxies that have not turned into red galaxies by z = 0.62. Figure 16 shows the stellar mass M_*-mean galaxy formation time t_f scatter plot for blue galaxies at z = 0.80-0.94 that are progenitors of red galaxies at z = 0.62 (top left) and those that do not become red galaxies (top right). Each small group of mostly linearly aligned circles is one galaxy that appears multiple times (the maximum is 8). Within the scatter we see that the green dashed line, borrowed from Figure 15, provides a good match to the nearly constant age at the high mass end for the progenitors of red galaxies. The magenta dots, borrowed from Figure 15, match well the trend of the blue dots in the mass range 10^9.5-10^10.5 M_sun. These results are fully consistent with our initial expectation, based on two physical processes observed in our simulation: (1) stellar mass growth is moderate during t_qr, hence evolution during t_qr does not significantly alter the mean star formation time of each galaxy; (2) less massive forming galaxies have higher sSFR than massive galaxies, causing a steepening of the age-mass relation at the low mass end. This explains the physical origin of the age-mass relation seen in Figure 15.

It is prudent to make sure that these important general trends seen in the simulation are robust. In the bottom two panels of Figure 16 we compare blue galaxies in two simulations with different resolutions at z = 3: the bottom-left panel is from the fiducial simulation with a resolution of 114 pc/h and the bottom-right panel is from an identical simulation with four times better resolution, 29 pc/h. We see that both the age-mass trend at the low mass end and the near constancy of stellar age at the high mass end are shared by the two simulations, suggesting that results from our fiducial simulation are sufficiently converged for the general trends presented, at the level of accuracy of concern here.

Fig. 16. The stellar mass M_*-mean galaxy formation time t_f scatter plot for blue galaxies at z = 0.80-0.94 that become red galaxies (top left) and that do not become red galaxies (top right). Each small group of mostly linearly aligned circles is one galaxy that appears multiple times (the maximum is 8). The blue dots indicate average values. The green horizontal dashed lines and the magenta dots are the same as in the top panel of Figure 15, indicating the mean formation time of the most luminous red galaxies and the average formation time of red galaxies at z = 0.62. Bottom-left panel: the same scatter plot for blue galaxies at z = 3 with the fiducial resolution of 114 pc/h. Bottom-right panel: the same for blue galaxies at z = 3 with the four times better resolution of 29 pc/h. The blue dots indicate average values.
A comparison between the top-left and top-right panels of Figure 16 makes it clear that the age-mass relation of the blue progenitors of red galaxies at quenching is, to a large degree, shared by blue galaxies that do not become red galaxies by z = 0.62. One subtle difference is that the most massive non-progenitor blue galaxies are slightly younger than the most massive progenitors of red galaxies, suggesting that the blue progenitors of red galaxies, on their way to becoming red, have already started to mildly "foreshadow" quenching effects.

3.5. Environmental Dependencies of Various Galaxy Populations

At a given redshift the cumulative environmental effects are imprinted on the relative distribution of galaxies of different color types and possibly on the properties of galaxies within each type. We now present the predictions of our simulations with respect to these aspects. Figure 17 shows the distributions of the three types of galaxies as a function of distance to the primary galaxy, in units of the virial radius of the primary galaxy, at z = 0.62. All galaxies with distances larger than 4 virial radii of the primary galaxy are added to the bin with d/r_v^c = 4-5. We use the total galaxy population above the respective stellar mass threshold as a reference sample, and the distributions in the top (blue galaxies), middle (green galaxies) and bottom (red galaxies) panels are normalized relative to this reference sample. Comparing the three panels, we see clear differences in the environmental dependencies of the three types of galaxies. For blue galaxies, there is a deficit at d/r_v^c ≤ 2, which is compensated by a comparable excess at d/r_v^c ≥ 3. The range d/r_v^c = 2-3 marks the region with an excess of green galaxies, about one half of which will become red galaxies during the next 1-1.7 Gyr; it is useful to recall that not all galaxies in the green valley will turn into red galaxies, which perhaps contributes in part to some of the "irregularities" of the distribution of the green galaxies (middle panel). For red galaxies, we see a mirror image of the blue galaxies: there is an excess at d/r_v^c ≤ 2 and a deficit at d/r_v^c ≥ 3. This trend is in agreement with observational indications (e.g., Woo et al. 2013). The emerging picture, that satellite quenching plays a dominant role in quenching galaxies, is in accord with observations (e.g., van den Bosch et al. 2008; Peng et al. 2010, 2012).

Fig. 17. Top panel: the distribution of the distance to the nearest primary galaxy for blue galaxies at z = 0.62, PDF_b/PDF_t - 1. The middle panel shows the normalized distribution of green galaxies, PDF_g/PDF_t - 1, and the bottom panel the normalized distribution of red galaxies, PDF_r/PDF_t - 1. All galaxies with distances larger than 4 virial radii of the primary galaxy are added to the bin with d/r_v^c = 4-5.

Figure 18 shows the distributions of the three types of galaxies as a function of environmental entropy S300. We see that the excess of red galaxies starts at S300 = 100 keV cm^2 and rises toward higher entropy regions.
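The excess/deficit profiles in Figures 17 and 18 are simple ratios of binned distributions of a color-selected subsample to the reference sample, minus one. A minimal sketch of that construction (hypothetical d/r_v^c values and color labels, not the simulation catalog):

    import numpy as np

    def normalized_excess(x_sub, x_all, bins):
        # (PDF_subsample / PDF_total) - 1 in the given bins, as in Figures 17 and 18.
        pdf_sub, _ = np.histogram(x_sub, bins=bins, density=True)
        pdf_all, _ = np.histogram(x_all, bins=bins, density=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            return pdf_sub / pdf_all - 1.0

    # Hypothetical d/r_v^c values and color labels; distances beyond 4 are folded
    # into the d/r_v^c = 4-5 bin, as in Figure 17.
    d_over_rv = np.minimum(np.random.uniform(0, 8, 20000), 4.5)
    color = np.random.choice(["blue", "green", "red"], size=d_over_rv.size)

    bins = np.arange(0.0, 5.5, 0.5)
    for c in ("blue", "green", "red"):
        print(c, np.round(normalized_excess(d_over_rv[color == c], d_over_rv, bins), 2))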
The trend for blue galaxies is almost an inverted version of that for red galaxies, and the trend for green galaxies lies in between, as expected. In Cen (2011a) we put forth the notion that a critical entropy S_c = 100 keV cm^2 (at z = 0.62, and weakly dependent on redshift) marks a transition to a regime of inefficient gas cooling and hence cold gas starvation, because above this entropy the gas cooling time exceeds the Hubble time. This is borne out by our more detailed analysis here. We also plot (not shown here) the distributions of the three types of galaxies as a function of the environmental pressure p300 and the environmental overdensity δ_2, respectively, and find trends broadly similar to those seen in Figure 18. Overall, our results are in accord with the observed density-morphology relation (e.g., Oemler 1974; Dressler 1980; Postman & Geller 1984; Cooper et al. 2006; Tanaka et al. 2007; Bundy et al. 2006; Quadri et al. 2012; Muzzin et al. 2012), and with the general observed trend of the galaxy population becoming bluer, or the mean/median specific star formation rate becoming higher, towards underdense regions in the local universe (e.g., Lewis et al. 2002; Goto et al. 2003; Gómez et al. 2003; Tanaka et al. 2004; Rojas et al. 2004).

Fig. 18. Top panel: the normalized environmental entropy distribution of blue galaxies at z = 0.62, PDF_b/PDF_t - 1. The middle panel shows the normalized difference distribution of green and blue galaxies, PDF_g/PDF_b - 1, and the bottom panel the normalized difference distribution of red and blue galaxies, PDF_r/PDF_b - 1.

Having examined the dependencies of the three types of galaxies on environmental variables, we now explore the dependencies on two additional variables: the halo mass of the primary galaxy and the secondary-to-primary galaxy stellar mass ratio. Figure 19 shows the fractions of the three populations of galaxies in terms of color (red, green, blue) as a function of the secondary-to-primary galaxy stellar mass ratio. The (left, middle, right) columns are for primary galaxies with halo masses in three ranges (10^11-10^12 M_sun, 10^12-10^13 M_sun, 10^13-10^14 M_sun), respectively. The four rows from top to bottom are for secondaries within four different radial shells centered on the primary galaxy (≤ r_v^c, [1-2] r_v^c, [2-3] r_v^c, [3-4] r_v^c). We adopt the following language to make comparative statements: environment quenching is important if the fraction of blue galaxies is less than the fraction of red galaxies, and vice versa. We see two separate trends in Figure 19.
First, more massive environments are more able to quench star formation; for primary galaxies with halo masses in the range of 10^13-10^14 M_sun the quenching appears to extend at least to [2-3] r_v^c, whereas for primary galaxies with lower halo

0.1L* galaxies in high-resolution, large-scale\ncosmological hydrodynamic simulations is examined with respect to three\ncomponents: (cold, warm, hot) with temperatures equal to (<10^5, 10^{5-6},\n>10^6)K, respectively. The warm component is compared, utilizing O VI\n\\lambda\\lambda 1032, 1038 absorption lines, to observations and agreement is\nfound with respect to the galaxy-O VI line correlation, the ratio of O VI line\nincidence rate in blue to red galaxies and the amount of O VI mass in\nstar-forming galaxies. A detailed account of the sources of warm halo gas\n(stellar feedback heating, gravitational shock heating and accretion from the\nintergalactic medium), inflowing and outflowing warm halo gas metallicity\ndisparities and their dependencies on galaxy types and environment is also\npresented. Having the warm component securely anchored, our simulations make\nthe following additional predictions. First, cold gas is the primary component\nin inner regions, with its mass comprising 50% of all gas within\ngalacto-centric radius r=(30,150)kpc in (red, blue) galaxies. Second, at\nr>(30,200)kpc in (red, blue) galaxies the hot component becomes the majority.\nThird, the warm component is a perpetual minority, with its contribution\npeaking at ~30% at r=100-300kpc in blue galaxies and never exceeding 5% in red\ngalaxies. The significant amount of cold gas in low-z early-type galaxies found\nin simulations, in agreement with recent observations (Thom et al.), is\nintriguing, so is the dominance of hot gas at large radii in blue galaxies.", + "authors": "Renyue Cen", + "published": "2013-04-11", + "updated": "2013-04-11", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction Galaxy formation and evolution is the central astrophysical problem in cosmology. The basic parameters of the cosmological framework, the standard cosmological constant-dominated cold dark matter (DM) model (LCDM) (e.g., Krauss & Turner 1995; Bahcall et al. 1999), are largely fixed to an accuracy of ~10% or better. The LCDM model is able to explain a variety of observations on scales greater than ~1 Mpc, including high redshift supernovae (e.g., Perlmutter et al. 1998; Riess et al. 1998; Astier et al. 2006), the cosmic microwave background (e.g., Komatsu et al. 2011; Planck Collaboration et al. 2013), the large-scale distribution of galaxies (e.g., Tegmark et al. 2004; Percival et al. 2007), X-ray cluster abundance (e.g., Allen et al.
2008) and Ly\u03b1 forest (e.g., Croft et al. 2002; Seljak et al. 2005). An important component of the astrophysical problem gravitational formation and evolution of halos that host galaxies is well understood, through N-body simulations (e.g., Jenkins et al. 2001; Bullock et al. 2001; Wechsler et al. 2002; Diemand et al. 2007) and analytic models (e.g., Bond et al. 1991; Lacey & Cole 1993; Sheth & Tormen 1999; Mo & White 2002; Cooray & Sheth 2002). The gastrophysics of galaxy formation and feedback, on the other hand, is far from being adequately understood. Alternative approaches that parameterize and then infer physical processes based on \ufb01nding best matches to observations, such as the semi-analytic methods (e.g., Somerville & Primack 1999; Benson et al. 2003) and the halo-occupation distribution (HOD) method (e.g., Berlind & Weinberg 2002; Zheng et al. 2007), have been successful but have limited predictive power. More importantly, in semianalytic methods the treatment of galaxy formation is halo based and largely decoupled from that of the intergalactic medium, which in fact has dramatically evolved with time. At z = 2 \u22126 most of the baryons are found to be in the Ly\u03b1 forest, a relatively cold phase of temperature of \u223c104K, as indicated by both observations (e.g., Rauch et al. 1997) and simulations (e.g., Cen et al. 1994). By z = 0 most of the baryons in the intergalactic medium have been heated up, primarily by gravitational shocks, to temperatures that are broadly peaked at about 106K, the so-called Warm-Hot Intergalactic Medium (WHIM) (e.g., Cen & Ostriker 1999). The \u201cab initio\u201d, more predictive approach of direct cosmological hydrodynamic simulations, after having made steady progress (e.g., Evrard et al. 1994; Katz et al. 1996; Teyssier 2002; Kere\u02c7 s et al. 2005; Hopkins et al. 2006; Oppenheimer & Dav\u00b4 e 2006; Governato et al. 2007; Naab et al. 2007; Gnedin et al. 2008; Joung et al. 2009; Cen 2011), begin to be able to make statistically signi\ufb01cant and physically realistic characterizations of the simultaneous evolution of galaxies and the intergalactic medium. It is the aim of this writing to quantify the composition of the halo gas in low redshift galaxies, using state-of-the-art high resolution (460h\u22121pc), large-scale (thousands of galaxies) cosmological hydrodynamic simulations with advanced treatments of star formation, feedback and microphysics. Our focus here is on gas that is in the immediate vicinities of galaxies, on galactocentric distances of 10 \u2212500kpc, where the exchanges of gas, metals, energy and momentum between galaxies and the intergalactic medium (IGM) primarily take place. We shall broadly term it \u201ccircumgalactic medium (CGM)\u201d or \u201chalo gas\u201d. Understanding halo gas is necessary before a satisfactory theory of galaxy formation and evolution may be constructed. The present theoretical study is also strongly motivated observationally, in light of recent rapid accumulation of data by HST observations enabling detailed \f\u2013 3 \u2013 comparisons between galaxies and the warm component (T \u223c105 \u2212106K) of their CGM at low redshift (e.g., Chen & Mulchaey 2009; Prochaska et al. 2011b; Tumlinson et al. 2011a; Tripp et al. 2011). We shall dissect halo gas at low redshift (z < 0.5) into three components, (cold, warm, hot) gas with temperature (< 105, 105 \u2212106, > 106)K, respectively. 
A large portion of our presentation is spent on quantifying O VI \u03bb\u03bb1032, 1038 absorption lines and the overall properties of warm halo gas and comparing them to observations in as much detail as possible. Feedback processes, while being treated with increased physical sophistication, are still not based on \ufb01rst principles due primarily to resolution limitations in large-scale cosmological simulations. Thus, it is imperative that our simulations are well validated and anchored by requiring that some key and pertinent aspects of our simulations match relevant observations. The O VI line, when collisionally ionized, has its abundance peaked at a temperature of T = 105.3\u22125.7K and thus is an excellent proxy for the the warm gas. After validating our simulations with respect to the observed properties of O VI absorption lines, we present the overall composition of low redshift halo gas. We \ufb01nd that, for (red,blue) galaxies more luminous than 0.1L\u2217the cold gas of T < 105K, on average, dominates the halo gas budget within a radius of (30, 150)kpc. Beyond a radius of (30, 200)kpc for (red,blue) galaxies the hot gas of T > 106K dominates. The warm component remains a smallest minority at all radii, peaking at \u223c30% at \u223c100 \u2212300kpc for blue galaxies but never exceeding 5% for red galaxies. The following physical picture emerges for the physical nature of the warm gas component. The warm halo gas has a cooling time much shorter than the Hubble time and hence is \u201ctransient\u201d, with their presence requiring sources. To within a factor of two we \ufb01nd that, for low-z \u22650.1L\u2217red galaxies contributions to warm halo gas from star formation feedback (Fr), accretion of intergalactic medium (Ar) and gravitational shock heating (Gr) are (Fr, Ar, Gr) = (30%, 30%, 40%). For blue \u22650.1L\u2217galaxies contributions to warm halo gas from the three sources are (Fb, Ab, Gb) = (48%, 48%, 4%). The mean metallicity of warm halo gas in (red, blue) galaxies is (\u223c0.25 Z\u2299, \u223c0.11 Z\u2299). Environmental dependence of O VI-bearing halo gas is as follows. In low density environments the metallicity of in\ufb02owing warm gas is substantially lower than that of out\ufb02owing warm gas; the opposite is true in high density environments. The outline of this paper is as follows. In \u00a72.1 we detail simulation parameters and hydrodynamics code, followed by a description of our method of making synthetic O VI spectra in \u00a72.2, which is followed by a description of how we average the two separate simulations C (cluster) and V (void) run in \u00a72.3. Results are presented in \u00a73. A detailed comparison of galaxy-O VI absorber correlation is computed and shown to match observations in \u00a73.1, followed in \u00a73.2 by an analysis of the ratio of O VI absorber incidence rates around blue and red galaxies that is found to be consistent with observations. A detailed examination of the \f\u2013 4 \u2013 physical origin and properties of the warm gas in low-z halo is given in \u00a73.3. The overall composition of low-z halo gas is given in \u00a73.4 and conclusions are summarized in \u00a74. 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the AMR Eulerian hydro code, Enzo (Bryan 1999; Bryan & Norman 1999; O\u2019Shea et al. 2005). The version we use is a \u201cbranch\u201d version (Joung et al. 
2009), which includes a multi-tiered re\ufb01nement method that allows for spatially varying maximum re\ufb01nement levels, when desired. This Enzo version also includes metallicity-dependent radiative cooling extended down to 10 K, molecular formation on dust grains, photoelectric heating and other features that are di\ufb00erent from or not in the public version of Enzo code. We use the following cosmological parameters that are consistent with the WMAP7-normalized (Komatsu et al. 2011) LCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. These parameters are also consistent with the latest Planck results (Planck Collaboration et al. 2013), if one adopts the Hubble constant that is the average between Planck value and those derived based on SNe Ia and HST key program (Riess et al. 2011; Freedman et al. 2012). We use the power spectrum transfer functions for cold dark matter particles and baryons using \ufb01tting formulae from Eisenstein & Hut (1998). We use the Enzo inits program to generate initial conditions. First we ran a low resolution simulation with a periodic box of 120 h\u22121Mpc on a side. We identi\ufb01ed two regions separately, one centered on a cluster of mass of \u223c2 \u00d7 1014 M\u2299and the other centered on a void region at z = 0. We then resimulate each of the two regions separately with high resolution, but embedded in the outer 120h\u22121Mpc box to properly take into account large-scale tidal \ufb01eld and appropriate boundary conditions at the surface of the re\ufb01ned region. We name the simulation centered on the cluster \u201cC\u201d run and the one centered on the void \u201cV\u201d run. The re\ufb01ned region for \u201cC\u201d run has a size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 and that for \u201cV\u201d run is 31 \u00d7 31 \u00d7 35h\u22123Mpc3. At their respective volumes, they represent 1.8\u03c3 and \u22121.0\u03c3 \ufb02uctuations. The root grid has a size of 1283 with 1283 dark matter particles. The initial static grids in the two re\ufb01ned boxes correspond to a 10243 grid on the outer box. The initial number of dark matter particles in the two re\ufb01ned boxes correspond to 10243 particles on the outer box. This translates to initial condition in the re\ufb01ned region having a mean interparticle-separation of 117h\u22121kpc comoving and dark matter particle mass of 1.07 \u00d7 108h\u22121 M\u2299. The re\ufb01ned region is surrounded by two layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle \f\u2013 5 \u2013 mass 83 times that in the re\ufb01ned region. The initial density \ufb02uctuations are included up to the Nyquist frequency in the re\ufb01ned region. The surrounding volume outside the re\ufb01ned region is also followed hydrodynamically, which is important in order to properly capture matter and energy exchanges at the boundaries of the re\ufb01ned region. Because we still can not run a very large volume simulation with adequate resolution and physics, we choose these two runs of moderate volumes to represent two opposite environments that possibly bracket the universal average. 
We choose a varying mesh re\ufb01nement criterion scheme such that the resolution is always better than 460/h proper parsecs within the re\ufb01ned region, corresponding to a maximum mesh re\ufb01nement level of 9 above z = 3, of 10 at z = 1\u22123 and 11 at z = 0\u22121. The simulations include a metagalactic UV background (Haardt & Madau 2012), and a model for shielding of UV radiation by atoms (Cen et al. 2005). The simulations also include metallicity-dependent radiative cooling and heating (Cen et al. 1995). We clarify that our group has included metal cooling and metal heating (due to photoionization of metals) in all our studies since Cen et al. (1995) for the avoidance of doubt (e.g., Wiersma et al. 2009; Tepper-Garc\u00b4 \u0131a et al. 2011). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c105\u22126 M\u2299. Supernova feedback from star formation is modeled following Cen et al. (2005). Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the speci\ufb01c volume of each cell (i.e., weighting is equal to the inverse of density), which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). We allow the whole feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating, as in nature. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported into right directions in a physically sound (albeit still approximate at the current resolution) way, at least in a statistical sense. In our simulations metals are followed hydrodynamically by solving the metal density continuity equation with sources (from star formation feedback) and sinks (due to subsequent star formation). Thus, metal mixing and di\ufb00usion through advection, turbulence and other hydrodynamic processes are properly treated in our simulations. The primary advantages of this supernova energy based feedback mechanism are threefold. First, nature does drive winds in this way and energy input is realistic. Second, it has only one free parameter eSN, namely, the fraction of the rest mass energy of stars formed that is deposited as thermal energy on the cell scale at the location of supernovae. Third, the processes are treated physically, obeying their respective conservation laws (where \f\u2013 6 \u2013 they apply), allowing transport of metals, mass, energy and momentum to be treated selfconsistently and taking into account relevant heating/cooling processes at all times. We use eSN = 1 \u00d7 10\u22125 in these simulations. The total amount of explosion kinetic energy from Type II supernovae with a Chabrier IMF translates to eSN = 6.6 \u00d7 10\u22126. Observations of local starburst galaxies indicate that nearly all of the star formation produced kinetic energy (due to Type II supernovae) is used to power galactic superwinds (e.g., Heckman 2001). 
Given the uncertainties on the evolution of IMF with redshift (i.e., possibly more top heavy at higher redshift) and the fact that newly discovered prompt Type I supernovae contribute a comparable amount of energy compared to Type II supernovae, it seems that our adopted value for eSN is consistent with observations and physically realistic. The validity of this thermal energy-based feedback approach comes empirically. In Cen (2012b) the metal distribution in and around galaxies over a wide range of redshift (z = 0 \u22125) is shown to be in excellent agreement with respect to the properties of observed damped Ly\u03b1 systems (Rafelski et al. 2012), whereas in Cen (2012a) we further show that the properties of O VI absorption lines at low redshift, including their abundance, Doppler-column density distribution, temperature range, metallicity and coincidence between O VII and O VI lines, are all in good agreement with observations (Danforth & Shull 2008; Tripp et al. 2008; Yao et al. 2009). This is non-trivial by any means, because they require that the transport of metals and energy from galaxies to star formation sites to megaparsec scale be correctly modeled as a function of distance over the entire cosmic timeline, at least in a statistical sense. 2.2. Simulated Galaxy Catalogs We identify galaxies in our high resolution simulations using the HOP algorithm (Eisenstein & Hu 1999), operated on the stellar particles, which is tested to be robust and insensitive to speci\ufb01c choices of concerned parameters within reasonable ranges. Satellites within a galaxy are clearly identi\ufb01ed separately. The luminosity of each stellar particle at each of the Sloan Digital Sky Survey (SDSS) \ufb01ve bands is computed using the GISSEL (Galaxy Isochrone Synthesis Spectral Evolution Library) stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. Collecting luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, star formation rate, luminosities in \ufb01ve SDSS bands (ugriz) and others. At a spatial resolution of proper 460pc/h with more than 2000 well resolved galaxies at z = 0, this simulated galaxy catalog presents an excellent (by far, the best available) tool to study circumgalactic medium around galaxies at low reshift. \f\u2013 7 \u2013 In some of the analysis we perform here we divide our simulated galaxy sample into two sets according to the galaxy color. We shall call galaxies with g \u2212r < 0.6 blue and those with g \u2212r > 0.6 red. It is found that g \u2212r = 0.6 is at the trough of the galaxy bimodal color distribution of our simulated galaxies (Cen 2011; Tonnesen & Cen 2012), which agrees well with that of observed low-z galaxies (e.g., Blanton et al. 2003). 2.3. Generation of Synthetic O VI Absorbers The photoionization code CLOUDY (Ferland et al. 1998) is used post-simulation to compute the abundance of O VI, adopting the shape of the UV background calculated by Haardt & Madau (2012) normalized by the intensity at 1 Ryd determined by Shull et al. (1999) and assuming ionization equilibrium. We generate synthetic absorption spectra given the density, temperature, metallicity and velocity \ufb01elds in simulations. 
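Schematically, this post-processing amounts to looking up an O VI ionization fraction for each cell from a (density, temperature) table, scaling by the cell metallicity and a solar oxygen abundance, depositing the resulting optical depth into velocity space along the sightline, and then searching the transmitted flux for absorbers (bounded, as described in the next paragraph, by crossings of flux = 0.99). The sketch below illustrates this kind of pipeline only; the Gaussian stand-in for the CLOUDY ion-fraction table, the adopted solar oxygen abundance, the purely thermal line profile and the neglect of Hubble-flow broadening are our simplifying assumptions, not the actual code used here.

    import numpy as np

    KB = 1.380649e-16        # Boltzmann constant, erg/K
    M_O = 16 * 1.6726e-24    # oxygen atom mass, g
    # (pi e^2 / m_e c) * f * lambda for O VI 1032 (f ~ 0.1325), in cm^3/s
    SIGMA0 = 0.02654 * 0.1325 * 1031.93e-8

    def ovi_fraction(log_nH, log_T):
        # Placeholder for a CLOUDY-derived table lookup of f_OVI(n_H, T); here a crude
        # collisional-ionization-like bump peaked near log T ~ 5.5, for illustration only.
        return 0.2 * np.exp(-0.5 * ((log_T - 5.5) / 0.2) ** 2)

    def ovi_flux(nH, T, Z, v_pec, dl, v_grid, O_abund=4.9e-4):
        """Transmitted O VI 1032 flux along one sightline (velocities in cm/s, dl in cm)."""
        tau = np.zeros_like(v_grid, dtype=float)
        for i in range(nH.size):
            n_ovi = nH[i] * O_abund * Z[i] * ovi_fraction(np.log10(nH[i]), np.log10(T[i]))
            N_cell = n_ovi * dl                        # O VI column of this cell, cm^-2
            b = np.sqrt(2.0 * KB * T[i] / M_O)         # thermal b-parameter, cm/s
            phi = np.exp(-((v_grid - v_pec[i]) / b) ** 2) / (b * np.sqrt(np.pi))
            tau += SIGMA0 * N_cell * phi
        return np.exp(-tau)

    def find_absorbers(flux, threshold=0.99):
        # Absorbers bounded by downward/upward crossings of flux = 0.99 (see next paragraph).
        below = (flux < threshold).astype(int)
        edges = np.flatnonzero(np.diff(below))
        if below[0]:
            edges = np.r_[0, edges]
        if below[-1]:
            edges = np.r_[edges, flux.size - 1]
        return list(zip(edges[0::2], edges[1::2]))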
Each absorption line is identified by the velocity (or wavelength) interval between one downward-crossing and the next upward-crossing of the flux level 0.99 (a flux of unity corresponds to the unabsorbed continuum) in the spectra. We do not add instrumental or other noise to the synthetic spectra. Since the absorption lines in question are sparsely distributed in velocity space, their identification has no significant ambiguity. The column density, equivalent width, Doppler width, column-density-weighted mean velocity and physical-space location, and column-density-weighted mean temperature, density and metallicity are computed for each line. We sample the C and V runs, respectively, with 72,000 and 168,000 random lines of sight at z = 0, with a total pathlength of Δz ~ 2000. A total of ~30,000 O VI absorbers with equivalent width ≥50 mA are identified in the two volumes. While a detailed Voigt profile fitting of the flux spectrum would have enabled closer comparisons with observations, simulations suggest that such an exercise does not necessarily provide a more clarifying physical understanding of the absorber properties, because bulk velocities are very important and velocity substructures within an absorber do not necessarily correspond to separate physical entities (Cen 2012a).

2.4. Averaging C and V Runs

The C and V runs at z = 0 are used to obtain an "average" of the universe. This cannot be done precisely without much larger simulation volumes, which is presently not feasible. Nevertheless, we make the following attempt to obtain an approximate average. The number density of galaxies with luminosity greater than 0.1L* in the SDSS r-band is found to be 3.95 × 10^-2 h^3 Mpc^-3 and 1.52 × 10^-2 h^3 Mpc^-3 in the C and V box, respectively. We fix the weightings of the C and V runs, for the purpose of averaging their statistics, by requiring that the average density of galaxies with luminosity greater than 0.1L* in the SDSS r-band in the simulations equal the observed global value of 2.87 × 10^-2 h^3 Mpc^-3 from SDSS (Blanton et al. 2003). In the results shown below we use this method to obtain averaged statistics, where doing so allows for more quantitative comparisons with observed data.

3. Results

3.1. Galaxy-O VI Absorber Correlation at z = 0-0.5

Fig. 1. Cumulative probability distribution functions of ≥50 mA O VI absorbers of finding ≥0.1L* galaxies at z = 0-0.2 from simulations, with 2σ errorbars (red solid curves). The distribution functions at z = 0-0.2 are obtained by averaging the z = 0 and z = 0.2 results with equal weighting. Also shown as symbols are observations from Chen & Mulchaey (2009) (solid diamonds), Prochaska et al. (2011a) (open squares) and Tumlinson et al. (2011b) (solid dots). Because the impact parameters of the Tumlinson et al. (2011b) sample reach only 150 kpc, we have normalized their data points by matching their rp = 150 kpc point to the rp = 150 kpc point of Prochaska et al. (2011a).
The blue dashed curve is produced when only photoionized O VI lines with temperature T \u22643 \u00d7 104K in our simulations are used. The \u03c7 square per degree of freedom for the red solid curve using all observed data points is 1.2, whereas it is 7.6 for the blue dashed curve. Figure 1 shows the cumulative probability distribution functions of \u226550 mA O VI absorbers of \ufb01nding \u22650.1L\u2217galaxies at z = 0\u22120.2 from simulations as well as observations. \f\u2013 9 \u2013 We \ufb01nd good agreement between simulations and observations, quanti\ufb01ed by the \u03c7 square per degree of freedom of 1.2. In comparison, if using only the low temperature (T < 3 \u00d7 104K) O VI absorbers in the simulations, the cumulative probability is no longer in reasonable agreement with observations, with the \u03c7 square per degree of freedom equal to 7.6; this exercise, however, only serves as an illustration of what a photoionization dominated model may produce. It will be very interesting to make a similar calculation directly using SPH simulations that have predicted the dominance of photoionized O VI absorbers even for strong O VI absorbers as shown here (e.g., Oppenheimer et al. 2012). This signi\ufb01cant di\ufb00erence found with respect to the strong O VI absorber-galaxy cross correlations between the photoionization and collisional ionization dominated models stems from the relative di\ufb00erence in the locations of strong O VI absorbers in the two models. In the collisional ionization dominated model (Cen 2012a; Shull et al. 2011) the strong O VI absorbers are spatially closer to galaxies in order to have high enough temperature (hence high O VI abundance) and high enough density to make strong O VI absorbers, whereas in the photoionization dominated model (Tepper-Garc\u00b4 \u0131a et al. 2011; Oppenheimer et al. 2012) they have to be su\ufb03ciently far from galaxies to have low enough densities to be photoionized to O VI. Additional requirement in the latter for production of strong O VI absorbers is high metallicity (\u22650.1 Z\u2299) to yield high enough O VI columns, as found in SPH simulations (Tepper-Garc\u00b4 \u0131a et al. 2011; Oppenheimer et al. 2012). 3.2. O VI Absorbers Around Blue and Red Galaxies Observations have shown an interesting dichotomy of O VI incidence rate around blue and red galaxies. Figure 2 shows the cumulative probability distribution functions of NOVI > 1014 cm\u22122 O VI absorbers of \ufb01nding a (red, blue) galaxy of luminosity of \u22650.1L\u2217at z = 0.2 from simulations. In Figure 3 we show the ratio of the cumulative radial distribution per red galaxy to per blue galaxy of \u22650.1L\u2217at z = 0.2, compared to observations. It is seen in Figure 3 that in the r = 50\u2212300kpc range the ratio of incidence rate of strong O VI absorbers around red galaxies to that around blue galaxies is about 1:5, in quantitative agreement with observations. In the case with photoionized O VI absorbers only, the fraction of O VI absorbers around red galaxies is much lower and lies signi\ufb01cantly below the observational estimates, although the present small observational sample prevents from reaching strong statistical conclusions based on this ratio alone. We see in Figure 2 that statistical uncertainties of the radial probability distribution of simulated O VI absorbers are already very small due to a signi\ufb01cant number of simulated galaxies and a still larger number of simulated absorbers used. 
What limits the ability to make firm statistical statements is the sample size of the observational data. Hypothetically, if the mean remained the same, a factor of two smaller errorbars would render the photoionization dominated model inconsistent with observations at the ≥2σ confidence level, whereas our collisionally dominated model would remain consistent with observations within 1σ.

Fig. 2. The cumulative probability distribution functions of O VI absorbers with column density greater than 10^14 cm^-2 of finding a (red, blue) galaxy of luminosity ≥0.1L* (in the SDSS r-band) at z = 0.2, shown as (red dashed, blue solid) curves from simulations with 10σ errorbars. The (red dotted, blue dot-dashed) curves are the corresponding functions for the subset of O VI absorbers that have temperature T ≤ 3 × 10^4 K in our simulations.

Fig. 3. The ratio of the cumulative radial probability distribution function of O VI absorbers with equivalent width (W) greater than 50 mA per red galaxy to that per blue galaxy of ≥0.1L* (in the SDSS r-band) at redshift z = 0.2 from simulations (black solid curve, 2σ errorbars). The same ratio for photoionized O VI absorbers (T ≤ 3 × 10^4 K) only is shown as the green dashed curve. Also shown as an open circle is the observation by Tumlinson et al. (2011b).

3.3. Physical Origin of O VI Absorbers

We now turn to an analysis that gives a physical description of the origin of O VI absorbers in the CGM, in the context of the cold dark matter based model. While this section is interesting on its own for physically understanding halo gas, the next section on halo gas mass decomposition is not predicated on it. First, we ask whether the warm gas traced by O VI absorbers requires significant energy input to be sustained over the Hubble time. Figure 4 shows the metal mass in the warm gas (T = 10^5-10^6 K) distributed in the density-metallicity phase space. We see that most of the warm metal mass is concentrated in a small phase-space region centered at (n, Z) = (10^-5 cm^-3, 0.15 Z_sun). We note that the amount of gas mass in the WHIM is 40% of the total gas averaged over the simulation volumes, in agreement with previous simulations (e.g., Cen & Ostriker 1999; Davé et al. 2001; Cen & Ostriker 2006) and other recent simulations (Smith et al. 2011; Davé et al. 2010; Shen & Kelly 2010; Tornatore et al. 2010). For this gas we find that the cooling time is t_cool = 6 × 10^8 yrs (assuming a temperature of 10^5.5 K), shorter than the Hubble time by a factor of ≥20. For the strong O VI absorbers considered here, the cooling time is still shorter. In other words, either (1) energy is supplied to sustain the existing O VI gas, or (2) new warm gas is accreted, or (3) some hotter gas needs to continuously cool through the warm phase. Since the O VI gas by itself does not define a set of stable systems and is spatially well mixed with, or in close proximity to, other phases of gas, this suggests that the O VI gas in halos is "transient" in nature.
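The factor quoted above follows from simple arithmetic; a minimal check (taking ~13.8 Gyr for the present-day Hubble time as an assumption for illustration):

    t_cool_yr = 6e8        # estimated cooling time of the warm (10^5.5 K) gas, yr
    t_hubble_yr = 13.8e9   # assumed present-day Hubble time, yr
    print(f"t_Hubble / t_cool = {t_hubble_yr / t_cool_yr:.0f}")  # ~23, i.e. a factor >= 20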
4.— shows the metal mass in the warm gas (T = 10^5-10^6 K) distributed in the density-metallicity phase space. This is a good proxy for O VI bearing gas. The amount of gas mass in the WHIM is 40% of the total gas in the simulation volume. We consider three sources of warm halo gas: mechanical feedback energy from stellar evolution, gravitational binding energy released from halo formation and interactions, and direct accretion from the IGM. This simplification sets a framework to make a quantitative assessment of these three sources for warm gas, which we now describe. We denote F_b and F_r as the O VI incidence rate (in some convenient units) per blue and red >=0.1 L* galaxy due to star formation feedback energy heating, G_b and G_r as those due to gravitational heating, and A_b and A_r as those due to accreted gas from the IGM. It is useful to stress the distinction between G and A. A is gas directly accreted from the IGM that is either already warm or heated up to be warm by compression upon accretion onto the halo. On the other hand, G is gas that is shock heated to the warm phase or to a hotter phase that cools back down to become warm. Fig. 5.— shows the cumulative gas mass as a function of radius for cold (dashed curves), warm-hot (solid curves) and hot gas (dotted curves) around blue (blue curves) and red (red curves) galaxies at z = 0.2. Also shown as the black triangle is the lower limit from observations of Tumlinson et al. (2011a) for star-forming galaxies. The horizontal blue and red dot-dashed lines are the amount of warm gas the respective star formation rate can possibly produce. An additional data point for cold (T < 10^5 K) gas in early-type galaxies within 150 kpc is also plotted as the red square, with the errorbars indicating an estimated vertical range from observations of Thom et al. (2012) based on 15 early-type galaxies. Restricting our analysis to within a galactocentric radius of 150 kpc and reading off numbers from the red curve in Figure 3, we obtain two relations: F_b + G_b + A_b = 5, F_r + G_r + A_r = 1. (1) An additional reasonable assumption is now made: the feedback heating rate S_r (S_b) is proportional to the average star formation rate SFR_r (SFR_b), which in turn is proportional to the respective gas accretion rate A_r (A_b). This assumption allows us to lump F_r and A_r (F_b and A_b): S_b = F_b + A_b = C x SFR_b, S_r = F_r + A_r = C x SFR_r, (2) where C is a constant. We will return to determine F_r and A_r (F_b and A_b) separately later. Equation (1) is now simplified to: S_b + G_b = 5, S_r + G_r = 1. (3) The ratio of SFR_b to SFR_r can be computed directly in the simulations and is found to be 8.4. Rounding it down to 8 and combining it with Equation (2) gives S_b/S_r = 8. (4) Lastly, a direct assessment of the relative strength of gravitational heating of warm gas in blue and red galaxies is obtained by making the following ansatz: the amount of hot T >= 10^6 K gas is proportional to the overall heating rate, to which the gravitational heating rate of warm gas is proportional. Figure 5 shows the gas mass of the three halo gas components interior to the radius shown in the x-axis.
Within the galactocentric radius of 150 kpc it is found that the amount of hot halo gas per red >=0.1 L* galaxy is twice that per blue >=0.1 L* galaxy: G_r/G_b = 2. (5) Solving Equations (3,4,5) yields S_b = 24/5, G_b = 1/5; S_r = 3/5, G_r = 2/5. (6) The estimate given in Equation (5) is admittedly uncertain. Therefore, an estimate of how sensitively the conclusions depend on it is instructive. We find that, if we had used G_r/G_b = 1 (instead of 2), we would have obtained S_b = 32/7, G_b = 3/7, S_r = 4/7, G_r = 3/7; had we used G_r/G_b = 1/2, we would have obtained S_b = 4, G_b = 1, S_r = 1/2, G_r = 1/2. Thus, a relatively robust conclusion for the sources of warm halo gas emerges: (1) for red >=0.1 L* galaxies (F_r + A_r) and G_r have the same magnitude, (2) for blue >=0.1 L* galaxies (F_b + A_b) overwhelmingly dominates over G_b. It is prudent to have a consistency check for the conclusion that star formation feedback may dominate heating of the warm gas that produces the observed O VI absorbers in blue galaxies. In Figure 5 the horizontal blue dot-dashed line is obtained by assuming a Chabrier-like IMF, as used in the simulations, which translates to (2/3) x 10^-5 SFR x t_cool x c^2 / (k x 10^5.5 K), where c is the speed of light and k the Boltzmann constant; it is also assumed that 2/3 of the initial supernova energy is converted to gas thermal energy, which is the asymptotic value for Sedov explosions, SFR is the respective average star formation rate per >=0.1 L* blue galaxy, and t_cool = 6 x 10^8 yrs is an estimated cooling time for warm halo gas. From this illustration we see that with about 20% efficiency of heating warm gas, star formation feedback energy is already adequate to account for all the observed warm gas around blue galaxies. We therefore conclude that the required energy from star formation feedback to heat up the warm gas is available and our conclusions are self-consistent, even if the direct accretion contribution is zero, which we will show it is not. Our results on warm gas mass are also in reasonable agreement with observations of Tumlinson et al. (2011a), as is the oxygen mass contained in the warm component, shown in Figure 6. Let us now determine F_r and A_r (F_b and A_b) individually in the following way. We compute the warm metal masses that have inward and outward radial velocities within a radial shell at r = [50, 150] kpc, separately for all red > 0.1 L* galaxies and all blue > 0.1 L* galaxies, denoting the inflow warm metal mass as MZ(v_r < 0) and the outflow warm metal mass as MZ(v_r > 0), where v_r is the radial velocity of a gas element, with positive being outflowing and negative being inflowing. We define the inflow warm metal fraction as f_in = MZ(v_r < 0)/(MZ(v_r < 0) + MZ(v_r > 0)), which is listed as the first of the three elements in each entry in Table 1 under the column |v_r| > 0 km/s. We also compute the mean metallicities (in solar units) for the inflow and outflow warm gas, which are the second and third of the three elements in each entry in Table 1. Four separate cases are given: (1) red galaxies in the C run (C red), (2) red galaxies in the V run (V red), (3) blue galaxies in the C run (C blue), (4) blue galaxies in the V run (V blue).
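The algebra behind Equations (3)-(6) and the sensitivity test quoted above is simple enough to verify directly; a minimal sketch, assuming nothing beyond Equations (3)-(5):

```python
import numpy as np

# Solve the warm-gas budget, Equations (3)-(5), for x = [S_b, G_b, S_r, G_r],
# and check how the answer changes with the assumed G_r/G_b ratio.
def solve_budget(gr_over_gb):
    A = np.array([
        [1.0, 1.0, 0.0, 0.0],          # S_b + G_b = 5          (Eq. 3, blue)
        [0.0, 0.0, 1.0, 1.0],          # S_r + G_r = 1          (Eq. 3, red)
        [1.0, 0.0, -8.0, 0.0],         # S_b = 8 S_r            (Eq. 4)
        [0.0, gr_over_gb, 0.0, -1.0],  # G_r = (G_r/G_b) * G_b  (Eq. 5)
    ])
    b = np.array([5.0, 1.0, 0.0, 0.0])
    return np.linalg.solve(A, b)

for ratio in (2.0, 1.0, 0.5):
    S_b, G_b, S_r, G_r = solve_budget(ratio)
    print(f"G_r/G_b = {ratio:3.1f} -> S_b={S_b:.3f}, G_b={G_b:.3f}, "
          f"S_r={S_r:.3f}, G_r={G_r:.3f}")
# ratio = 2.0 reproduces Equation (6): S_b = 24/5, G_b = 1/5, S_r = 3/5, G_r = 2/5.
```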
In order to make sure that inflow and outflow are not confused with random motions of gas in a Maxwellian-like distribution, we separately limit the magnitude of the infall and outflow radial velocities to greater than 100 km/s and 250 km/s, and list the computed quantities under the third column (|v_r| > 100 km/s) and the fourth column (|v_r| > 250 km/s), respectively.
Table 1. Warm Inflow and Outflow in the r = [50-150] kpc Radial Shell. Each entry gives (f_in, Z_in/Zsun, Z_out/Zsun).
          |v_r| > 0 km/s        |v_r| > 100 km/s      |v_r| > 250 km/s
C red     (58%, 0.27, 0.17)     (58%, 0.29, 0.17)     (59%, 0.31, 0.17)
V red     (51%, 0.21, 0.29)     (61%, 0.18, 0.26)     (65%, 0.11, 0.33)
C blue    (54%, 0.099, 0.10)    (55%, 0.099, 0.10)    (55%, 0.099, 0.10)
V blue    (52%, 0.10, 0.14)     (52%, 0.09, 0.16)     (46%, 0.08, 0.24)
Fig. 6.— shows the cumulative metal mass as a function of radius for cold (dashed curves), warm-hot (solid curves) and hot gas (dotted curves) around blue (blue curves) and red (red curves) galaxies at z = 0.2. It is interesting to first take a closer look at the difference in metallicities between inflow and outflow gas. The warm inflow gas in red galaxies in the C run has consistently higher metallicity than the warm outflow gas, Z_in = (0.27-0.31) Zsun versus Z_out = 0.17 Zsun. The opposite holds for red galaxies in the V run: Z_in = (0.11-0.21) Zsun versus Z_out = (0.26-0.33) Zsun. The warm inflow gas in blue galaxies in the C run has about the same metallicity as the warm outflow gas, at Z = (0.09-0.1) Zsun. The warm inflow gas in blue galaxies in the V run, on the other hand, has a substantially lower metallicity than the warm outflow gas, Z_in = (0.08-0.10) Zsun versus Z_out = (0.14-0.24) Zsun. Except in the case of C blue, we note that the inflow and outflow gas have different metallicities, with the difference being larger when a higher flow velocity is imposed in the selection. This difference in metallicity demonstrates that the warm inflows and outflows are distinct dynamical entities, not random motions in a well-mixed gas, making our distinction of inflows and outflows physically meaningful. A physical explanation for the metallicity trends found can be made as follows. In a low density environment (i.e., in the V run) the circumgalactic medium has not been enriched to a high level and hot gas is not prevalent. As a result, warm (and possibly cold) inflows of relatively low metallicities still exist at low redshift. The progression from blue to red galaxies in the V run reflects a progression from very low density regions (i.e., true voids) to dense filaments and group environments, with higher metallicities for both inflows and outflows in the denser environments in the V run; but the difference between inflow and outflow metallicities remains. For red galaxies in high density environments (C run) the circumgalactic medium has been enriched to higher metallicities.
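For concreteness, a minimal sketch of how the entries of Table 1 (f_in and the mass-weighted inflow/outflow metallicities) could be computed from per-cell quantities; the arrays below are hypothetical placeholders, whereas in the paper they come from warm (10^5-10^6 K) gas cells in the r = 50-150 kpc shell around each galaxy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10000
metal_mass = rng.lognormal(mean=0.0, sigma=1.0, size=n_cells)   # arbitrary units
metallicity = rng.uniform(0.05, 0.4, size=n_cells)              # in solar units
v_r = rng.normal(loc=0.0, scale=150.0, size=n_cells)            # radial velocity [km/s]

def inflow_outflow_stats(metal_mass, metallicity, v_r, vcut=0.0):
    """f_in = MZ(v_r < -vcut) / (MZ(v_r < -vcut) + MZ(v_r > vcut)), plus
    mass-weighted mean metallicities of the inflowing and outflowing gas."""
    sel_in, sel_out = v_r < -vcut, v_r > vcut
    mz_in, mz_out = metal_mass[sel_in].sum(), metal_mass[sel_out].sum()
    f_in = mz_in / (mz_in + mz_out)
    z_in = np.average(metallicity[sel_in], weights=metal_mass[sel_in])
    z_out = np.average(metallicity[sel_out], weights=metal_mass[sel_out])
    return f_in, z_in, z_out

for vcut in (0.0, 100.0, 250.0):
    f_in, z_in, z_out = inflow_outflow_stats(metal_mass, metallicity, v_r, vcut)
    print(f"|v_r| > {vcut:5.0f} km/s : f_in = {f_in:.2f}, "
          f"Z_in = {z_in:.2f} Zsun, Z_out = {z_out:.2f} Zsun")
```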
Higher cooling rates of higher-metallicity gas in relatively hot environments preferentially produce higher-metallicity warm gas, i.e., gas that originates from hot gas and has now cooled to become warm. The blue galaxies in the C run are primarily in cosmic filaments and the metallicity of the inflow gas is about 0.1 Zsun, which happens to coincide with the metallicity of the outflow gas. One needs to realize that at the radial shell r = [50-150] kpc over which the tabulated quantities are computed, the outflow gas, which originated in star-forming regions, has loaded a substantial amount of interstellar and circumgalactic medium in the propagation process. Let us now turn to the warm inflow and outflow metal mass. It appears that the fraction of inflow warm metals (out of all warm metals) lies in a relatively narrow range, f_in = 45-65%. For our present purpose we will simply take F_r = A_r and F_b = A_b. Armed with these two relations, our best estimates for the various contributions to the observed warm halo metals, as a good proxy for the O VI absorption, can be summarized as follows. • For red >=0.1 L* galaxies at z = 0.2 the contributions to warm metals in the halo gas from star formation feedback (F_r), accretion of intergalactic medium (A_r) and gravitational shock heating (G_r) are (F_r, A_r, G_r) = (30%, 30%, 40%). • For blue >=0.1 L* galaxies at z = 0.2 the contributions to warm metals in the halo gas from the three sources are (F_b, A_b, G_b) = (48%, 48%, 4%). • Dependencies of warm halo gas metallicities on galaxy type and environment are complex but physically understandable. For red galaxies, the metallicity of inflowing warm gas increases with increasing environmental overdensity, whereas that of outflowing warm gas decreases with increasing environmental overdensity. For blue galaxies, the metallicity of inflowing warm gas depends very weakly on environmental overdensity, whereas that of outflowing warm gas decreases with increasing environmental overdensity. As a whole, the mean metallicity of warm halo gas in red galaxies is ~0.25 Zsun, while that of blue galaxies is ~0.11 Zsun. We suggest that these estimates of the source fractions are not seriously in error on average, if one is satisfied with an accuracy of a factor of two. The relative metallicity estimates should be quite robust, with errors much smaller than a factor of two. It is stressed that these estimates are averaged over many red and blue galaxies, and one should not be led to think that the correlations (such as between warm gas mass and SFR) hold strictly for individual galaxies. Rather, we expect large variations from galaxy to galaxy, even at a fixed star formation rate. Figure 7 makes this important point clear: while there is a positive correlation between warm metal mass within a 150 kpc radius and SFR for galaxies with non-negligible SFR (i.e., appearing in the SFR range shown), a dispersion of ~1 dex in warm metal mass exists at a fixed SFR in the range of 0.1-100 Msun/yr. The goodness of the fit can be used as a way to rephrase this significant dispersion.
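A minimal sketch of how the Figure 7 fit and its goodness could be computed; the arrays below are hypothetical placeholders, whereas in the paper each point is a simulated galaxy:

```python
import numpy as np

# Hypothetical stand-in for the simulated galaxy sample: log SFR and
# log warm-metal mass within 150 kpc, with ~1 dex intrinsic scatter.
rng = np.random.default_rng(1)
log_sfr = rng.uniform(-1.0, 2.0, size=300)
log_mz = 0.32 * log_sfr + 7.1 + rng.normal(0.0, 0.9, size=300)

# Linear regression log MZ = a * log SFR + b.
a, b = np.polyfit(log_sfr, log_mz, 1)

# Goodness of fit assuming a fixed 1 dex errorbar on each log MZ value, as in
# the text; a chi^2 per degree of freedom near unity then simply restates a
# ~1 dex dispersion about the relation.
sigma = 1.0
resid = log_mz - (a * log_sfr + b)
chi2_dof = np.sum((resid / sigma) ** 2) / (len(log_mz) - 2)
print(f"best fit: log MZ = {a:.2f} log SFR + {b:.2f}, chi2/dof = {chi2_dof:.2f}")
```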
If one assumes that the errorbar on each log mass determination for each shown galaxy is 1 dex, one finds that the chi^2 per degree of freedom of the fitting line (green) is 0.80, indicating that the correlation between log MZ(T = 10^5-10^6 K) and log SFR is only good to about 1 dex in warm metal gas mass. Fig. 7.— shows the metal mass in the warm gas, MZ, within a galactocentric radius of 150 kpc as a function of the SFR of the galaxy at z = 0.2. Each red dot is a galaxy. The green curve shows the best linear regression, log [MZ(T = 10^5-10^6 K)/Msun] = 0.32 log SFR + 7.1, for the galaxies shown. 3.4. Composition of Low-z Halo Gas In Cen (2012a) we show that the properties of O VI absorption lines at low redshift, including their abundance, Doppler-column density distribution, temperature range, metallicity and the coincidence between O VII and O VI lines, are all in good agreement with observations (Danforth & Shull 2008; Tripp et al. 2008; Yao et al. 2009). In the above we have shown that the O VI-galaxy relations as well as the oxygen mass in galaxies in the simulations are also in excellent agreement with observations. These tests together are non-trivial and lend us significant confidence to now examine the overall composition of halo gas at low-z. Figure 8 shows the differential (left panel) and cumulative (right panel) mass fractions of each gas component as a function of galactocentric distance for red (red curves) and blue (blue curves) galaxies. Fig. 8.— shows the differential (left panel) and cumulative (right panel) gas mass fractions as a function of radius for cold (dashed curves), warm-hot (solid curves) and hot gas (dotted curves) around blue (blue curves) and red (red curves) > 0.1 L* galaxies at z = 0.2. Fig. 9.— shows the differential (left panel) and cumulative (right panel) gas metals mass fractions as a function of radius for cold (dashed curves), warm-hot (solid curves) and hot gas (dotted curves) around blue (blue curves) and red (red curves) galaxies at z = 0.2. We note that the fluctuating behaviors (mostly in the differential functions on the left panel) are due to occasional dense cold clumps in neighboring galaxies. Overall, we see that within about (10, 30) kpc for (red, blue) galaxies the cold (T < 10^5 K) gas component completely dominates, making up about (80%, > 95%) of all gas at these radii. For both > 0.1 L* (red, blue) galaxies cold gas remains the major component up to r = (30, 150) kpc, within which its mass comprises 50% of all gas. At r > (30, 200) kpc for (red, blue) galaxies the hot (T > 10^6 K) gas component dominates. The warm gas component, while having been extensively probed observationally, appears to be a minority in both red and blue galaxies at all radii.
The warm component's contribution to the overall gas content reaches its peak value of ~30% at r = 100-300 kpc for blue galaxies, whereas in red galaxies it is negligible at r < 10 kpc and hovers around the 5% level at r = 30-1000 kpc. The prevalence of cold gas at small radii in red (i.e., low star formation activity) galaxies is intriguing and perhaps surprising to some extent. Some recent observations indicate that early-type galaxies in the real universe do appear to contain a substantial amount of cold gas, consistent with our findings. For example, Thom et al. (2012) infer a mean mass of 10^9-10^11 Msun of gas with T < 10^5 K at r < 150 kpc based on a sample of 15 early-type galaxies at low redshift from COS observations, which is shown as the red square (its horizontal position is slightly shifted to the right for display clarity) in Figure 5. Their inferred range is in fact consistent with our computed value of ~6 x 10^10 Msun of cold T < 10^5 K gas for red > 0.1 L* galaxies, shown as the red dashed curve in Figure 5. Figure 9, which is analogous to Figure 8, shows the corresponding distributions of the metals mass fractions in the three components for red and blue galaxies. The overall trends are similar to those for the total warm gas mass. We note one significant difference here. The overall dominance of the metals mass in cold gas extends further out radially for both red and blue galaxies, whereas the contributions to the metal mass from the other two components are compensatorily reduced. For example, we find that the radius within which the cold component makes up 50% of the total gas in (red, blue) galaxies is (40, 150) kpc, whereas the radius within which the cold component makes up 50% of the total gas metals in (red, blue) galaxies becomes (200, 500) kpc. This is largely due to the significantly higher metallicity of the cold component in both red and blue galaxies compared to the other two components, as shown in Figure 10. We also note from Figure 10 that the warm gas in red galaxies has a higher metallicity than in blue galaxies, as found earlier, with mean metallicities of (~0.25 Zsun, ~0.11 Zsun) in (red, blue) galaxies within a radius of 150 kpc. Fig. 10.— shows as a function of radius the metallicity for cold (dashed curves), warm-hot (solid curves) and hot gas (dotted curves) around blue (blue curves) and red (red curves) galaxies at z = 0.2. 4." + }, + { + "url": "http://arxiv.org/abs/1210.3600v1", + "title": "Nature of Lyman Alpha Blobs: Powered by Extreme Starbursts", + "abstract": "We present a new model for the observed Lyman alpha blobs (LABs) within the\ncontext of the standard cold dark matter model. In this model, LABs are the\nmost massive halos with the strongest clustering (proto-clusters) undergoing\nextreme starbursts in the high-z universe. Aided by calculations of detailed\nradiative transfer of Lya photons through ultra-high resolution (159pc)\nlarge-scale (>30Mpc) adaptive mesh-refinement cosmological hydrodynamic\nsimulations with galaxy formation, this model is shown to be able to, for the\nfirst time, reproduce simultaneously the global Lya luminosity function and\nluminosity-size relation of the observed LABs.
Physically, a combination of\ndust attenuation of Lya photons within galaxies, clustering of galaxies, and\ncomplex propagation of Lya photons through circumgalactic and intergalactic\nmedium gives rise to the large sizes and frequently irregular isophotal shapes\nof LABs that are observed. A generic and unique prediction of this model is\nthat there should be strong far-infrared (FIR) sources within each LAB, with\nthe most luminous FIR source likely representing the gravitational center of\nthe proto-cluster, not necessarily the apparent center of the Lya emission of\nthe LAB or the most luminous optical source. Upcoming ALMA observations should\nunambiguously test this prediction. If verified, LABs will provide very\nvaluable laboratories for studying formation of galaxies in the most overdense\nregions of the universe at a time when global star formation is most vigorous.", + "authors": "Renyue Cen, Zheng Zheng", + "published": "2012-10-12", + "updated": "2012-10-12", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction The physical origin of spatially extended (tens to hundreds of kiloparsecs) luminous (L_Lya >= 10^43 erg/s) Lya sources, also known as Lya blobs (LABs), first discovered more than a decade ago (e.g., Francis et al. 1996; Fynbo et al. 1999; Keel et al. 1999; Steidel et al. 2000), remains a mystery. By now several tens of LABs have been found (e.g., Matsuda et al. 2004; Dey et al. 2005; Saito et al. 2006; Smith et al. 2009; Matsuda et al. 2011). One fact that has confused the matter considerably is that they appear to be associated with a very diverse galaxy population, including regular Lyman break galaxies (LBGs) (e.g., Matsuda et al. 2004), ultra-luminous infrared galaxies (ULIRGs) and sub-millimeter galaxies (SMGs) (e.g., Chapman et al. 2001; Geach et al. 2005, 2007; Matsuda et al. 2007; Yang et al. 2011b), unobscured (e.g., Bunker et al. 2003; Weidinger et al. 2004) and obscured quasars (e.g., Basu-Zych & Scharf 2004; Geach et al. 2007; Smith et al. 2009), or either starbursts or obscured quasars (e.g., Geach et al. 2009; Scarlata et al. 2009; Colbert et al. 2011). An overarching feature, however, is that the vast majority of them are associated with massive halos or rich large-scale structures that reside in dense parts of the Universe and will likely evolve to become rich clusters of galaxies by z = 0 (e.g., Steidel et al. 2000; Chapman et al. 2004; Matsuda et al. 2004; Palunas et al. 2004; Matsuda et al. 2006; Prescott et al. 2008; Matsuda et al. 2009; Yang et al. 2009; Webb et al. 2009; Weijmans et al. 2010; Matsuda et al. 2011; Erb et al. 2011; Yang et al. 2011a; Zafar et al. 2011). Another unifying feature is that LABs are strong infrared emitters. For instance, most of the 35 LABs with size > 30 kpc identified by Matsuda et al. (2004) in the SSA 22 region have been detected in deep Spitzer observations (Webb et al. 2009). Many physical models of LABs have been proposed. A leading contender is the gravitational cooling radiation model in which gas that collapses inside a host dark matter halo releases a significant fraction of its gravitational binding energy in Lya line emission (e.g., Haiman et al. 2000; Fardal et al.
2001; Birnboim & Dekel 2003; Dijkstra et al. 2006; Yang et al. 2006; Dijkstra & Loeb 2009; Goerdt et al. 2010; Faucher-Giguère et al. 2010; Rosdahl & Blaizot 2012). The strongest observational support for this model comes from two LABs that appear not to be associated with any strong AGN/galaxy sources (Nilsson et al. 2006; Smith et al. 2008), although the lack of sub-mm data in the case of Nilsson et al. (2006) and a loose constraint of <=550 Msun/yr (3 sigma) in the case of Smith et al. (2008) both leave room to accommodate AGN/galaxy powered models. Another tentative support is claimed to come from the apparent positive correlation between velocity width (represented by the full width at half maximum, or FWHM, of the line) and Lya luminosity (Saito et al. 2008), although the observed correlation FWHM proportional to L_Lya appears to be much steeper than the (approximate) expectation FWHM proportional to L_Lya^(1/3) for virialized systems. Other models include photoionization of cold, dense, spatially extended gas by obscured quasars (e.g., Haiman & Loeb 2001; Geach et al. 2009), by population III stars (e.g., Jimenez & Haiman 2006), or by spatially extended inverse Compton X-ray emission (e.g., Scharf et al. 2003), emission from dense, cold superwind shells (e.g., Taniguchi & Shioya 2000; Ohyama et al. 2003; Mori et al. 2004; Wilman et al. 2005; Matsuda et al. 2007), or a combination of photoionization and gravitational cooling radiation (e.g., Furlanetto et al. 2005).
The relative contribution to the overall Ly\u03b1 emission from each individual galaxy depends on a number of variables, including dust attenuation of Ly\u03b1 photons within the galaxy and propagation and di\ufb00usion processes through its complex circumgalactic medium and the intergalactic medium. Another major predictions of this model is that a large fraction of the stellar (and AGN) optical and ultraviolet (UV) radiation (including Ly\u03b1 photons) is reprocessed by dust and emerges as infrared (IR) radiation, consistent with observations of ubiquitous strong infrared emission from LABs. We should call this model simply \u201cstarburst model\u201d (SBM), encompassing those with or without contribution from central AGNs. This model automatically includes emission contribution from gravitational cooling radiation, which is found to be signi\ufb01cant but sub-dominant compared to stellar radiation. Interestingly, we also \ufb01nd that Ly\u03b1 emission originating from nebular emission (rather than the stellar emission), which includes contribution from gravitational binding energy due to halo collapse, is more centrally concentrated than that from stars. One potentially very important prediction is that in this model the Ly\u03b1 emission from photons that escape to us is expected to contain signi\ufb01cant polarization signals. Although polarization radiative transfer calculations will be performed to detail the polarization signal in a future study, we brie\ufb02y elaborate the essential physics and latest observational advances here. One may broadly \ufb01le all the proposed models into two classes in terms of the spatial \f\u2013 4 \u2013 distribution of the underlying energy source: central powering or in situ. Starburst galaxy and AGN powered models belong to the former, whereas gravitational cooling radiation model belongs to the latter. A smoking gun test between these two classes of models is the polarization signal of the Ly\u03b1 emission. In the case of a central powering source (not necessarily a point source) the Ly\u03b1 photons di\ufb00use out, spatially and in frequency, through optically thick medium and escape by a very large number of local resonant scatterings in the Ly\u03b1 line pro\ufb01le core and a relatively smaller number of scatterings in the damping wings with long \ufb02ights. Upon each scattering a Ly\u03b1 photon changes its direction, location and frequency, dependent upon the geometry, density and kinematics of the scattering neutral hydrogen atoms. In idealized models with central powering signi\ufb01cant linear polarizations of tens of percent on scales of tens to hundreds of kiloparsecs are predicted and the polarization signal strength increases with radius (e.g., Lee & Ahn 1998; Rybicki & Loeb 1999; Dijkstra & Loeb 2008). On the other hand, in situ radiation from the gravitational cooling model is not expected to have signi\ufb01cant polarizations (although detailed modeling will be needed to quantify this) or any systematic radial trend, because thermalized cooling gas from (likely) \ufb01laments will emit Ly\u03b1 photons that are either not scattered signi\ufb01cantly or have no preferential orientation or impact angle with respect to the scattering medium. An earlier attempt to measure polarization of LABd05 at z = 2.656 produced a null detection (Prescott et al. 2011). A more recent observation by Hayes et al. 
(2011), for the first time, detected a strong polarization signal tangentially oriented (almost forming a complete ring) from LAB1 at z = 3.05, whose strength increases with radius from the LAB center, a signature that is expected from central powering; they found a polarized fraction (P) of 20 percent at a radius of 45 kpc. Hayes et al. (2011) convincingly demonstrate their detection and, at the same time, explain the consistency of their result with the non-detection by Prescott et al. (2011), if the emission from LABd05 is in fact polarized, thanks to a significant improvement in sensitivity and spatial resolution in Hayes et al. (2011). This latest discovery lends great support to models with central powering, including the SBM, independent of other observational constraints that may or may not differentiate between the two classes of models or between models in each class. But we stress that detailed polarization calculations will be needed to enable statistical comparisons. The outline of this paper is as follows. In §2.1 we detail the simulation parameters and hydrodynamics code, followed by a description of our Lya radiative transfer method in §2.2. Results are presented in §3 with conclusions given in §4. 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the AMR Eulerian hydro code, Enzo (Bryan & Norman 1999; Joung et al. 2009). First we ran a low resolution simulation with a periodic box of 120 h^-1 Mpc (comoving) on a side. We identified a region centered on a cluster of mass ~3 x 10^14 Msun at z = 0. We then resimulated the chosen region at high resolution, embedded in the outer 120 h^-1 Mpc box, to properly take into account the large-scale tidal field and appropriate boundary conditions at the surface of the refined region. The refined region has a comoving size of 21 x 24 x 20 h^-3 Mpc^3 and represents a 1.8 sigma matter density fluctuation on that volume. The dark matter particle mass in the refined region is 1.3 x 10^7 h^-1 Msun. The refined region is surrounded by three layers (each of ~1 h^-1 Mpc) of buffer zones with particle masses successively larger by a factor of 8 for each layer, which then connect with the outer root grid that has a dark matter particle mass 84 times that in the refined region. We choose the mesh refinement criterion such that the resolution is always better than 111 h^-1 pc (physical), corresponding to a maximum mesh refinement level of 13 at z = 0. The simulations include a metagalactic UV background (Haardt & Madau 1996), and a model for shielding of UV radiation by neutral hydrogen (Cen et al. 2005). They include metallicity-dependent radiative cooling (Cen et al. 1995). Our simulations also solve the relevant gas chemistry chains for molecular hydrogen formation (Abel et al. 1997), molecular formation on dust grains (Joung et al. 2009), and metal cooling extended down to 10 K (Dalgarno & McCray 1972). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of ~10^6 Msun. Supernova feedback from star formation is modeled following Cen et al. (2005).
Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the specific volume of each cell, which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). We allow the entire feedback process to be hydrodynamically coupled to the surroundings and subject to relevant physical processes, such as cooling and heating. The total amount of explosion kinetic energy from Type II supernovae for an amount of stars formed M* with a Chabrier initial mass function (IMF) is e_SN M* c^2 (where c is the speed of light) with e_SN = 6.6 x 10^-6. Taking into account the contribution of prompt Type I supernovae, we use e_SN = 1 x 10^-5 in our simulations. Observations of local starburst galaxies indicate that nearly all of the kinetic energy produced by star formation is used to power galactic superwinds (e.g., Heckman 2001). Supernova feedback is important primarily for regulating star formation and for transporting energy and metals into the intergalactic medium. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported in a physically sound (albeit still approximate at the current resolution) way. The kinematic properties traced by unsaturated metal lines in damped Lyman-alpha systems (DLAs) are extremely tough tests of the model, which is shown to agree well with observations (Cen 2012b). We use the following cosmological parameters that are consistent with the WMAP7-normalized (Komatsu et al. 2010) LCDM model: Omega_M = 0.28, Omega_b = 0.046, Omega_Lambda = 0.72, sigma_8 = 0.82, H_0 = 100h km s^-1 Mpc^-1 = 70 km s^-1 Mpc^-1 and n = 0.96. This simulation has been used (Cen 2011b) to quantify the partitioning of stellar light into optical and infrared light, through ray tracing of continuum photons in a dusty medium that is based on self-consistently computed metallicity and gas density distributions. We identify galaxies in our high resolution simulations using the HOP algorithm (Eisenstein & Hu 1999), operated on the stellar particles, which is tested to be robust and insensitive to specific choices of the concerned parameters within reasonable ranges. Satellites within a galaxy are clearly identified separately. The luminosity of each stellar particle in each of the Sloan Digital Sky Survey (SDSS) five bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. Collecting the luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, star formation rate, luminosities in the five SDSS bands (and various colors) and others. At a spatial resolution of 159 pc (physical) with nearly 5000 well resolved galaxies at z ~ 3, this simulated galaxy catalog presents an excellent (by far, the best available) tool to study galaxy formation and evolution. 2.2. Lya Radiative Transfer Calculation The AMR simulation resolution is 159 pc at z = 3.
For each galaxy we produce a cylinder of size (2 Rvir) x (2 Rvir) x (42 Rvir) on a uniform grid of cell size 318 pc, where Rvir is the virial radius of the host halo. The purpose of using the elongated geometry is to incorporate the line-of-sight structures. Subsequently, in our Lya radiative transfer calculation, the line-of-sight direction is set to be along the longest dimension of the cylinder. In each cell of a cylinder, Lya photon emissivities are computed separately for star formation and cooling radiation. The luminosity of Lya produced by star formation is computed as L_Lya = 10^42 [SFR/(Msun yr^-1)] erg s^-1 (Furlanetto et al. 2005), where SFR is the star formation rate in the cell. The Lya emission from cooling radiation is computed with the gas properties in the cell by following the rates of excitation and ionization. With the Lya emissivity, neutral hydrogen density, temperature, and velocity in the simulations, a Monte Carlo code (Zheng & Miralda-Escudé 2002) is adopted to follow the Lya radiative transfer. The code has recently been used to study Lya emitting galaxies (Zheng et al. 2010, 2011a,b). In our radiative transfer calculation, the number of Lya photons drawn from a cell is proportional to the total Lya luminosity in the cell, with a minimum number of 1000, and each photon is given a weight in order to reproduce the luminosity of the cell. Lya photons associated with star formation and cooling radiation are tracked separately so that we can study their final spatial distributions. For each photon, the scattering with neutral hydrogen atoms and the subsequent changes in frequency, direction, and position are followed until it escapes from the simulation cylinder. More details about the code can be found in Zheng & Miralda-Escudé (2002) and Zheng et al. (2010). The pixel size of the Lya images from the radiative transfer calculation is chosen to be equal to 318 pc, corresponding to 0.04''. We smooth the Lya images with 2D Gaussian kernels to match the resolutions in Matsuda et al. (2011) for detecting and characterizing LABs from observation. In Matsuda et al. (2011), the area of an LAB is the isophotal area with a threshold surface brightness of 1.4 x 10^-18 erg s^-1 cm^-2 arcsec^-2 in the narrowband image smoothed to an effective seeing of FWHM 1.4'' (slightly different from Matsuda et al. 2004, where FWHM=1''), while the Lya luminosity is computed with the isophotal aperture in the FWHM=1'' image. We define LABs in our model by applying a friends-of-friends algorithm to link the pixels above the threshold surface brightness in the computed Lya images, with the area and luminosity computed from smoothed images with FWHM=1.4'' and FWHM=1'', respectively. 3. Results The SBM model that we study here in great detail may appear at odds with available observations at first sight. In particular, the LABs often lack close correspondence with galaxies in the overlapping fields and their centers are often displaced from the brightest galaxies in the fields. As we show below, these puzzling features are in fact exactly what is expected in the SBM model. The reasons are primarily three-fold. First, LABs universally arise in large halos with a significant number of galaxies clustered around them.
Second, dust attenuation renders the amount of Lya emission emerging from a galaxy substantially sub-linearly dependent on star formation rate. Third, the observed Lya emission, in both amount and three-dimensional (3D) location, originating from each galaxy depends on the complex scattering processes that follow. Fig. 1.— Two examples: left (a) and right (b) columns; see the caption below with columns (c) and (d). Fig. 1.— Two more examples: left (c) and right (d) columns. The four columns (a,b,c,d) show the logarithm of Lya surface brightness maps (in units of erg s^-1 cm^-2 arcsec^-2) for four randomly selected large galaxies of virial masses exceeding 10^12 Msun at z = 3.1, with the primary galaxy centered on its respective panel. For each column the bottom panel is obtained if one only includes galaxies within +/-Rvir of the primary galaxy along the line of sight, where Rvir is the virial radius of the primary galaxy. The top panel is obtained including all galaxies within +/-10 h^-1 Mpc comoving of the primary galaxy along the line of sight. The length shown is in physical kpc. The effects of dust and faint sources have not been included yet in these plots (see the text for more details). 3.1. Effects Caused by Galaxy Clustering We find that large-scale structure and clustering of galaxies play a fundamental role in shaping all aspects of LABs, including the two-dimensional line-of-sight velocity structure, the line profile and the Lya image in the sky plane. To illustrate this, Figure 1 shows Lya surface brightness maps (after the radiative transfer calculation) for four randomly selected galaxies with the virial mass of the central galaxy exceeding 10^12 Msun at z = 3.1. We find that the Lya emission stemming from stellar radiation dominates over that from gas cooling by about 10:1 to 4:1 in all relevant cases. We also find that the Lya emission due to gas cooling is at least as centrally concentrated as that from the stellar emission for each galaxy. From this figure it has become clear that large-scale structure and projection effects are instrumental to rendering the appearance of LABs in all aspects (image as well as spectrum). One can see, for example in the top-left panel of Figure 1, that the approximately linear structure aligned in the direction of lower-left to upper-right is composed of three additional galaxies that are well outside the virial radius of the primary galaxy but arise from projected structures.
At the 1.4 x 10^-18 erg s^-1 cm^-2 arcsec^-2 detection isophotal contours of Matsuda et al. (2004) and Matsuda et al. (2011) for LABs, the entire linear structure may be identified as a single LAB. This rather random example is strikingly reminiscent of the observed LAB structures (e.g., Matsuda et al. 2009; Erb et al. 2011; Yang et al. 2011a). Interestingly, depending on which galaxy is brighter and located in the front or back, the overall Lya emission of the LAB may show a variety of line profiles. For example, it could easily account for a broad/brighter blue side in the line profile, as noted by Saito et al. (2006) for some of the observed LABs, which was originally taken as supportive evidence for the gravitational cooling radiation model. Furthermore, it is not difficult to envision that the overall velocity width of an LAB does not necessarily reflect the virial velocity of a virialized system and may display a wide range, from small (masked by the caustics effect) to large (caused by either large virial velocities, infall velocities, or Hubble expansion). A detailed spectral analysis will be presented elsewhere. For the results shown in Figure 1 we have not included the dust effect, contributions from small galaxies (Mh < 10^9.5 Msun) that are not properly captured in our simulation due to finite resolution, and instrumental noise. We now describe how we include these important effects. 3.2. Taking into Account Faint, Under-resolved Sources Although the resolution of our simulations is high, it is still finite and small sources are incomplete. We find that the star formation rate (SFR) function in the simulation flattens out at 3 Msun/yr toward lower SFR at z = 2-3 (Cen 2011b), which likely means that sources with SFR < 3 Msun/yr are unresolved/under-resolved and hence incomplete in the simulations. Since these low SFR sources that cluster around large galaxies contribute to the Lya emission of LABs, it is necessary to include them in our modeling. For this purpose, we need to sample their SFR distribution and spatial distribution inside halos. First, we need to model the luminosity or SFR distribution of the faint, unresolved sources. In each LAB-hosting halo in the simulations, the number of (satellite) sources with SFR > 3 Msun/yr is found to be proportional to the halo mass Mh. Observationally, the faint end slope alpha of the UV luminosity function of star forming galaxies is ~-1.8 (e.g., Reddy & Steidel 2009). Given this faint end slope, the contribution due to faint, unresolved sources is weakly convergent. As a result, the overall contribution from faint sources does not strongly depend on the faint limit of the correction procedure. We find that the conditional SFR function phi(L; Mh) of faint sources (SFR < 3 Msun/yr) in halos can be modeled as phi(L; Mh) = dN(Mh)/dL = -[(alpha + 1)/L_th] (L/L_th)^alpha (Mh/M_1), (1) where L represents the SFR and L_th = 3 Msun/yr, alpha = -1.8, and M_1 = 10^12 Msun. This conditional SFR function allows us to draw SFRs for the faint sources to be added in our model. We now turn to the spatial distribution of faint sources. In the simulation the spatial distribution (projected to the sky plane) of satellite sources in halos is found to closely follow a power-law with a slope of -2.
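A minimal sketch of how faint satellites could be drawn from the conditional SFR function of Equation (1) and placed with a projected r^-2 profile. The SFR and radial ranges used here follow those quoted in the next paragraph, and the interpretation of the slope of -2 as a projected surface density is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Equation (1): phi(L; M_h) = -(alpha+1)/L_th * (L/L_th)^alpha * (M_h/M_1),
# with alpha = -1.8, L_th = 3 Msun/yr, M_1 = 1e12 Msun.
alpha, L_th, M_1 = -1.8, 3.0, 1.0e12

def draw_faint_sources(M_h, L_min=0.01, R_vir=300.0, r_min_frac=0.01):
    """Draw SFRs [Msun/yr] and projected radii [kpc] of faint satellites.

    The radial range r_min_frac*R_vir .. R_vir and L_min follow the text; the
    projected surface density is assumed to scale as r^-2, which makes the
    radii log-uniformly distributed.
    """
    # Expected number: integral of phi over [L_min, L_th].
    n_exp = (M_h / M_1) * ((L_min / L_th) ** (alpha + 1.0) - 1.0)
    n = rng.poisson(n_exp)
    # Inverse-CDF sampling of the truncated power law dN/dL ~ L^alpha.
    u = rng.random(n)
    sfr = (L_min ** (alpha + 1.0) +
           u * (L_th ** (alpha + 1.0) - L_min ** (alpha + 1.0))) ** (1.0 / (alpha + 1.0))
    # Log-uniform projected radii between r_min_frac*R_vir and R_vir.
    r = r_min_frac * R_vir * (1.0 / r_min_frac) ** rng.random(n)
    return sfr, r

sfr, r = draw_faint_sources(M_h=2.0e12)
print(f"{len(sfr)} faint sources, median SFR = {np.median(sfr):.3f} Msun/yr, "
      f"median r = {np.median(r):.1f} kpc")
```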
This is in good agreement with the observed small scale slope of the projected two-point correlation function of LBGs (Ouchi et al. 2005). There is some direct observational evidence that there are faint UV sources distributed within the LAB radii. Matsuda et al. (2012) perform a stacking analysis of z ~ 3.1 Lya emitters and protocluster LBGs, showing a diffuse Lya profile in the stacked Lya image. Interestingly, the profiles in the stacked UV images appear to be extended to scales of tens of kpc (physical) for the most luminous Lya sources or for sources in protoclusters, suggesting contributions from faint, star-forming galaxies. We add the contribution from faint sources to the post-processed unsmoothed Lya images from the radiative transfer modeling as follows. For each model LAB, we draw the number and SFRs of faint sources in the range of 0.01-3 Msun/yr based on the conditional SFR distribution in equation (1). Then we distribute them in the unsmoothed Lya image in a radial range of 0.01-1 Rvir by following the power-law distribution with slope -2. The faint sources can be added as either point or extended sources in Lya emission. If added as point sources, they would be smoothed with a 2D Gaussian kernel of FWHM=1.4'' or 1'' when defining LAB size and luminosity. In our fiducial model, each faint source is added as an exponential disk with a scale length of 3'' to approximate the radiative transfer effect, which is consistent with the observed diffuse emission profile of star-forming galaxies (Steidel et al. 2011). We find that our final conclusion does not sensitively depend on our choice of the faint source Lya profile. In Figure 2, panel (a) shows the surface brightness and the 1.4 x 10^-18 erg s^-1 cm^-2 arcsec^-2 isophotal contour for a model LAB without including the faint sources, while panel (b) is the case with faint sources. We see that the size of the LAB defined by the isophotal aperture does not change much. If the Lya emission of each faint source is more concentrated, e.g., close to a point source in the unsmoothed image, the LAB size can increase a little. Therefore, in both panels (a) and (b), the size is mainly determined by the central bright source. However, as will be described in the next subsection, including the effect of dust extinction will suppress the contribution of the central source and relatively boost that of the faint sources in determining the LAB size. 3.3. Dust Effect In the cases shown in Figure 1, the central galaxies each have SFRs that exceed 100 Msun/yr and are expected to be observed as luminous infrared galaxies (LIRGs) or ULIRGs (Sanders & Mirabel 1996). This suggests that dust effects are important and have to be taken into account. Fig. 2.— An LAB under different model assumptions. The model LAB shown in this example resides in the most massive host halo in our simulation (~5 x 10^12 Msun) at z = 3.1. The Lya images are smoothed to correspond to a seeing of FWHM=1.4''. In each panel, the black contour is the isophotal level of 1.4 x 10^-18 erg s^-1 cm^-2 arcsec^-2, the surface brightness threshold used in observation to define LABs (Matsuda et al. 2004, 2011). Panels (a)-(d) enumerate the combinations of adding faint sources and extinction.
Panel (a) is the initial case without faint sources and without extinction. Panel (d) corresponds to the case with faint sources added and with extinction considered, which we regard as the favored model. See the text for more details. In general, there are two types of effects of dust on Lya emission from star-forming galaxies. The first one is related to the production of Lya photons. Dust attenuates ionizing photons in star-forming galaxies. Since Lya photons come from reprocessed ionizing photons, the attenuation by dust leads to a lower Lya luminosity in the first place. Second, after being produced, Lya photons can be absorbed by dust during propagation. A detailed investigation needs to account for both effects self-consistently, and we reserve that for a future study. In Cen (2011b) the dust obscuration/absorption is considered in a self-consistent way, with respect to luminosity functions observed in the UV and FIR bands. The modelling uses detailed ray tracing with a dust obscuration model based on that of our own Galaxy (Draine 2011) and an extinction curve taken from Cardelli et al. (1989). While the simultaneous match of both UV and FIR luminosity functions at z = 2 without introducing additional free parameters is an important validation of the physical realism of our simulations, it is not necessarily directly extendable to the radiative transfer of Lya photons. Nevertheless, it is reasonable to adopt a simple optical depth approach for our present purpose, normalized by relevant observations, as follows. For each galaxy we suppress the initial intrinsic Lya emission by applying a mapping of L_Lya to L_Lya exp[-tau(SFR)], where the "effective" optical depth tau(SFR) is intended to account for extinction of Lya photons as a function of SFR. We stress that this method is approximate and its validation is only reflected by the goodness of our model fitting the observed properties of LABs. We adopt tau(SFR) = 0.2 [SFR/(Msun yr^-1)]^0.6. In reality, there may in addition be a substantial scatter in tau(SFR) at a fixed SFR. We ignore such complexities in this treatment. The adopted trend that higher SFR galaxies have larger optical depths is fully consistent with observations (e.g., Nilsson & Møller 2009). At intrinsic SFR = 100 Msun/yr the escaped L_Lya luminosity is equivalent to SFR = 5 Msun/yr, whereas at intrinsic SFR = 10 Msun/yr the escaped L_Lya luminosity is equivalent to SFR = 4.5 Msun/yr. It is evident that the scaling of the emerging L_Lya luminosity on intrinsic SFR is substantially weakened with dust attenuation. In fact, it may be common that, due to the dust effect, the optical luminosity of a galaxy does not necessarily positively correlate with its intrinsic SFR, or the most luminous source in Lya does not necessarily correspond to the highest SFR galaxy within an LAB. As a result, a variety of image appearances and mismatches between the LAB centers and the most luminous galaxies detected in other bands may result, seemingly consistent with the anecdotal observational evidence mentioned in the introduction. The effect of dust on the surface brightness distribution for a model LAB is shown in panel (c) of Figure 2.
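A minimal sketch of the attenuation mapping just described, assuming only the tau(SFR) form quoted above and the star-formation emissivity L_Lya = 10^42 SFR erg/s used in the text; the "equivalent SFR" is simply the escaped luminosity divided by 10^42 erg/s per Msun/yr:

```python
import numpy as np

# Escaped Ly-alpha luminosity under the effective optical depth
# tau(SFR) = 0.2 * [SFR / (Msun/yr)]**0.6.
def escaped_lya_luminosity(sfr):
    tau = 0.2 * sfr ** 0.6
    l_intrinsic = 1.0e42 * sfr            # erg/s
    return l_intrinsic * np.exp(-tau)

for sfr in (1.0, 10.0, 100.0, 300.0):
    l_esc = escaped_lya_luminosity(sfr)
    print(f"SFR = {sfr:6.1f} Msun/yr -> escaped L_Lya = {l_esc:.2e} erg/s "
          f"(equivalent SFR ~ {l_esc / 1e42:.1f} Msun/yr)")
# The escaped luminosity grows much less than proportionally with the
# intrinsic SFR, which is the sub-linear scaling discussed in the text.
```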
Compared to panel (a), which is the model without the dust effect, we see that the surface brightness of the central source is substantially reduced and the isophotal area for the threshold of 1.4 x 10^-18 erg s^-1 cm^-2 arcsec^-2 is also reduced. The case in panel (c) does not include the contribution from faint sources. In general, taking into account the dust effect in our Lya radiative transfer calculation, the central galaxies tend to make reduced (absolutely and relative to other smaller nearby galaxies) contributions to the Lya surface brightness maps, and in fact the center of each LAB may or may not coincide with the primary galaxy, which would likely be a ULIRG in these cases; this is again reminiscent of some observed LABs. In the next subsection, we describe the modeling results of combining all the above effects. 3.4. Final LABs with All Effects Included By accounting for the line-of-sight structures, the unresolved faint sources, and the dust effect, we find that the observed properties of LABs can be reasonably reproduced by our model. In panel (d) of Figure 2, we add the faint sources and apply the dust effect. Compared with the case in panel (c), where no faint sources are added, the isophotal area increases. The central source has a substantially reduced surface brightness because of extinction. There appears to be another source near the central source, which corresponds to a source of lower SFR seen in panel (a) but with lower extinction than the central source. From Figure 2, we see that the overall effect is that dust helps reduce the central surface brightness and faint sources help somewhat enlarge the isophotal area. Fig. 3.— Model predictions under different assumptions along with observed properties of LABs. Top panels show luminosity and size relations and bottom panels cumulative luminosity functions. Panel (a) does not account for the dust effect and contributions from faint galaxies under-resolved in our simulation. Panel (b) includes under-resolved sources. Observations are taken from Matsuda et al. (2004) (open squares) and Matsuda et al. (2011) supplemented with new unpublished data (open circles). Model predictions are shown as red points (top panels) and curves (bottom panels). To test the model and see the effect of different assumptions on extinction and faint sources, we compare the model predictions with the observational properties of LABs, shown in Figure 3. In the top panels, we compare the luminosity-size relation defined by the isophot with surface brightness 1.4 x 10^-18 erg s^-1 cm^-2 arcsec^-2. The observed data points are taken from Matsuda et al. (2004) (open squares) and Matsuda et al. (2011) (open circles), which have been supplemented with new, yet unpublished data (Matsuda 2012, private communications). Note that the isophotal area is defined with FWHM=1'' and 1.4'' images in Matsuda
Our model data points follow Matsuda et al. (2011) in de\ufb01ning the luminosity and size. In the bottom panels of Figure 3, we show the cumulative Ly\u03b1 luminosity function or abundance of LABs. The data points from Matsuda et al. (2004) and Matsuda et al. (2011) (supplemented with new unpublished data; Matsuda 2012, private communications) have a large o\ufb00set (\u223c1 dex at the luminous end) from each other, suggesting large sample variance. The survey volumes of Matsuda et al. (2004) and Matsuda et al. (2011) are 1.3 \u00d7 105Mpc3 \f\u2013 16 \u2013 and 1.6 \u00d7 106Mpc3, respectively. For comparison, the volume of our parent simulation from which we choose our LAB sample is only 3.06 \u00d7 104Mpc3, much smaller than the volume probed by observation. The red points in top panel (a) of Figure 3 come from our model without extinction and faint sources. Compared to the observational data, the model predicts more or less the correct slope in the luminosity-size relation. However, the overall relation has an o\ufb00set, which means that the model either overpredicts the luminosity or underpredicts the size, or both. From the bottom panel (a), the model greatly over-predicts the LAB abundance, showing as a vertical shift. But it can also be interpreted as an overprediction of the LAB luminosity, leading to a horizontal shift, which is more likely. Because the central sources are bright, adding faint sources only slightly changes the sizes, as shown in panel (b), which leads to little improvement in solving the mismatches in the luminosity-size relation and in the abundance. Once the dust extinction e\ufb00ect is introduced, the situation greatly improves. Panel (c) of Figure 3 shows the case with extinction but without adding faint sources. With the extinction included, the luminosity of the predicted LABs drops, and at the same time, the size becomes smaller. Now the model points agree well with observations at the lower end of the range of LAB luminosity (1042.6 \u22121043.3erg/s) and size (15-30 arcsec2), the predicted luminosity-size relation conforms to and extends the observed one to still lower luminosity and smaller size. The predicted abundance is much closer to the observed one, as well. Finally, panel (d) shows the case with both extinction and faint sources included. Adding faint sources helps enlarge the size of an LAB, because faint sources extends the isophot to larger radii. The luminosity also increases by including the contribution from faint sources. As a whole, the model data points appear to slide over the luminosity-size relation towards higher luminosity and larger size. The model luminosity-size relation, although still at the low luminosity end, is fully overlapped with the observed relation. The abundance at the high-luminosity end from the model is within the range probed by observation and shows a similar slope as that in Matsuda et al. (2004). The agreement of the luminosity function between simulations and Matsuda et al. (2004) is largely fortuitous, re\ufb02ecting that the overall bias of our simulation box over the underlying matter happens to be similar to that of the Matsuda et al. (2004) volume over matter, provided that the model universe is a reasonable statistical representation of the real universe. Limited by the simulation volume, we are not able to directly simulate the full range of the observed luminosity and size of LABs. Our model, however, reproduces the luminositysize relation and abundance in the low luminosity end. 
The most important ingredient in our model for achieving such agreement with the observations is the dust extinction, which drives the apparent Lyα luminosity down into the right range. Accounting for the contribution of faint, unresolved sources in the simulation also plays a role in further enhancing the sizes and, to a lesser extent, the luminosities of LABs. To rectify the lack of high-luminosity, large-size LABs in our simulations due to the limited simulation volume, we perform the following exercise. Figure 4 shows the Lyα luminosity and LAB size as a function of halo mass for our model LABs in Figure 3(d). Both quantities correlate with halo mass, but there is a large scatter, which is caused by varying SFRs as well as different environmental effects for halos of a given mass. The largest LABs fall into the range probed by the observational data and they reside in halos above 10^12 M⊙. The model suggests that the vast majority of the observed LABs should reside in proto-clusters whose primary halos have masses above 10^12 M⊙ at z ∼ 3, and that on average larger LABs correspond to more massive halos. Note that the sample of sources with halo mass below 10^12 M⊙ is highly incomplete here. Our results suggest an approximate relation between the halo mass of the central galaxy and the apparent Lyα luminosity of the LAB:

L_Lyα = 10^42.4 (M_h / 10^12 M⊙)^1.15 erg s⁻¹,   (2)

which is shown as the solid curve in the left panel of Figure 4. This relation should provide a self-consistency test of our model once accurate halo masses hosting LABs or the spatial clustering of LABs can be measured and interpreted in the context of the ΛCDM clustering model. We also find the area-halo mass relation

area = 5.0 (M_h / 10^12 M⊙)^1.15 arcsec²,   (3)

which is shown as the solid curve in the right panel of Figure 4. Equations (2) and (3) lead to the following luminosity-size relation:

area = 5.0 (L_Lyα / 10^42.4 erg s⁻¹) arcsec²,   (4)

which matches the observed one; there is nothing new in this, except as a self-consistency check. By extrapolating the above relations (2) and (3) to higher halo mass and using the analytic halo mass function (Jenkins et al. 2001), we can obtain the global Lyα LF expected from our model. In detail, we draw halo masses based on the analytic halo mass function. For each halo, we compute L_Lyα from Equation (2). A scatter in log L_Lyα is added following a Gaussian distribution with a 1σ deviation of 0.28 dex (indicated by the dotted lines in the left panel of Figure 4). Then Equation (4) is used to assign the area, and a Gaussian scatter of 0.11 dex is added to approximately reproduce the scatter seen in the observed luminosity-size relation. The implied scatter in the area-halo mass relation is the sum of the above two scatters in quadrature, i.e., about 0.30 dex, which is indicated by the dotted lines in the right panel of Figure 4. Finally, we adopt the same area cut (>15 arcsec²) used in observations (Matsuda et al. 2011) to define LABs.
Fig. 4.— Dependence of LAB luminosity and size on halo mass from the model. In each panel, the points are from our model LABs in the simulation. The solid and dotted lines show the relation and scatter we use to populate halos drawn from the analytic halo mass function to compute the expected global Lyα LF of LABs. See the text for more details.
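The Monte Carlo procedure just described can be summarized in a short sketch, given below. It is an illustration under stated assumptions rather than the calculation itself: the halo sampler is a placeholder power law standing in for the analytic (Jenkins et al. 2001) halo mass function, the output counts carry an arbitrary normalization (the real normalization comes from the halo number density per unit volume), and the helper names are ours. Equations (2) and (4) are applied with the quoted log-normal scatters of 0.28 dex and 0.11 dex, followed by the >15 arcsec² area cut.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_halo_masses(n, mmin=1e12, alpha=-1.9):
    """Placeholder power-law stand-in for the analytic (Jenkins et al. 2001)
    halo mass function; the real calculation draws halos from that HMF."""
    u = rng.random(n)
    return mmin * (1.0 - u) ** (1.0 / (alpha + 1.0))

def lab_luminosity(mh):
    """Equation (2): L_Lya = 10^42.4 (Mh/1e12 Msun)^1.15 erg/s, plus 0.28 dex scatter."""
    logl = 42.4 + 1.15 * np.log10(mh / 1e12) + rng.normal(0.0, 0.28, size=mh.size)
    return 10.0 ** logl

def lab_area(l_lya):
    """Equation (4): area = 5.0 (L_Lya/10^42.4 erg/s) arcsec^2, plus 0.11 dex scatter."""
    log_area = np.log10(5.0 * l_lya / 10 ** 42.4) + rng.normal(0.0, 0.11, size=l_lya.size)
    return 10.0 ** log_area

mh = sample_halo_masses(200_000)
l_lya = lab_luminosity(mh)
area = lab_area(l_lya)
is_lab = area > 15.0            # same isophotal-area cut as Matsuda et al. (2011)

# Shape of the cumulative luminosity function (arbitrary normalization here).
bins = np.logspace(42.5, 44.5, 21)
for b in bins:
    print(f"L > {b:.2e} erg/s : {(l_lya[is_lab] > b).sum():6d} mock LABs")
```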
Our computed global Ly\u03b1 LF of LABs is shown as the blue curve in the bottom panel (d). The agreement between our predicted global LF and that from the larger-survey-volume observations of Matsuda et al. (2011) is striking. Given still substantial uncertainties involved in our model assumptions, the precise agreement is not to be overstated. However, the fact that the relative displacement between LF from our simulated volume and global LF is in agreement with that between Matsuda et al. (2004) and Matsuda et al. (2011) is quite encouraging, recalling that we have no freedom to adjust any cosmological parameters. This is also indicative of the survey volume of Matsuda et al. (2011) having becoming a fair sample of the universe for LABs in question. The blue dots in top panel (d) show that the predicted luminosity-area relation is simultaneously in agreement with observations, now over the entire luminosity and size range, suggesting that our derived relations in Equations (2), (3), and (4) are statistically applicable to LABs of luminosities higher than those probed by the current simulations. 4." + }, + { + "url": "http://arxiv.org/abs/1112.4527v2", + "title": "Coincidences between OVI and OVII Lines: Insights from High Resolution Simulations of the Warm-Hot Intergalactic Medium", + "abstract": "With high resolution (0.46kpc/h), adaptive mesh-refinement Eulerian\ncosmological hydrodynamic simulations we compute properties of O VI and O VII\nabsorbers from the warm-hot intergalactic medium (WHIM). Our new simulations\nare in broad agreement with previous simulations, with ~40% of the\nintergalactic medium being in the WHIM at z=0. It is found (1) The amount of\ngas in the WHIM at temperature below and above 10^6K is about equal within\nuncertainties. (1) Our simulations are in excellent agreement with observed\nproperties of O VI absorbers, with respect to the line incidence rate and\nDoppler width-column density relation. (2) Velocity structures within absorbing\nregions are a significant, and for large Doppler width clouds, a dominant\ncontributor to the Doppler widths of both O VI and O VII absorbers. A\nnon-negligible fraction (in number and mass) of O VI and O VII clouds can arise\nfrom gas of temperature lower than 10^5, until the Doppler width is well in\nexcess of 100km/s. (3) Strong O VI absorbers are predominantly collisionally\nionized. About (61%, 57%, 39%) of O VI absorbers in the column density ranges\nof log N(OVI) cm^2=(12.5-13,13-14,>14) have temperature lower than 10^5K. (4)\nQuantitative prediction is made for the presence of broad and shallow O VI\nlines, which current observations may have largely missed. Upcoming\nobservations by COS may be able to provide a test. (5) The reported 3 sigma\nupper limit on the mean column density of coincidental O VII lines at the\nlocation of detected O VI lines by Yao et al is above the predicted value by a\nfactor of 2.5-4. (6) The claimed observational detection of O VII lines by\nNicastro et al, if true, is 2 sigma above what our simulations predict.", + "authors": "Renyue Cen", + "published": "2011-12-19", + "updated": "2012-05-14", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction Physical understanding of the thermodynamic evolution of the intergalactic medium (IGM) has been substantially improved with the aid of ab initio cosmological hydrodynamic simulations. 
One of the most robust predictions is that 40\u221250% of all baryons in the present universe is in the WHIM of temperature 105 \u2212107K and overdensity 10 \u2212300 (e.g., Cen & 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1112.4527v2 [astro-ph.CO] 14 May 2012 \f\u2013 2 \u2013 Ostriker 1999; Dav\u00b4 e et al. 2001). The predicted WHIM provides an attractive solution to the long standing missing baryons problem (Persic & Salucci 1992; Fukugita et al. 1998). Let us \ufb01rst clarify the nomenclature of several related gas phases. The intra-group and intra-cluster medium (ICM) is de\ufb01ned to be gas within these virialized regions (i.e., overdensity > 100). The high density portion (overdensity \u2265500) of the ICM has traditionally been detected in X-ray emission; thermal Sunyaev-Zeldovich e\ufb00ect and more sensitive X-ray measurements can now probe ICM to about the virial radius. The circumgalactic medium (CGM) is usually de\ufb01ned to be gas that embeds the stellar components in galactic halos and may be made up of gases of a wide range of temperatures (104 \u2212107K) and densities. It is likely, at least for large galaxies, that a signi\ufb01cant fraction of the CGM falls into the same temperature range of the WHIM. Of particular interest is some of the CGM that has been heated up by star formation feedback shocks to the WHIM temperature range (e.g., Cen & Ostriker 2006; Cen & Chisari 2011). In the present analysis we de\ufb01ne WHIM as gas of temperature 105 \u2212107K with no density limits. Most of the WHIM gas is truly intergalactic with overdensity < 100 (see Figure 7) and mostly easily probed in absorption. The reality of the WHIM, at least its low temperature (T \u2264106K) portion, has now been fairly convincingly con\ufb01rmed by a number of observations in the FUV portion of QSO spectra from HST and FUSE, through the O VI \u03bb\u03bb1032, 1038 absorption lines that peak at T \u223c3\u00d7105K when collisionally ionized (e.g., Tripp et al. 2000; Tripp & Savage 2000; Oegerle et al. 2000; Savage et al. 2002; Prochaska et al. 2004; Sembach et al. 2004; Danforth & Shull 2005; Danforth et al. 2006; Danforth & Shull 2008; Tripp et al. 2008; Thom & Chen 2008a,b; Cooksey et al. 2008) and Ne VIII \u03bb\u03bb770, 780 absorption lines that peak at T \u223c7 \u00d7 105K in collisional ionization equilibrium (Savage et al. 2005, 2006; Narayanan et al. 2009, 2011; Tripp et al. 2011) as well as broad Ly\u03b1 absorption lines (BLAs) (Danforth et al. 2010; Savage et al. 2011a,b). In agreement with simulations, the part of WHIM detected in O VI absorption is estimated to constitute about 20-30% of total WHIM. The detection of Ne VIII lines along at least some of the sight lines with O VI detection provides unambiguous evidence for the WHIM origin, instead of lower temperature, photoionized gas, under physically plausible and observationally constrained situations. X-ray observations performed to search for X-ray absorption of the higher temperature portion (T \u2265106K) of the WHIM associated with known massive clusters have also been successful. An XMM-Newton RGS spectrum of quasar LBQS 1228+1116 revealed a feature at the Virgo redshifted position of O VIII Ly\u03b1 at the 95% con\ufb01dence level (Fujimoto et al. 2004). Using XMM-Newton RGS observations of an AGN behind the Coma Cluster, the Seyfert 1 X Comae, Takei et al. (2007) claimed to have detected WHIM associated with the Coma cluster. Through the Sculptor Wall Buote et al. 
(2009) and Fang et al. (2010) have detected WHIM O VII absorption at a column greater than 1016cm\u22122. There is evidence of detection in soft X-ray emission along the \ufb01lament connecting clusters A222 and A223 at z = 0.21 that may be associated with the dense and hot portion of the WHIM (Werner et al. \f\u2013 3 \u2013 2008). However, the search for X-ray absorption of WHIM along random lines of sight turns out to be elusive. Early pioneering observations (Fang et al. 2001, 2002, S5 0836+710, PKS 2149-306, PKS 2155-304) gave the \ufb01rst O VII detection (O VIII for PKS 2155-304), which has not been convincingly con\ufb01rmed subsequently (Cagnoni et al. 2004; Williams et al. 2007; Fang et al. 2007). Mathur et al. (2003) performed a dedicated deep observation (470 ks) with the Chandra LETGS of the quasar H 1821+643, which has several con\ufb01rmed intervening O V I absorbers, but found no signi\ufb01cant (>> 2\u03c3) X-ray absorption lines at the redshifts of the O V I systems. Nicastro et al. (2005a,b) embarked on a campaign to observe Mrk 421 during its periodic X-ray outbursts with the Chandra LETGS with a total of more than 7 million continuum counts and presented evidence for the detection of two intervening absorption systems at z = 0.011 and z = 0.027. But the spectrum of the same source observed with the XMM-Newton RGS did not show these absorption lines (Rasmussen et al. 2007), despite higher signal-to-noise and comparable spectral resolution. Kaastra et al. (2006) and Yao et al. (2012) reanalyzed the Chandra LETGS data and were in agreement with Rasmussen et al. (2007). The detection of O VII lines may also be at odds with recent BLA measurements (Danforth et al. 2011), under simplistic assumptions about the nature of the absorbing medium. However, it has been argued that the reported XMM-Newton upper limits and the Chandra measurements may be consistent with one another, when taking into consideration certain instrumental characteristics of the XMM-Newton GRS (Williams et al. 2006). Moreover, an analysis of the two candidate X-ray absorbers at z = 0.011 and z = 0.027 yields intriguing evidence of two large-scale \ufb01laments at the respective redshifts, one of which has only 5 \u221210% probability of occurring by chance (Williams et al. 2010). Observations of 1ES 1028+511 at z = 0.361 by Steenbrugge et al. (2006) yield no convincing evidence for X-ray WHIM absorption. What is perceived to be more disconcerting is the lack of detection of O VII absorbers at the redshifts of detected O VI absorbers along some random lines of sight. This is because, overall, the O VII line is predicted to be the most abundant and anecdotal evidence suggests substantial coincidence between O VI and O VII (e.g., Cen & Fang 2006). A statistically signi\ufb01cant upper limit placed on the mean column density of O VII absorbers at the locations of a sizeable set of detected O VI absorbers using stacking techniques by Yao et al. (2009) prompts them to call into question the very existence of the high temperature (T \u2265106K) portion of the WHIM, although the limited sensitivity and spectral resolution of the current X-ray observations may render any such conclusions less than de\ufb01nite. Therefore, at this juncture, it is pressing to statistically address this lack of signi\ufb01cant coincidence between O VI and O VII absorbers and other issues theoretically, through higher resolution simulations that are necessary in order to well resolve the interfaces of multi-phase media. 
This is the primary purpose of this paper. We use two simulations of high resolution \f\u2013 4 \u2013 of 0.46h\u22121kpc and box size of 20\u221230h\u22121Mpc to perform much more detailed characterization of O VI and O VII lines to properly compare to extant observations. This high resolution is to be compared with 83h\u22121kpc resolution in our previous simulations (Cen & Ostriker 2006; Cen & Fang 2006), 25 \u221249h\u22121kpc resolution in Smith et al. (2011) and Shull et al. (2011), 1.25 \u22122.5h\u22121kpc in Oppenheimer et al. (2012) and 1.25 \u22122.5h\u22121kpc in Tepper-Garc\u00b4 \u0131a et al. (2011), resolves the Jeans scale of WHIM by 2-3 orders of magnitude and interfaces between gas phases of di\ufb00erent temperatures in a multi-phase medium. It is useful to distinguish, in the case of SPH simulations, between the gravity force resolution and the resolution of the hydrodynamics solver, with the latter being worse than the former by a factor of order a few. It is also useful to keep in mind the initial cell size or interparticle separation, because in both SPH and adaptive mesh re\ufb01nement (AMR) simulations not all regions are resolved by the maximum resolution. Calling this \u201cmean region resolution\u201d \u2206root, \u2206root = (117, 25\u221249, 125, 195)h\u22121kpc for [this paper, Smith et al. (2011), Oppenheimer et al. (2012), Tepper-Garc\u00b4 \u0131a et al. (2011)]. We note that a region of overdensity \u03b4 is approximately resolved at a resolution of C\u2206root\u03b4\u22121/3 (up to a pre-speci\ufb01ed highest resolution), where the pre-factor C is about unity for AMR simulations and \u223c2 for SPH simulations. Using Lagrangian SPH or AMR approaches becomes necessary for regions \u03b4 \u2265300, because simulations of a similar resolution with the uni-grid method become increasingly impractical (largely due to limitations of computer memory). A more important advantage with very high resolution simulations has to do with the need to resolve galaxies, which in turn allows for a more selfconsistent treatment of the feedback processes from star formation, namely, the temporal and spatial distribution of metals and energy deposition rates to the CGM and IGM and their e\ufb00ects on subsequent star formation. Our new simulations, in agreement with earlier \ufb01ndings, rea\ufb03rm quantitatively the existence of WHIM and furthermore show that the properties of the WHIM with respect to O VI line and O VI-O VII relations are fully consistent with observations. In particular, the observed upper limit of the mean coincidental O VII column density of detected O VI absorbers is higher than what is predicted by the simulations by a factor of \u223c2.5 \u22124. Higher sensitivity X-ray observations or a larger sample by a factor of \u223c10 should test this prediction de\ufb01nitively. The outline of this paper is as follows. In \u00a72.1 we detail simulation parameters and hydrodynamics code, followed by a description of our method of making synthetic O VI and O VII spectra in \u00a72.2, which is followed by a description of how we average the two separate simulations C (cluster) and V (void) run in \u00a72.3. Results are presented in \u00a73. In \u00a73.1 we present some observables for O VI to compare to observations to provide additional validation of the simulations. In \u00a73.2 we dissect the simulations to provide a physical analysis of the O VI and O VII absorbers. In \u00a73.3 results on the coincidences between O VI and O VII lines are given. 
Conclusions are summarized in \u00a74. \f\u2013 5 \u2013 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the AMR Eulerian hydro code, Enzo (Bryan 1999; Bryan & Norman 1999; O\u2019Shea et al. 2005; Joung et al. 2009). First we ran a low resolution simulation with a periodic box of 120 h\u22121Mpc on a side. We identi\ufb01ed two regions separately, one centered on a cluster of mass of \u223c2 \u00d7 1014 M\u2299and the other centered on a void region at z = 0. We then resimulate each of the two regions separately with high resolution, but embedded in the outer 120h\u22121Mpc box to properly take into account largescale tidal \ufb01eld and appropriate boundary conditions at the surface of the re\ufb01ned region. We name the simulation centered on the cluster \u201cC\u201d run and the one centered on the void \u201cV\u201d run. The re\ufb01ned region for \u201cC\u201d run has a size of 21\u00d724\u00d720h\u22123Mpc3 and that for \u201cV\u201d run is 31\u00d731\u00d735h\u22123Mpc3. At their respective volumes, they represent 1.8\u03c3 and \u22121.0\u03c3 \ufb02uctuations. The root grid has a size of 1283 with 1283 dark matter particles. The initial static grids in the two re\ufb01ned boxes correspond to a 10243 grid on the outer box. The initial number of dark matter particles in the two re\ufb01ned boxes correspond to 10243 particles on the outer box. This translates to initial condition in the re\ufb01ned region having a mean interparticle-separation of 117h\u22121kpc comoving and dark matter particle mass of 1.07 \u00d7 108h\u22121 M\u2299. The re\ufb01ned region is surrounded by two layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle mass 83 times that in the re\ufb01ned region. The initial density \ufb02uctuations are included up to the Nyquist frequency in the re\ufb01ned region. The surrounding volume outside the re\ufb01ned region is aso followed hydrodynamically, which is important in order to properly capture matter and energy exchanges at the boundaries of the re\ufb01ned region. Because we still can not run a very large volume simulation with adequate resolution and physics, we choose these two runs of moderate volumes to represent two opposite environments that possibly bracket the average. We choose the mesh re\ufb01nement criterion such that the resolution is always better than 460h\u22121pc physical, corresponding to a maximum mesh re\ufb01nement level of 11 at z = 0. The simulations include a metagalactic UV background (Haardt & Madau 2012), and a model for shielding of UV radiation by neutral hydrogen (Cen et al. 2005). The simulations also include metallicity-dependent radiative cooling and heating (Cen et al. 1995). We clarify that our group has included metal cooling and metal heating (due to photoionization of metals) in all our studies since Cen et al. (1995), contrary to some claims (e.g., Wiersma et al. 2009; Tepper-Garc\u00b4 \u0131a et al. 2011). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c106 M\u2299. \f\u2013 6 \u2013 Supernova feedback from star formation is modeled following Cen et al. (2005). 
Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the speci\ufb01c volume of each cell (i.e., weighting is equal to the inverse of density), which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). We allow the whole feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating, as in nature. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported into right directions in a physically sound (albeit still approximate at the current resolution) way, at least in a statistical sense. The primary advantages of this supernova energy based feedback mechanism are threefold. First, nature does drive winds in this way and energy input is realistic. Second, it has only one free parameter eSN, namely, the fraction of the rest mass energy of stars formed that is deposited as thermal energy on the cell scale at the location of supernovae. Third, the processes are treated physically, obeying their respective conservation laws (where they apply), allowing transport of metals, mass, energy and momentum to be treated selfconsistently and taking into account relevant heating/cooling processes at all times. We use eSN = 1\u00d710\u22125 in these simulations. The total amount of explosion kinetic energy from Type II supernovae with a Chabrier IMF translates to eSN = 6.6 \u00d7 10\u22126. Observations of local starburst galaxies indicate that nearly all of the star formation produced kinetic energy (due to Type II supernovae) is used to power galactic superwinds (e.g., Heckman 2001). Given the uncertainties on the evolution of IMF with redshift (i.e., possibly more top heavy at higher redshift) and the fact that newly discovered prompt Type I supernovae contribute a comparable amount of energy compared to Type II supernovae, it seems that our adopted value for eSN is consistent with observations and within physical plausibility. Test of the success of this feedback approach comes empirically. As we show in Cen (2012), the metal distribution in and around galaxies over a wide range of redshift is in good agreement with respect to the properties of observed damped Ly\u03b1 systems; this is a non-trivial success and provides strong validation of the simulations. We will provide additional validation of the simulations in \u00a73.1. To better understand di\ufb00erences in results between AMR and SPH simulations that we will discuss later, we note here that the evolution of metals in the two types of simulations is treated rather di\ufb00erently. In AMR simulations metals are followed hydrodynamically by solving the metal density continuity equation with sources (from star formation feedback) and sinks (due to subsequent star formation), whereas in SPH simulations of WHIM one does not separately solve the metal density continuity equation. Thus, metal mixing and di\ufb00usion through advection, turbulence and other hydrodynamic processes are properly captured in AMR simulations. While some SPH simulations have implemented metal di\ufb00usion schemes \f\u2013 7 \u2013 Fig. 
1.\u2014 Top-left and bottom-left panels show the gas density and density-weighted temperature projection of a portion of the re\ufb01nement box of the C run of size (18h\u22121Mpc)3. Top-right and bottom-right panels show the gas density and density-weighted temperature projection of a portion of the re\ufb01nement box of the V run of size (30h\u22121Mpc)3. that are motivated by some subgrid turbulence model as a remedy parameterized to roughly match results from hydrodynamic simulations (e.g., Shen et al. 2010), most SPH simulations of WHIM obtain gas metallicities based on kernel-smoothed metal masses of feedback SPH particles that are assigned at birth and un-evolved (e.g., Tepper-Garc\u00b4 \u0131a et al. 2011; Oppenheimer & Dav\u00b4 e 2009; Oppenheimer et al. 2012). In the simulations of Oppenheimer et al. (2012) \u201cfeedback\u201d SPH particles with initially given metal masses are launched (in random directions) to be transported ballistically to su\ufb03ciently large distance (\u223c10kpc), after allowance for some period of hydrodynamic de-coupling between the feedback SPH particles and other neighboring SPH particles. Once some of the feedback parameters are \ufb01xed, this approach produces de\ufb01nitive predictions with respect to various aspects of stellar and IGM \f\u2013 8 \u2013 metallicity and others (e.g., Springel & Hernquist 2003; Oppenheimer & Dav\u00b4 e 2009; Tornatore et al. 2010; Dav\u00b4 e et al. 2011b,a; Oppenheimer et al. 2012). It is likely that mixing of metals on small to intermediate scales (\u223c1\u2212100kpc) in SPH simulations (e.g., Oppenheimer et al. 2012; Tepper-Garc\u00b4 \u0131a et al. 2011) is probably substantially underestimated. This signi\ufb01cant di\ufb00erence in the treatment of metal evolution may have contributed, in a large part, to some discrepancies between SPH and AMR hydrodynamic simulations, as we will discuss later. We use the following cosmological parameters that are consistent with the WMAP7normalized (Komatsu et al. 2010) LCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. Figure 1 shows the density and temperature \ufb01elds of the two simulations. The environmental contrast between the two simulations is evident. We also note that there is substantial overlap visually between the two simulations in that both cover the \u201c\ufb01eld\u201d environment, which we have shown quantitatively in Tonnesen & Cen (2011). In other words, these two simulations cover two extreme environments voids and clusters with substantial overlap of intermediate environment that facilitates possible averaging of some computed quantities, with proper normalizations by independent observational constraints. 2.2. Generation of Synthetic O VI and O VII Absorption Lines The photoionization code CLOUDY (Ferland et al. 1998) is used post-simulation to compute the abundance of O VI and O VII, adopting the shape of the UV background calculated by Haardt & Madau (2012) normalized by the intensity at 1 Ryd determined by Shull et al. (1999) and assuming ionization equilibrium. We generate synthetic absorption spectra using a code similar to that used in our earlier papers (e.g., Cen et al. 1994, 2001; Cen & Fang 2006), given the density, temperature, metallicity and velocity \ufb01elds from simulations. 
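As a concrete illustration of this step, the sketch below deposits a Gaussian optical-depth profile for each cell along a sightline onto a velocity grid and returns the normalized flux. It is a minimal stand-in for the actual code: the O VI 1031.93 Å atomic data are approximate, only thermal broadening is applied within each cell (bulk motions enter through the cell line-of-sight velocities), and the toy cell values at the bottom are invented purely to make the example runnable; in the real pipeline the ion densities come from CLOUDY applied to the simulation fields.

```python
import numpy as np

# Approximate atomic data for O VI 1031.93 A.
F_OSC    = 0.133                      # oscillator strength
LAMBDA0  = 1031.93e-8                 # rest wavelength [cm]
M_OXYGEN = 16.0 * 1.6726e-24          # oxygen atomic mass [g]
K_B      = 1.3807e-16                 # Boltzmann constant [erg/K]
# pi e^2 / (m_e c) = 2.654e-2 cm^2/s; multiplied by f and lambda0 it gives
# the velocity-space line strength used below.
SIGMA_V  = 2.654e-2 * F_OSC * LAMBDA0

def synthesize_flux(v_grid, n_ovi, temp, v_los, dl):
    """Normalized flux on v_grid [cm/s] from cells with O VI number density
    n_ovi [cm^-3], temperature temp [K], line-of-sight velocity v_los
    (Hubble plus peculiar) [cm/s], and path length dl [cm] per cell."""
    tau = np.zeros_like(v_grid)
    for n_i, t_i, v_i in zip(n_ovi, temp, v_los):
        b_i = np.sqrt(2.0 * K_B * t_i / M_OXYGEN)      # thermal Doppler width
        column = n_i * dl                              # O VI column of this cell
        tau += (SIGMA_V * column / (np.sqrt(np.pi) * b_i)
                * np.exp(-((v_grid - v_i) / b_i) ** 2))
    return np.exp(-tau)

# Toy three-cell sightline with invented, order-of-magnitude plausible values.
v_grid = np.linspace(0.0, 600e5, 2048)                 # 0-600 km/s
flux = synthesize_flux(
    v_grid,
    n_ovi=np.array([2e-11, 8e-11, 4e-11]),             # cm^-3
    temp=np.array([3e5, 2e5, 8e5]),                    # K
    v_los=np.array([150e5, 300e5, 320e5]),             # cm/s
    dl=100.0 * 3.086e21)                               # 100 kpc per cell
print(f"deepest flux decrement along the toy sightline: {1.0 - flux.min():.2f}")
```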
Each absorption line is identi\ufb01ed by the interval between one downward and the next upward crossing in the synthetic \ufb02ux spectrum without noise at a \ufb02ux equal to 0.99 (\ufb02ux equal to unity corresponds to an unabsorbed continuum). Since the absorption lines in question are sparsely distributed in velocity space, their identi\ufb01cations have no signi\ufb01cant ambiguity. Column density, equivalent width, Doppler width, mean column density weighted velocity and physical space locations, mean column density weighted temperature, density and metallicity are computed for each line. We sample the C and V run, respectively, with 72, 000 and 168, 000 random lines of sight at z = 0, with a total pathlength of \u2206z \u223c2000. While a detailed Voigt pro\ufb01le \ufb01tting of the \ufb02ux spectrum would have enabled closer comparisons with observations, simulations suggest that such an exercise does not necessarily provide a more clarifying physical understanding of \f\u2013 9 \u2013 200 400 600 0 0.2 0.4 0.6 0.8 1 v (km/s) flux 200 400 600 \u22121 0 1 2 v (km/s) log overdensity 200 400 600 \u2212400 \u2212300 \u2212200 \u2212100 0 100 v (km/s) vp (km/s) 200 400 600 3 4 5 6 7 v (km/s) log T (K) 0 200 400 600 800 \u22123 \u22122 \u22121 0 v (km/s) [Z/H] 200 400 600 0 0.2 0.4 0.6 0.8 1 v (km/s) flux 200 400 600 \u22121 0 1 v (km/s) log overdensity 200 400 600 \u2212300 \u2212200 \u2212100 0 100 200 300 v (km/s) vp (km/s) 200 400 600 3 4 5 6 7 v (km/s) log T (K) 0 200 400 600 800 \u22123 \u22122 \u22121 0 v (km/s) [Z/H] Fig. 2.\u2014 shows \ufb02ux spectra of two separate O VI lines and physical conditions. The left and right cases have column densities of log N(OVI)cm2 = 14.48 and 14.30, respectively. The \ufb01ve panels from top to bottom are: \ufb02ux, gas overdensity, proper peculiar velocity, temperature and metallicity in solar units. While the x-axis for the top panel is the Hubble velocity, the x-axis for the bottom four panels is physical distance that is multiplied by the Hubble constant. the absorber properties, because bulk velocities are very important (see Figure 6 below) and velocity substructures within an absorber do not necessarily correspond to separate physical entities. A small number of simulated spectra may not serve to illustrate the extreme rich and complex physics involved. It may even be misleading in the sense that any statistical conclusions drawn based on anecdotal evidence could be substantially wrong. Thus, we will present two absorption spectrum segments merely only for the purpose of illustration. Figure 2 shows two O VI lines and their associated physical environment. 2.3. Averaging C and V Runs The C and V runs at z = 0 are used to obtain an \u201caverage\u201d of the universe. This cannot be done precisely without much larger simulation volumes, which is presently not feasible. Nevertheless, it is still possible to obtain an approximate average. Since the WHIM is mostly closely associated with groups and clusters of galaxies, we will use X-ray clusters \f\u2013 10 \u2013 3 4 5 6 7 8 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 log T (K) CDF(>T) All IGM at z=0 Only WHIM with log T=5\u22127 at z=0 Fig. 3.\u2014 shows the cumulative probability distribution function (CDF) of the IGM at z = 0 as a function of gas temperature (black dashed curve) and that of the WHIM only in the temperature range T = 105 \u2212107K (red solid curve); stars are not included. as an appropriate \u201cnormalization\u201d anchor point. 
We normalize averaging weightings of the C and V runs by requiring that the fraction of hot gas with temperature T \u2265107K is consistent with the observed value of \u223c15% of baryons at z = 0 (Bahcall 2011). Note that small variations on the adopted X-ray gas fraction do not cause large changes in most of the results. For comparative measures such as the coincidence rates between O VI and O VII absorbers, the dependence on the normalization procedure is still weaker. The results are shown in Figure 3, which shows the temperature distribution of entire IGM and WHIM at z = 0. In agreement with previous simulations (e.g., Cen & Ostriker 1999; Dav\u00b4 e et al. 2001; Cen & Ostriker 2006), we \ufb01nd that \u223c40% of the IGM at z = 0 is in WHIM. This is compared to 35-40% in Smith et al. (2011), 24% in Dav\u00b4 e et al. (2010) (limited to overdensities outside halos), 40% in Shen et al. (2010) and 40% and 50% in Tornatore et al. (2010) in their wind and black hole feedback models, respectievely. In simulations of Cen et al. (1995); Cen & Ostriker (1999); Cen et al. (2001); Cen & Ostriker (2006); Cen & Chisari (2011), Wiersma et al. (2009); Tepper-Garc\u00b4 \u0131a et al. (2011), Shen et al. (2010) and Shull et al. (2011), in additional to radiative processes of a primordial gas, both metal cooling (due to collisional excitation and recombination) and metal heating (due to photo-ionization heating of metal species) in the presence of UV-X-ray background are included, whereas in Oppenheimer & Dav\u00b4 e (2009), Oppenheimer et al. (2012) and Tornatore et al. (2010) only metal cooling is included. Tepper-Garc\u00b4 \u0131a et al. (2011) suggest that the relatively overall low fraction of \f\u2013 11 \u2013 WHIM in the latter (25%) versus higher fraction in the former (35-50%) may be accounted by the di\ufb00erence in the treatment of metal heating; we concur with their explanation for at least part of the di\ufb00erence. All these simulations have a box size of \u223c50h\u22121Mpc, which still su\ufb00ers from signi\ufb01cant cosmic variance: Dav\u00b4 e et al. (2001) show that WHIM fraction increases from 37% to 42% from a box size of 50h\u22121Mpc to 100h\u22121Mpc in two Eulerian simulations. The amplitude of power spectrum has a similar e\ufb00ect and may be able to, at least in part, account for some of the di\ufb00erences among the simulations; \u03c38 = (0.82, 0.82, 0.74, 0.77, 0.80, 0.82) in [this work, Shull et al. (2011), Tepper-Garc\u00b4 \u0131a et al. (2011), Shen et al. (2010), Tornatore et al. (2010), Oppenheimer et al. (2012)]. Gravitational collapse of longer waves powers heating of the IGM at later times. We suggest that the peak of WHIM fraction at z \u223c0.5 found in the 25h\u22121Mpc simulation boxes in Smith et al. (2011) is because of the small box size; in other words, available, reduced gravitational heat input in the absence of breaking density waves of lengths longer than 25h\u22121Mpc at z \u22640.5 fails to balance the cooling due to (primarily) universal expansion and (in part) radiative cooling. This explanation is supported by the behavior of their simulation boxes of size 50\u22121Mpc at low redshift (z \u22640.5). Figure 3 shows that within the WHIM temperature range, roughly equal amounts are at T = 105 \u2212106K and T = 106 \u2212107K. 3. Results 3.1. 
Simulation Validation with Properties of O VI Absorbers The present simulations have been shown to produce the metal distribution in and around galaxies over a wide range of redshift (z = 0 \u22124) that is in good agreement with respect to the properties of observed damped Ly\u03b1 systems (Cen 2012). Here we provide additional, more pertinent validation with respect to O VI absorbers in the IGM at z = 0. The top panel of Figure 4 shows a scatter plot of simulated O VI absorbers (red pluses) in the Doppler width (b)-O VI column density [N(OVI)] plane, compared to observations. The agreement is excellent in that the observed O VI absorbers occupy a region that overlaps with the simulated one. It is intriguing to note that the simulations predict a large number of large b, low N(OVI) (i.e., broad and shallow) absorbers in the region b > 31(N(OVI)/1014cm\u22122)0.4km/s, corresponding to the upper left corner to the green dashed line, where there is no observed O VI absorber. This green dashed line, however, has no physical meaning to the best of our knowledge. The blue solid line of unity logarithmic slope has a clear physical origin, which is a requirement for the decrement at the \ufb02ux trough of the weaker of the O VI doublet to be 4%: b = 25(N(OVI)/1013cm\u22122) km/s. Current observational data are heterogeneous with varying qualities. Thus, the blue solid line is a much simpli\ufb01ed characterization of the complex situation. Nevertheless, one could understand the desert of observed O VI absorbers in the upper left corner to the blue solid line, thanks to the di\ufb03culty of identifying broad \f\u2013 12 \u2013 and shallow lines in existing observations. We attribute the \u201cmissing\u201d observed O VI lines in the upper right corner between the blue solid line and the green dashed line, in part, to the observational procedure of Voigt pro\ufb01le \ufb01tting that may break up some large b lines into separate, narrower components, whereas no such procedure is performed in the presented simulation results. Ongoing and upcoming observations by the Cosmic Origins Spectrograph (COS) (e.g., Froning & Green 2009; Shull 2009; Green et al. 2012) will be able to substantially improve in sensitivity and thus likely be able to detect a sizeable number of O VI lines in the upper left corner to the blue solid line. Quantitative distribution functions of b parameter will be shown in Figure 10 later, for which COS may provide a strong test. The bottom panel of Figure 4 shows a scatter plot of simulated O VII absorbers (red pluses); because there is no data to compare to, we only note that the positive correlation between b and N(OVI) is stronger for O VII lines than for O VI lines, in part due to less important contribution to the O VII lines from photoionization and in part due to positive correlation between density and velocity dispersion. Figure 5 shows O VI line density as a function of column density. The agreement between simulations and observations of Danforth & Shull (2008) is excellent over the entire column density, N(OVI) \u223c1013 \u22121015cm\u22122, where comparison can be made. The simulation results are up to a factor of \u223c2 below the observational results of Tripp et al. (2008) in the column density range N(OVI) \u223c1013.7 \u22121014.5cm\u22122. 
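As a numerical cross-check of the 4% criterion behind the blue solid line in Figure 4, the sketch below computes the flux decrement at the trough of the weaker doublet member, O VI λ1037.6, for a single Gaussian component of column density N and Doppler width b. It is an independent illustration with approximate atomic data, not part of the analysis pipeline; at N = 10^13 cm⁻² and b = 25 km/s it returns a decrement of about 4%, recovering the quoted relation b = 25 (N(OVI)/10^13 cm⁻²) km/s.

```python
import math

# Approximate atomic data for the weaker O VI doublet member at 1037.62 A.
F_1038    = 0.0661              # oscillator strength
LAMBDA0   = 1037.62e-8          # rest wavelength [cm]
PI_E2_MEC = 2.654e-2            # pi e^2 / (m_e c) in cgs [cm^2/s]

def trough_decrement(n_ovi, b_kms):
    """Flux decrement 1 - exp(-tau_0) at line centre for a single Gaussian
    component with O VI column density n_ovi [cm^-2] and Doppler width b [km/s]."""
    b = b_kms * 1.0e5           # km/s -> cm/s
    tau0 = PI_E2_MEC * F_1038 * LAMBDA0 * n_ovi / (math.sqrt(math.pi) * b)
    return 1.0 - math.exp(-tau0)

for n, b in [(1e13, 25.0), (1e13, 50.0), (1e14, 250.0)]:
    print(f"N(OVI) = {n:.0e} cm^-2, b = {b:5.1f} km/s : "
          f"trough decrement = {100.0 * trough_decrement(n, b):.1f}%")
# The decrement scales as N/b, so the 4% detectability boundary traces the
# line b = 25 (N/1e13 cm^-2) km/s noted in the text.
```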
Some of the disagrement is due to di\ufb00erent treatments in de\ufb01ning lines in that we do not perform Voigt pro\ufb01le \ufb01tting thus deblending of non-gaussian pro\ufb01les into multiple components, where the observational groups do and di\ufb00erent groups often impose di\ufb00erent, subjective criteria of choosing the \u201cgoodness\u201d of the \ufb01t. The down turn of line density towards lower column densities from N(OVI) \u223c1013.9cm\u22122 from Tripp et al. (2008) as well as the lower values in the column density range N(OVI) \u223c1013.2 \u22121013.7cm\u22122 of Danforth & Shull (2008) may be related to the \u201cmissing\u201d broad and shallow lines, as indicated in the top panel of Figure 4. It is noted that the observed line density at N(OVI) \u223c1013cm\u22122 of the Danforth & Shull (2008) data displays an upturn and lies on top of the simulated curve. Closer examination reveals that this is due to the presence of two relatively broad absorbers at N(OVI) \u223c1013cm\u22122 and b \u223c30 km/s. We expect that the upcoming COS observations will substantially raise the line density at N(OVI) \u22641013.5cm\u22122. \f\u2013 13 \u2013 12 13 14 15 0.5 1 1.5 2 log N(OVI) (cm\u22122) log b (km/s) simulation obs: Danforth et al 2008 obs: Tripp et al 2008 13 14 15 16 0.5 1 1.5 2 log N(OVII) (cm\u22122) log b (km/s) Fig. 4.\u2014 Top panel shows a scatter plot of simulated O VI absorbers (red pluses) in the b-N(OVI) plane. Also shown as black dots and blue triangles are the observations from Danforth & Shull (2008) and Tripp et al. (2008), respectively. The green dashed line of slope 4/10 is only intended to guide the eye to suggest that there appears to be a desert of observed O VI absorbers in the upper left corner. The blue solid line of unity logarithmic slope is a requirement for the decrement at the \ufb02ux trough of the weaker of the O VI doublet to be 4%: b = 25(N(OVI)/1013cm\u22122) km/s. Bottom panel shows the same for the O VII absorbers. \f\u2013 14 \u2013 10 12 10 13 10 14 10 15 10 \u22121 10 0 10 1 10 2 N(OVI) (cm\u22122) dn/dz/per unit log N(OVI) total T>105K T<105K obs: Danforth & Shull 2008 obs: Tripp et al. 2008 Fig. 5.\u2014 shows the O VI line density as a function of column density, de\ufb01ned to be the number of lines per unit redshift per unit logarithmic interval of the column density. The red solid dots, green squares and blue triangles are the total, collisionally ionized and photoionized absorbers, respectively. Also shown as black open circles and stars are the observations from Danforth & Shull (2008) and Tripp et al. (2008), respectively. These results show that our simulation results are realistic with respect to the abundance of O VI lines in the CGM and IGM. This is a substantial validation of the simulations, when considered in conjunction with the success of the simulations with respect to the damped Ly\u03b1 systems (Cen 2012). The damped Ly\u03b1 systems primarily originate in gas within the virial radii of galaxies, whereas the O VI absorbers examined here extend well into the IGM, some reaching as far as the mean density of the universe (see Figure 7 below). In combination, they require the simulations to have substantially correctly modeled the propagation of initial metal-enriched blastwaves from sub-kpc scales to hundreds of kiloparsecs as well as other complex thermodynamics, at least in a statistical sense. 
Since O VII absorbers arise in regions in-between, this gives us con\ufb01dence that O VII lines are also modeled correctly and the comparisons that we will make between O VI and O VII lines are meaningful. \f\u2013 15 \u2013 4 5 6 7 0 10 20 30 40 50 60 70 80 90 100 150 log T (K) b (km/s) EW(1032)>100mA EW(1032)=30\u2212100mA thermal broadening only obs: Tripp et al 2008 4 5 6 7 0 10 20 30 40 50 60 70 80 90 100 150 200 log T (K) b (km/s) EW(OVII)>2mA EW(OVII)=0.5\u22122mA thermal broadening only Fig. 6.\u2014 shows absorbers in the b\u2212T plane for O VI line (top panel) and O VII line (bottom panel). Within each panel, we have broken up the absorbers into strong ones (blue squares) and weak ones (red circles). Only thermally broadened lines should follow the indicated solid green curve (Eq. 1). Also shown as right-pointing triangles are observed data of Tripp et al. (2008) based on a joint analysis of Ly\u03b1 and O VI lines; the location of each triangle is the best estimate of the temperature and the rightmost tip of the attached line to each triangle represents a 3\u03c3 upper limit. \f\u2013 16 \u2013 0 1 2 3 4 4 5 6 7 log b log T (K) log N(O VI)>14 log N(O VI)=13\u221214 0 1 2 3 4 4 5 6 7 log b log T (K) log N(O VII)>15 log N(O VII)=14\u221215 Fig. 7.\u2014 shows absorbers in the temperature-density plane for O VI line (top panel) and O VII line (bottom panel). Within each panel, we have broken up the absorbers into strong ones (blue squares) and weak ones (red circles). \f\u2013 17 \u2013 3.2. Physical Properties of O VI and O VII Lines In this subsection we will present physical properties of both O VI and O VII absorbers and relationships between them. For most of the \ufb01gures below we will show results in pairs, one for O VI and the other for O VII, to facilitate comparisons. Figure 6 shows absorbers in the b \u2212T plane for O VI (top panel) and O VII absorbers (bottom panel). For thermal broadening only absorbers the b \u2212T relation would follow the solid green curve obeying this formula: b(O) = 10.16(T/105K)1/2 km/s. (1) It is abundantly clear from Figure 6 that b is a poor indicator of absorber gas temperature. Bulk velocity structures within each absorbing line are important. For O VI lines of equivalent width greater than 100mA, it appears that bulk velocity structures are dominant over thermal broadening at all temperatures. No line is seen to lie below the green curve, as expected. All of the observationally derived temperature limits shown, based on a joint analysis of line pro\ufb01les of well-matched coincidental Ly\u03b1 and O VI lines by Tripp et al. (2008), are seen to be fully consistent with our simulation results. It is noted that velocity structures in unvirialized regions typically do not have gaussian distributions (in 1-d). Caustic-like velocity structures are frequently seen that are reminiscent of structure collapse along one dimension (e.g., Zeldovich pancake or \ufb01laments); for anecdotal evidence see Figure 2. Thus, we caution that temperatures derived on the grounds of gaussian velocity pro\ufb01le (e.g., Tripp et al. 2008) may be uncertain. A more detailed analysis will be performed elsewhere. The situations with respect to O VII absorbers are similar to O VI absorbers. Figure 7 shows absorbers in the temperature-density plane for O VI (top panel) and O VII absorbers (bottom panel). 
In the top panel we see that strong O VI absorbers with N(OVI)\u22651014cm\u22122 have a large concentration at (\u03b4, T)= (10 \u2212300, \u223c105.5K) that corresponds to collisional ionization dominated O VI population, consistent with Figure 5. For weaker absorbers with N(OVI)= 1013\u221214cm\u22122 we see that those with temperature above and those below 105K are roughly equal, consistent with Figure 5; the density distributions for the two subsets are rather di\ufb00erent: for the lower-temperature (T< 105K) subset the gas density is concentrated around \u03b4 \u223c10 that is photoionization dominated, whereas for the higher-temperature (T> 105K) subset the gas density is substantially spread out over \u03b4 \u223c 3\u22123000, which are mostly collisional ionization dominated. Finally, we note that still weaker lines with N(OVI)< 1013cm\u22122, not shown here, are mostly photoionization dominated, as indicated in Figure 5. In the bottom panel we see that strong O VII absorbers with N(OVI)\u2265 1015cm\u22122 are predominantly collisionally ionized at T \u223c105.5 \u2212106.5K and \u03b4 \u223c10 \u22121000, with a small fraction of lines concentrated at an overdensity of \u223c3 \u221220 and temperatures below 105.5K. For weaker absorbers with N(OVII)= 1014\u221215cm\u22122 collisionally ionized ones at temperatures greater than 105.5K and those photoionized at lower temperatures are roughly \f\u2013 18 \u2013 4 5 6 7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 log T (K) CDF(>T) log N(OVI)=12.5\u221213 log N(OVI)=13\u221214 log N(OVI)>14 OVI fraction with collisional ionization 4 5 6 7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 log T (K) CDF(>T) log N(OVII)=13\u221214 log N(OVII)=14\u221215 log N(OVII)>15 OVII fraction w/ coll ionization Fig. 8.\u2014 shows the cumulative probability distribution functions as a function of absorber temperature for three subsets of O VI line with column densities of log N(OVI)cm2 = (12.5\u2212 13, 13 \u221214, > 14) (top panel) and O VII line of log N(OVI)cm2 = (13 \u221214, 14 \u221215, > 15) (bottom panel). The blue dot-dashed curve in the top (bottom) panel shows the O VI (O VII) fraction as a function of gas temperature in the absence of photoionization. \f\u2013 19 \u2013 comparable in numbers, consistent with Figure 14 below. Our results for O VI lines is broadly consistent with (Shull et al. 2011). Shull et al. (2011) \ufb01nd a bimodal distribution of O VI absorbers, one concentrating at (\u03b4, T)=(\u223c10, 104.5K) and the other at (\u03b4, T)=(\u223c100, 105.5K) (see their Figures 4,5). Figure 8 shows the cumulative probability distribution functions as a function of absorber temperature for three subsets of O VI lines with column densities of log N(OVI)cm2 = (12.5\u221213, 13\u221214, > 14) (top panel) and O VII line of log N(OVI)cm2 = (13\u221214, 14\u221215, > 15) (bottom panel). We \ufb01nd that (39%, 43%, 61%) of O VI absorbers in the column density ranges of log N(OVI)cm2 = (12.5 \u221213, 13 \u221214, > 14) have temperature greater than 105K, (25%, 39%, 73%) of O VII absorbers in the column density ranges of log N(OVI)cm2 = (13 \u221214, 14 \u221215, > 15) have temperature greater than 105K. Our \ufb01ndings are in broad agreement with previous results obtained by our group (e.g., Cen et al. 2001; Cen & Ostriker 2006; Cen & Chisari 2011) and some other groups (e.g., Tepper-Garc\u00b4 \u0131a et al. 2011; Shull et al. 2011), but in substantial disaccord with results of Oppenheimer & Dav\u00b4 e (2009) and Oppenheimer et al. 
(2012) who \ufb01nd that photo-ionized O VI lines with temperature lower than 105K make up the vast majority of O VI lines across the column density range log N(OVI)cm2 = 12.5 \u221215. Given the di\ufb00erences in simulation codes and in treatment of feedback processes, we cannot completely ascertain the exact cause for the di\ufb00erent results. Nevertheless, the explanation given by Oppenheimer et al. (2012) that the lack of metal mixing in their SPH simulations plays an important role in contributing to the di\ufb00erence is further elaborated here. Oppenheimer et al. (2012) \ufb01nd a large fraction of metal-carrying feedback SPH particles wound up in low density regions that have relatively high metallicity (\u223c1 Z\u2299) and low temperature (T \u223c104K). As a result, they \ufb01nd low-density, high-metallicity and lowtemperature photo-ionized O VI absorbers to dominate the overall O VI absorber population in their SPH simulations. According to Tepper-Garc\u00b4 \u0131a et al. (2011), they repeat simulations with the same feedback model used in Oppenheimer & Dav\u00b4 e (2009) and Oppenheimer et al. (2012) but with metal heating included, and are unable to reproduce the dominance of lowtemperature photoionized O VI absorbers seen in the latter. This leads them to conclude that lack of metal heating, in the presence of high-metallicity feedback SPH particles, is the cause of the dominance of low-density, high-metallicity and low-temperature photo-ionized O VI absorbers found in Oppenheimer & Dav\u00b4 e (2009) and Oppenheimer et al. (2012). We suggest that this overcooling problem may have been exacerbated by lack of metal mixing. Consistent with this conjecture, while Tepper-Garc\u00b4 \u0131a et al. (2011) su\ufb00er less severely from the metal overcooling problem (because of metal heating), the median metallicity of their O VI absorbers is still \u223c0.6 Z\u2299, substantially higher than that of our O VI absorbers, Z \u223c0.03 \u22120.3 Z\u2299, even though their overall abundance of O VI absorbers is lower than observed by a factor of \u223c2. This noticeable di\ufb00erence in metallicity may be rooted in lack of metal mixing in theirs. \f\u2013 20 \u2013 As we will show later (see Figure 13 below), the metallicity of simulated O VI in our simulations appears to better match observations. Despite that, it is desirable to directly probe the physical nature of O VI absorbers to test models by di\ufb00erent simulation groups. One major di\ufb00erence between SPH simulations (e.g., Tepper-Garc\u00b4 \u0131a et al. 2011; Oppenheimer et al. 2012) and AMR simulations (e.g., Smith et al. 2011, and this work) is that the former predict metallicity distributions that are peaked at (0.6\u22121) Z\u2299compared to peaks of \u223c(0.05\u22120.2) Z\u2299 in the latter. In addition, in the latter positive correlations between metallicity and O VI column density and between metallicity and temperature are expected, whereas in the former the opposite or little correlation seems to be true. Therefore, direct measurements of O VI metallicity and correlations between metallicity and other physical quantities would provide a good discriminator. Putting di\ufb00erences between SPH simulations of WHIM by di\ufb00erent groups aside, what is in common among them is the dominance of low-density (\u03b4 \u2264100) O VI absorbers at all column densities. 
In the AMR simulations it is found that the collisionally ionized O VI absorbers, with density broadly peaked at \u03b4 \u223c100, dominate (by 2 to 1) over photoionized O VI absorbers for N(OV) \u22651014cm\u22122 population. Given these signi\ufb01cant di\ufb00erences between SPH and AMR simulations, we suggest a new test, namely, the cross-correlation function between galaxies and strong [N(O VI) \u22651014cm\u22122] O VI absorbers. Available observations appear to point to strong correlations at relatively small scales \u2264300 \u2212700kpc between luminous galaxies (\u22650.1L\u2217) and N(OV) \u22651013.2\u221213.8cm\u22122 O VI absorbers at z = 0 \u22120.5 (e.g., Stocke et al. 2006; Chen & Mulchaey 2009; Prochaska et al. 2011). We expect AMR simulated O VI absorbers to show stronger small-scale crosscorrelations with galaxies than SPH simulated O VI absorbers thanks to the predicted dominance of O VI absorbers in low density (but higher metallicity) regions in the latter that are at larger distances from galaxies. When an adequate observational sample of Ne VIII absorbers becomes available, galaxy-Ne VIII may provide a still more sensitive test between photoionization models suggested by some SPH simulations and AMR simulations, because the N VIII line needs a still higher temperature to collisionally ionize and hence the contrast is still higher between the simulations. However, the shorter wavelengths of the Ne VIII lines at 770\u02da A and 780\u02da A require galaxies at z > 0.47 to shift into the HST (COS or STIS) observable band, for which current galaxy surveys will only be able to probe most luminous galaxies (L > L\u2217). Detailed calculations and comparisons to observations will be needed to ascertain these expectations to nail down their physical nature and to constrain feedback models. The rapid rise in the cummulative fraction in the temperature range log T = 5.5\u00b10.1 in the top panel of Figure 8 re\ufb02ects the concentration of collisionally ionized O VI lines in that temperature range, supported the blue dot-dashed curve showing the collisional ionization fraction of O VI as a function of temperature. This feature is most prominent in the high column density O VI population with log N(OVI)cm2 > 14 (green dashed curve), simply stating the fact that collisional ionization is dominant in high column density O VI lines. \f\u2013 21 \u2013 For O VII there is a similar feature except that it is substantially broader at log T = 6.0\u00b10.4, which is consistent with the ionization fraction of O VII as a function of temperature in the collisional ionization dominated regime, shown as the blue dot-dashed curve in the bottom panel of Figure 8. For O VI lines with column density in the range log N(OVI)cm2 = 12.5\u221214 we see a relative dearth of absorbers in the temperature range log T = 4.8 \u22125.4, a regime where neither collisional ionization nor photoionization is e\ufb00ective due to structured multiphase medium (i.e., positive correlation between density and temperature in this regime, see Figure 17 below); at still lower temperature log T < 4.8 (and low density due to the positive correlation), the curve displays a rapid ascent due to its entry into the photoionization dominated regime. Analogous behaviors and explanations can be said for O VII absorbers. We see earlier in Figure 6 that b is not a good indicator of the temperature of absorbing gas. It is thus useful to quantify the fraction of absorbers at a given b whose temperature is in the WHIM regime. 
Figure 9 shows the fraction of O VI (top panel) and O VII (bottom panel) absorbers that is in the WHIM temperature range of 10^5 - 10^7 K as a function of b. Broadly speaking, above the threshold (the thermally broadened Doppler width of 10.16 km/s for a gas at a temperature of 10^5 K), the WHIM fraction is dominant at >= 50% for both O VI and O VII lines, but only approaches 100% when b is well in excess of 100 km/s. This again points to an origin of the O VI absorbing gas in which random motions are far from completely thermalized, consistent with its (mostly) intergalactic nature. The approximate fitting curve (blue curve) for the column density weighted histogram for O VI shown in Figure 9 can be formulated as the following equation: f(O VI) = 0.20 for b < 10 km/s, = 0.0026 (b - 10) + 0.6 for b = 10 - 160 km/s, = 1 for b > 160 km/s. (2) As already indicated in Figure 4, a substantial fraction of broad but shallow absorbers may be missing in current observational data; here we quantify it further. Figure 10 shows four cumulative probability distribution functions as a function of b for four subsets of O VI (top panel) and O VII (bottom panel) lines of differing column densities. To give some quantitative numbers, (15%, 20%, 26%, 39%) of O VI absorbers with log N(O VI)/cm^-2 = (12.5-13, 13-13.5, 13.5-14, >14) have b > 40 km/s; the fractions drop to (1%, 2%, 4%, 7%) for b > 80 km/s. Similarly, (17%, 46%, 77%, 88%) of O VII absorbers with log N(O VII)/cm^-2 = (13-14, 14-15, 15-16, >16) have b > 40 km/s, with (2%, 9%, 29%, 30%) having b > 80 km/s. With COS observations of substantially higher sensitivities, broad O VI lines are beginning to be detected (Savage et al. 2010). A direct, statistical comparison between simulation Fig. 9: shows the fraction of O VI (top panel) and O VII (bottom panel) absorbers that is in the WHIM temperature range of 10^5 - 10^7 K as a function of b. The red and green histograms are number and column density weighted, respectively, including only lines with column density above 10^13 cm^-2 in the case of O VI and 10^14 cm^-2 for O VII. The vertical black line indicates b for a purely thermally broadened line at a temperature of 10^5 K. The approximate fitting curve indicated by the blue dashed line is given in Equation (2). Fig. 10: Top panel shows four cumulative probability distribution functions as a function of b for four subsets of O VI lines in the four column density ranges: log N(O VI)/cm^-2 = 12.5-13 (black dotted curve), 13-13.5 (red solid curve), 13.5-14 (green dashed curve) and >14 (blue dot-dashed curve).
Bottom panel shows four cumulative probability distribution functions as a function of b for four subsets of O VII lines in the four column density ranges: log N(OVI)cm2 = 13 \u221214 (black dotted curve), log N(OVI)cm2 = 14 \u221215 (red solid curve), log N(OVI)cm2 = 15 \u221216 (green dashed curve) and log N(OVI)cm2 > 16 (blue dot-dashed curve). \f\u2013 24 \u2013 results found here and observations will be possible in the near future. It will be extremely interesting to see if there is indeed a large population of broad but shallow O VI lines still missing. That is also important, because, if that is veri\ufb01ed, one will have more con\ufb01dence on the results for O VII lines, which suggest that the O VII lines may be substantially broader than a typical thermally broadened width of 40 \u221250 km/s. Additional useful information is properties of other lines, including Ly\u03b1, which we will present in a subsequent paper. Since the expected b of O VII lines is still substantially smaller than spectral resolution of Chandra and XMM-Newton X-ray instruments, it does not make much di\ufb00erence for extant observations. However, it should be taken into consideration in designing future X-ray telescopes to probe WHIM in absorption or emission (e.g., Yao et al. 2012). Figure 11 shows absorbers in the metallicity-overdensity plane. The apparent anticorrelation between metallicity and overdensity with a log slope of approximately \u22121 for the mostly collisional ionization dominated population (red circles) is simply due to the fact that the absorbers near the chosen column density cuto\ufb00dominate the numbers and that collisional ionization rate is density independent. Similarly, the apparent anti-correlation between metallicity and overdensity with a log slope of approximately \u22122 for the mostly photoionization dominated population (blue squares) is due to the fact that photo-ionization fraction is proportional to density. So one should not be misled to believe that there is an anti-correlation between gas metallicity and overdensity in general; the opposite is in fact true (see Figure 18 below). In Figure 12 we project two subsets of absorbers onto the metallicity-b plane. Because of complex behaviors seen in Figure 11 and the additional role played by complex temperature and velocity distributions, one may not be surprised to see the large dispersions in metallicity at a given b. The metallicity distribution is seen to be, to zero-order within the large dispersions, nearly independent of b. When metallicity of O VI and O VII absorbers can be measured directly in the future, this prediction may be tested. No further detailed information on this shall be given here due to its still more futuristic nature in terms of observability, except noting that the weak trends can be understood and these trends are dependent upon the column density cuts. Figure 13 shows the mean metallicity as a function of column density for O VI (red circles) and O VII absorbers (blue squares). We see that a substantial dispersion of about 0.51 dex is present for all column density bins. The mean metallicity for O VI lines increases by 0.9 dex from [Z/H] \u223c\u22121.3 at N(OVI) = 1013cm\u22122 to [Z/H] \u223c\u22120.4 at N(OVI) = 1015cm\u22122. For O VII lines the mean metallicity increases by 0.4 dex from [Z/H] \u223c\u22121.1 at 1014cm\u22122 to [Z/H] \u223c\u22120.7 at 1016cm\u22122. 
The trend of increasing metallicity with increasing column density is consistent with the overall trend that higher density regions, on average, have higher metallicity, at least in the density range of interest here (see Figure 11 below). It is noted that the mean metallicity for O VII absorbers is, on average, lower than that for O VI lines at a \ufb01xed column density for the respective ions. This and some other relative behaviors between O VI and O VII seen in Figure 13 merely re\ufb02ect the facts (1) that the \f\u2013 25 \u2013 product of oscillator strength and restframe wavelength of O VII line is about a factor of 10 lower than that of O VI, (2) the peak collisional ionization fraction for O VII is about a factor of 5 higher than that of O VI, and (3) the peak width for collisional ionization temperature for O VII is larger by a factor of \u223c3 than that of O VI (see Figure 8). Examination of the C+P (collisional + photoionization) model with distributed feedback (which is closest to our feedback model) in Figure 17 of Smith et al. (2011) reveals that the average metallicity increases from \u223c\u22121.0 to \u223c0.0 for NOVI from 1012cm\u22122 to 1015cm\u22122, which should be compared to an increase of metallicity from \u223c\u22121.5 to \u223c\u22120.5. Thus, our results are in good agreement with Smith et al. (2011) except that their metallicity is uniformly higher by a factor of \u223c3. While there are substantial disagreements among the SPH and AMR simulations with respect to the metallicity of O VI absorbers, it is fair to say that a median value of 0.1\u22121 Z\u2299 encompasses them. Given that, some quantitative physical considerations are useful here. The cooling time for gas of \u03b4 = 100, T = 105.5K and Z = 0.1 Z\u2299at z = 0 is \u223c0.05tH (tH is the Hubble time at z = 0) (this already takes into account metal heating by the X-ray background; it should be noted that the X-ray background at z \u223c0 is still quite uncertain (e.g., Shull et al. 2011)). This indicates that the O VI-bearing gas of T \u223c105.5K and \u03b4 \u2265100, in the absence of other balancing heating processes, can only spend a small fraction of a Hubble time at the temperature for optimal O VI production via collisional ionization. This has two implications. First, O VI absorbers at \u03b4 \u2265100 \u00d7 (Z/0.1 Z\u2299)\u22121 is transient in nature and their appearance requires either constant heating of colder gas or higher temperature gas cooling through. Which process is more responsible for O VI production will be investigated in a future study. Second, the metal cooling that is linearly proportional to gas metallicity may give rise to an interesting \u201cselection e\ufb00ect\u201d, where high metallicity O VI gas in dense regions, having shorter cooling time than lower metallicity O VI gas of the same density, would preferentially remove itself from being O VI productive by cooling, leaving behind only lower metallicity gas at O VI-bearing temperatures. We suggest that this selection e\ufb00ect may have contributed to a much reduced proportion of collisionally ionized O VI lines in SPH simulations that lack adequate metal mixing; in other words, dense metal \u201cbullets\u201d of SPH particles either cools very quickly to \u223c104K or they have reached regions of su\ufb03ciently low density before that happens. The results of Oppenheimer et al. 
(2012) appear to suggest, in the context of this scenario, that the feedback metal-bearing SPH particles have cooled to ~10^4 K before they can reach low density regions to avoid severe cooling, thus resulting in high-metallicity, low-density, photoionized O VI lines when they eventually wind up in low density regions. An analogous situation occurs in the Tepper-Garcia et al. (2011) SPH simulations, but with two significant differences from those of Oppenheimer et al. (2012): (1) in the former, the inclusion of metal heating (due to photoionization of metal species) keeps the corresponding SPH particles at a higher temperature floor (~10^4.5-5 K, barring adiabatic cooling) than in the latter, and (2) the "smoothed" metallicity used in the former to compute metal cooling/heating rates has reduced the metal cooling effects (which still dominate over metal heating at T >= 10^5 K) compared to the case without such smoothing in the latter. Fig. 11: shows absorbers in the metallicity-overdensity plane for O VI lines with N(O VI) = 10^13-14 cm^-2 (top left panel) and N(O VI) > 10^14 cm^-2 (top right panel). The bottom two panels show O VII lines with N(O VII) = 10^14-15 cm^-2 (bottom left panel) and N(O VII) > 10^15 cm^-2 (bottom right panel). Within each panel, we have broken up the absorbers into two subsets using temperature: T > 10^5 K (red circles) and T < 10^5 K (blue squares). 3.3. Coincidence Between O VI and O VII Lines In Section 3.1 we show that some of the primary observable properties of simulated O VI lines, including the line incidence rate, are in excellent agreement with observations. In Section 3.2 we have shown various physical properties underlying the observables of both lines. Before presenting quantitative coincidence rates between O VI and O VII lines, it is useful to further check Fig. 12: shows the mean absorber metallicity as a function of b for O VI lines with column density above 10^13 cm^-2 (top panel) and O VII lines with column density above 10^14 cm^-2 (bottom panel). Within each panel, we have broken up the absorbers into two subsets using temperature: T > 10^5 K (blue squares) and T < 10^5 K (red circles). Fig. 13: shows the mean metallicity as a function of column density for O VI (red open circles) and O VII (blue open squares) lines. Also shown as solid symbols are observational data.
It is likely that the observational errorbars are underestimated. \f\u2013 28 \u2013 10 13 10 14 10 15 10 16 10 \u22121 10 0 10 1 10 2 N(OVII) (cm\u22122) dn/dz [>N(OVII)] O VII: total O VII: T>10 5K O VII: T<10 5K Nicastro et al (2005) Fig. 14.\u2014 shows the cumulative O VII line density as a function of column density, de\ufb01ned to be the number of lines per unit redshift at the column density greater than the value at the x-axis. The red solid dots, green squares and blue triangles are the total, collisionally ionized and photo-ionized lines, respectively. Also shown as a black open circle is the observation of Nicastro et al. (2005a) with 1\u03c3 errorbar. Note that the quantify shown in the y-axis of Figure 5 is di\ufb00erential, not cumulative density. the O VII line incidence rate to assess the self-consistency of our simulations with extant observations. Figure 14 shows the cumulative O VII line density as a function of column density. We also show the implied observed line density, under the assumption that the detection reported by Nicastro et al. (2005a) is true. We see that the claimed observational detection is about 2\u03c3 above or a factor of \u223c7 higher than our predicted central value at the column density \u22657 \u00d7 1014cm\u22122. Our model is clearly in a more comfortable situation, if the claimed observational detection turns out to be negative. As discussed in the introduction the detection reported by Nicastro et al. (2005a) is presently controversial. This highlights the urgent need of higher sensitivity X-ray observations of this or other viable targets that could potentially place strong constraints on the model. We now turn to the coincidences between O VI and O VII lines. The top panel of \f\u2013 29 \u2013 Figure 15 shows the cumulative probability distribution functions as a function of velocity displacement of having a coincidental O VII line above the indicated equivalent width for an O VI line of a given equivalent width. We see that O VI lines of equivalent width in the range 50 \u2212200mA have (38%, 31%, 10%) probability of \ufb01nding an O VII line with equivalent width greater than (0.1, 0.5, 2)mA within a velocity displacement of 150 km/s. The vast majority of coincidental O VII lines for O VI lines for those equivalent widths in question are concentrated within a velocity displacement of \u226450 km/s and more than 50% at \u226425 km/s. The bottom panel of Figure 15 shows the cumulative probability distribution functions as a function of velocity displacement of having a coincidental O VI line above the indicated equivalent width for an O VII line of a given equivalent width. It is seen that for O VII lines of equivalent width in the range 2\u22124mA have a 17\u221227% probability of \ufb01nding an O VI line with equivalent width in the range 5 \u2212100mA within a velocity displacement of 150 km/s. Likewise, the vast majority of coincidental O VI lines for O VII lines for those equivalent widths in question are concentrated within a velocity displacement of \u226450 km/s and more than 80% at \u226425 km/s. The results shown in Figure 15 presently can not be compared to observations, because there is no de\ufb01nitive detection of O VII absorbers, although there are many detected O VI absorbers. Thus, we use the stacking method of Yao et al. (2009) to enable a direct comparison with available observations. 
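The coincidence statistics quoted above come down to a simple cross-matching step: for each line of one ion, ask whether a line of the other ion above a given equivalent-width threshold lies within a velocity displacement Delta v along the same sight line. A minimal sketch of that matching is given below; the list-of-dictionaries catalog layout and the function name are illustrative assumptions on our part, not the analysis code used for the figures.

def coincidence_fraction(primary, partner, ew_min, dv_max_kms):
    """Fraction of `primary` lines with at least one `partner` line of
    equivalent width >= ew_min within |delta v| <= dv_max_kms.
    Each catalog is a list of dicts with 'v_kms' and 'ew_mA' entries."""
    if not primary:
        return 0.0
    n_hit = 0
    for line in primary:
        if any(abs(p["v_kms"] - line["v_kms"]) <= dv_max_kms and p["ew_mA"] >= ew_min
               for p in partner):
            n_hit += 1
    return n_hit / len(primary)

# Toy example (made-up numbers): probability of finding an O VII line with
# EW >= 0.5 mA within 150 km/s of each O VI line on a sight line.
ovi  = [{"v_kms": 120.0, "ew_mA": 80.0}, {"v_kms": 900.0, "ew_mA": 150.0}]
ovii = [{"v_kms": 140.0, "ew_mA": 1.2}]
print(coincidence_fraction(ovi, ovii, ew_min=0.5, dv_max_kms=150.0))  # -> 0.5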
The top panel of Figure 16 shows the expected mean O VII column density at the location of detected O VI lines of column density indicated by the x-axis, compared to the 3\u03c3 upper limits from observations of Yao et al. (2009) shown as black triangles. We see that the non-detection of O VII lines, or more precisely, a 3\u03c3 upper limit on the mean column of O VII lines for detected O VI lines of column density in the range log N(OVI)cm2 = 13.6 \u221214.1, is fully consistent with our simulations. The reported 3\u03c3 upper limit is above the expected value by a factor of 2.5 \u22124. This suggests that a factor of \u223c10 increase in sample size or sensitivity will be able to yield a de\ufb01nitive detection of O VII column density using the stacking technique even without detection of individual O VII absorbers. The bottom panel of Figure 16 shows the expected mean O VI column density at the location of detected O VII lines of column density indicated by the x-axis. It is evident from Figures 15,16 that O VI and O VII lines are coincidental only in a limited sense. We attribute the limited coincidence of O VII lines for O VI lines primarily to two situations for O VI producing regions. A line of sight that intersects an O VI producing region does not necessarily intersect a strong O VII producing region along the same line of sight, either because the temperature of the overall region does not reach a high enough value to be strong O VII bearing, or because the intersected O VI region is laterally an outskirt of an onion-like structure where the more central, higher temperature, O VII region makes up a smaller cross section. The former case should be ubiquitous, because weaker gravitational \f\u2013 30 \u2013 shocks that produce regions of temperature, say, 105.5K are more volume \ufb01lling than stronger gravitational shocks giving rise to regions of temperature, say, 106.0K. In addition, feedback shocks from star formation tend to be weaker than required to collisionally produce O VII at the spatial scales of interest here. In other words, one expects to see many O VI-bearing regions that have no associated O VII-bearing sub-regions. The latter case where hotter but smaller regions are surrounded by cooler regions is expected to arise naturally around large virialized systems such as groups and clusters of galaxies. A more quantitative but still intuitive physical check of the obtained results is not straight-forward, without performing a much more detailed study of individual physical regions that produce O VI and O VII absorbers. We shall reserve such a study for the future. The situation of coincidental O VI lines for given O VII lines might appear to be less ambiguous at \ufb01rst sight in the sense that the hotter central, O VII-producing regions should be surrounded by cooler regions and thus one might expect that the line of sight that intersects a strongly O VII-producing region should automatically intersect cooler regions that would show up as O VI absorbers. While it is true that a hot region is in general surrounded by cooler regions, it is not necessarily true that a hot 106K, O VII-bearing gas is surrounded by signi\ufb01cant 105.5K, O VI-bearing gas. For example, one may have a post-shock region of temperature 106K that is surrounded only by pre-shocked gas that is much colder than 105.5K. We note that for a gas of \u03b4 = 100, T = 106K and Z = 0.3 Z\u2299at z = 0, its cooling time is tcool \u223c0.5tH (tH is the Hubble time at z = 0). 
This means that O VII-bearing WHIM gas at delta <= 100, which has been heated up by shocks to T >= 10^6 K, is unlikely to cool to T ~ 10^5.5 K to become O VI rich gas. On the other hand, the cooling time for gas of delta = 100, T = 10^5.5 K and Z = 0.1 Z_sun at z = 0 is ~0.05 t_H, as noted earlier. Thus, it is physically possible that sharp interfaces between hot (T >= 10^6 K) and cold (T <= 10^5 K) gas develop. The simulations do not include thermal conduction, which can be shown to be unimportant here. The electron mean free path (mfp) is 0.44 (T/10^6 K)^2 (delta/100)^-1 kpc, adopting the standard Spitzer value. The likely presence of magnetic fields (not treated here) would further reduce the mfp by an order of magnitude (e.g., Cowie & McKee 1977). Thus, thermal conduction is insignificant and multi-phase media are expected to exist. Fig. 15: Top panel shows the cumulative probability distribution functions as a function of velocity displacement of having a coincidental O VII line above the indicated equivalent width for an O VI line of a given equivalent width. Bottom panel shows the cumulative probability distribution functions as a function of velocity displacement of having a coincidental O VI line above the indicated equivalent width for an O VII line of a given equivalent width. Fig. 16: The top panel shows the expected mean O VII column density at the location of detected O VI lines of column density indicated by the x-axis. Also shown as black triangles are 3 sigma upper limits from observations of Yao et al. (2009). The bottom panel shows the expected mean O VI column density at the location of detected O VII lines of column density indicated by the x-axis. The solid and dashed straight lines are possible limit cases based on simple physical considerations. Equipped with this information, and adopting the simpler implied geometry, we can perform a simple physical check of the results in the bottom panel of Figure 16, as follows. We will take two separate approaches to estimate this. The first approach assumes a polytropic gas in the temperature range relevant for O VI and O VII collisional ionization. In the top panel of Figure 17 we show the entire temperature-overdensity phase diagram for the refined region in the C run. Note that the gas density reaches about 1 billion times the mean gas density, corresponding to ~100 cm^-3, i.e., star formation regions. For the regions of present relevance, the density range is about 10 - 300 times the mean density, illustrated by the upper part of the red tornado-like region near the middle of the plot.
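The thermal-conduction argument made earlier in this section rests on the quoted Spitzer mean free path scaling, mfp = 0.44 (T/10^6 K)^2 (delta/100)^-1 kpc. A short sketch evaluating that scaling for WHIM-like conditions is given below; the numerical prefactor is taken directly from the text, and everything else is just arithmetic.

def spitzer_mfp_kpc(T_K, overdensity):
    """Electron mean free path using the scaling quoted in the text:
    0.44 kpc x (T / 1e6 K)^2 x (overdensity / 100)^-1."""
    return 0.44 * (T_K / 1.0e6) ** 2 / (overdensity / 100.0)

for T_K, delta in [(1.0e6, 100.0), (10 ** 5.5, 100.0), (1.0e6, 10.0)]:
    # ~0.44 kpc at (1e6 K, delta=100), ~0.044 kpc for 10^5.5 K gas at the same density,
    # and an order of magnitude larger at delta=10
    print(f"T = {T_K:.2e} K, delta = {delta:5.1f} -> mfp = {spitzer_mfp_kpc(T_K, delta):.3f} kpc")

These sub-kpc values, further suppressed if magnetic fields are present, are what underlie the statement above that conduction cannot erase the multi-phase structure of interest here.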
Returning to Figure 17, it is useful to note that for this density range the gas mass is dominated by gas in the temperature range of 10^5 - 10^7 K, i.e., WHIM. For this reason, it is a valid exercise to compute the mean pressure as a function of overdensity, at least for the density range relevant for WHIM, shown in the bottom panel of Figure 17. We see that for the WHIM overdensity range of 10 - 300, the adiabatic index 5/3, shown as the dashed line, provides an excellent approximation for the polytropic index of the gas. Fig. 17: Top panel shows the mass weighted phase diagram in the temperature-overdensity plane for the refined region in the C run. The bottom panel shows the mean pressure as a function of overdensity averaged over all cells in the refined region in the C run. The black dashed line indicates the slope for a polytropic gas of index 5/3 (i.e., the adiabatic index), which provides a good approximation to the simulation in the overdensity range 10 - 300 that is most pertinent to the absorbing WHIM in O VI and O VII. It is also necessary to have a relation between gas metallicity and gas overdensity, shown in Figure 18. We only note that the metallicity is generally an increasing function of density above one tenth of the mean density, that the sharp rise of metallicity below one tenth of the mean density is due to metal-enriched galactic winds escaping into the low density regions, and that for our present purpose concerning the WHIM overdensity range of 10 - 300 the metallicity roughly goes as Z ∝ delta^0.4, as indicated by the dashed line. Given the information in Figures 17 and 18, we can now proceed to estimate the expected O VI column at a given O VII column density, i.e., <N(O VI)>/<N(O VII)>, assuming both are dominated by collisional ionization. The O VI column density may be roughly approximated as <N(O VI)> ∝ f(O VI) Delta log T(O VI) rho(O VI) Z(O VI) L(O VI), where f(O VI) = 0.22 is the peak collisional ionization fraction for O VI (at log T(O VI)/K = 5.5), Delta log T(O VI) = 0.2 is the FWHM of the logarithmic temperature of the collisional ionization peak (see the blue curve in the top panel of Figure 8), and rho(O VI), Z(O VI) and L(O VI) are the density, the metallicity and the physical thickness of the O VI absorbing gas, respectively. We have an exactly analogous relation for O VII, with f(O VII) = 1, log T(O VII)/K = 6, Delta log T(O VII) = 0.7. With the additional assumption that the characteristic thickness at a given density goes as L(O VI) ∝ rho(O VI)^(-1/3) (i.e., the mass distribution across log density is roughly uniform), we can now evaluate the column density ratio <N(O VI)>/<N(O VII)> = [f(O VI)/f(O VII)] x [Delta log T(O VI)/Delta log T(O VII)] x [rho(O VI)/rho(O VII)] x [Z(O VI)/Z(O VII)] x [L(O VI)/L(O VII)] = [f(O VI)/f(O VII)] x [Delta log T(O VI)/Delta log T(O VII)] x [T(O VI)/T(O VII)]^[(2/3 + alpha)/(gamma - 1)] = 0.015, (3) where alpha = 0.4 and gamma = 5/3 are used, as indicated in Figures 18 and 17, respectively. This resulting ratio is shown as the solid line in the bottom panel of Figure 16, which we expect to be an approximate upper limit of the true ratio, since it implies the presence of O VI-bearing gas for every O VII line.
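For transparency, the ratio in Equation (3) can be evaluated directly from the quantities quoted above (f, Delta log T and the peak temperature for each ion, plus alpha = 0.4 and gamma = 5/3). The sketch below does that; with these rounded, one-to-two significant figure inputs it returns approximately 0.01, the same order as the quoted 0.015, with the residual difference at the level expected from the rounding of the inputs.

# Evaluate the <N(O VI)>/<N(O VII)> ratio of Equation (3) from the inputs quoted in the text.
f_ovi, f_ovii = 0.22, 1.0            # peak collisional ionization fractions
dlogT_ovi, dlogT_ovii = 0.2, 0.7     # FWHM of the ionization peaks (in log T)
T_ovi, T_ovii = 10 ** 5.5, 10 ** 6.0 # peak temperatures [K]
alpha, gamma = 0.4, 5.0 / 3.0        # Z ~ delta^alpha; polytropic index

ratio = (f_ovi / f_ovii) * (dlogT_ovi / dlogT_ovii) \
        * (T_ovi / T_ovii) ** ((2.0 / 3.0 + alpha) / (gamma - 1.0))
print(f"<N(OVI)>/<N(OVII)> ~ {ratio:.3f}")   # ~0.01, same order as the quoted 0.015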
10 \u22122 10 0 10 2 10 4 10 6 10 8 \u22123 \u22122 \u22121 0 1 2 b [Z/H] <[Z/H]> log(dispersion in linear Z) [Z/H]=0.4log(b) Fig. 18.\u2014 shows the mean gas metallicity (red solid curve) as a function of overdensity averaged over all cells in the re\ufb01ned region of the C run. Also shown as the green dotted curve is the logarithm of the dispersion in Z (linear metallicity). The black dashed line indicates the logarithmic slope of 0.4, which provides a good approximation to the simulation in the overdensity range 10 \u22121000 relevant to absorbing WHIM in O VI and O VII. Our second approach likely gives an approximate lower bound on < N(OVI) > / < N(OVII) >. We assume that the O VII-bearing gas is at the peak temperature of 106K and is surrounded by gas that has a temperature that is much lower than 105.5K (neglecting photoionization for the moment), in which case the coincidental O VI line is produced by the same temperature gas that produces the O VII line, giving < N(OVI) > < N(OVII) > = f(OVI)(T = 106K) f(OVII)(T = 106K) = 0.0035, (4) which is shown as the dashed line in the bottom panel of Figure 16. Admittedly, our approaches to estimate the column density ratios are quite simplistic. Nevertheless, we think \f\u2013 36 \u2013 they capture some of the essential underlying relationships between O VI-bearing gas and O VII-bearing gas in the collisional ionization dominated regime and it is reassuring that they are consistent with detailed calculations. Note that at N(OVII) < 1015cm\u22122 photoionization becomes important, especially for related O VI lines, hence our simple physical illustration breaks down in that regime. 4." + }, + { + "url": "http://arxiv.org/abs/1111.0707v1", + "title": "Inconsequence of Galaxy Major Mergers in Driving Star Formation at z>1: Insights from Cosmological Simulations", + "abstract": "Utilizing a high-resolution (114 pc/h) adaptive mesh-refinement cosmological\ngalaxy formation simulation of the standard cold dark matter model with a large\n(2000-3000 galaxies with stellar mass greater than 1e9 Msun) statistical\nsample, we examine the role of major mergers in driving star formation at z>1\nin a cosmological setting, after validating that some of the key properties of\nsimulated galaxies are in reasonable agreement with observations, including\nluminosity functions, SF history, effective sizes and damped Lyman alpha\nsystems. We find that major mergers have a relatively modest effect on star\nformation, in marked contrast to previous idealized merger simulations of disk\ngalaxies that show up to two orders of magnitude increase in star formation\nrate. At z=2.4-3.7, major mergers tend to increase the specific star formation\nrate by 10-25% for galaxies in the entire stellar mass range 10^9-10^12 Msun\nprobed. Their effect appears to increase with decreasing redshift, but is\ncapped at 60% at z=1.4-2.4. Two factors may account for this modest effect.\nFirst, SFR of galaxies not in major mergers are much higher at z>1 than local\ndisk galaxy counterparts. Second, most galaxies at z>1 have small sizes and\ncontain massive dense bulges, which suppress the merger induced structural\neffects and gas inflow enhancement. 
Various other predictions are also made\nthat will provide verifiable tests of the model.", + "authors": "Renyue Cen", + "published": "2011-11-03", + "updated": "2011-11-03", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction Simulations of major gas-rich disk galaxy mergers have provided quantitative insights to gas in\ufb02ows and central starbursts under idealized conditions (e.g., Barnes & Hernquist 1996; Mihos & Hernquist 1996; Hopkins et al. 2006). These simulations have laid the foundation of the theoretical framework for almost all contemporary mainstream interpretations of observed extreme starbursting galaxies, namely, the ultraluminous infrared galaxies (ULIRGs), 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1111.0707v1 [astro-ph.CO] 3 Nov 2011 \f\u2013 2 \u2013 as well as of the formation of supermassive black holes (e.g., Di Matteo et al. 2005). This framework is appealing, because almost all observed ULIRGs in the local universe either are directly seen merging or apparently show signs of mergers (at least some signi\ufb01cant interactions) (e.g., Joseph & Wright 1985; Sanders et al. 1988; Duc et al. 1997; Lutz et al. 1998) and at least some luminous quasars live in galaxies under strong interactions (e.g., Bahcall et al. 1997). What is known but not su\ufb03ciently stressed in the relevant context is that the local universe is very di\ufb00erent from the younger one at z > 1 when star formation was much more intensive. As an example, a typical Lyman Break Galaxy (LBG) several times less massive than our own Galaxy has a star-formation rate (SFR) that is about ten times that of the Galaxy (e.g., Steidel et al. 2003). Moreover, minor mergers and close interactions between galaxies are expected to be much more frequent at high redshift that, cumulatively, may have important e\ufb00ects. Furthermore, there are signi\ufb01cant structural di\ufb00erences between local galaxies and those at high redshift in that high redshift galaxies are more compact in size and the majority of massive quiescent galaxies that have been measured appear to have dense bulges (e.g., Lowenthal et al. 1997; Daddi et al. 2005; Trujillo et al. 2006b,a; Toft et al. 2007; Longhetti et al. 2007; Buitrago et al. 2008; Cimatti et al. 2008; van Dokkum et al. 2009; Cappellari et al. 2009; van de Sande et al. 2011). Therefore, our current physical interpretation of extreme galaxy events that is obtained based on linking local observations with substantially idealized major galaxy merger simulations may not pertain to the high redshift universe in general. In this work we examine theoretically, in a cosmological setting, the role of major mergers in driving star formation in the redshift range z > 1, utilizing a large-scale high-resolution galaxy formation simulation. At each redshift from z = 1.4 to z = 3.7 the simulation contains 2000 \u22123000 galaxies with stellar mass greater than 109 M\u2299resolved at better than 114h\u22121pc. Detailed merger histories of galaxies are tracked and (binary) major mergers, de\ufb01ned to be those of stellar mass ratios greater than 1/3, are examined in comparison to those that do not experience major mergers. 
We \ufb01nd that for galaxies with SFR in the range 1 \u22121000 M\u2299/yr and the stellar mass range Mstar = 109 \u22121012 M\u2299examined, major mergers, on average, yield a modest, fractional boost of 0 \u221260% in speci\ufb01c SFR; we do not \ufb01nd two orders of magnitude increase in SFR found in previous merger simulations of disk galaxies (e.g., Mihos & Hernquist 1996). We show that the properties of simulated galaxies are in reasonable agreement with observations and give a physical explanation of the results. Additional predictions are provided to further test the model. The outline of this paper is as follows. In \u00a72 we detail our simulation (\u00a72.1) and galaxy catalogs (\u00a72.2). Results are presented in \u00a73, followed by \u00a74 that gives a physical explanation of the results. Conclusions are given in \u00a75. \f\u2013 3 \u2013 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the adaptive mesh re\ufb01nement (AMR) Eulerian hydro code, Enzo (Bryan & Norman 1999; Joung et al. 2009). First we ran a low resolution simulation with a periodic box of 120 h\u22121Mpc on a side. We identi\ufb01ed a region centered on a cluster of mass of \u223c2 \u00d7 1014 M\u2299at z = 0. We then resimulate with high resolution of the chosen region embedded in the outer 120h\u22121Mpc box to properly take into account large-scale tidal \ufb01eld and appropriate boundary conditions at the surface of the re\ufb01ned region. This simulation box is the same region as the \u201cC\u201d run in (Cen 2011b). The re\ufb01ned region for \u201cC\u201d run has a size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3. The initial condition in the re\ufb01ned region has a mean interparticle-separation of 58h\u22121kpc comoving, dark matter particle mass of 1.3 \u00d7 107h\u22121 M\u2299. The re\ufb01ned region is surrounded by three layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle mass 84 times that in the re\ufb01ned region. We choose the mesh re\ufb01nement criterion such that the resolution is always better than 114h\u22121pc physical, corresponding to a maximum mesh re\ufb01nement level of 13 at z = 0. The simulation includes a metagalactic UV background (Haardt & Madau 1996), and a model for shielding of UV radiation by neutral hydrogen (Cen et al. 2005). They also include metallicity-dependent radiative cooling (Cen et al. 1995). Our simulation also solves relevant gas chemistry chains for molecular hydrogen formation (Abel et al. 1997), molecular formation on dust grains (Joung et al. 2009) and metal cooling extended down to 10 K (Dalgarno & McCray 1972). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c106 M\u2299. Supernova feedback from star formation is modeled following Cen et al. (2005). Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the speci\ufb01c volume of each cell, which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). 
We allow the entire feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating. The total amount of explosion kinetic energy from Type II supernovae for an amount of star formed M\u2217with a Chabrier IMF is eSNM\u2217c2 (where c is the speed of light) with eSN = 6.6 \u00d7 10\u22126. Taking into account the contribution of prompt Type I supernovae, we use eSN = 1 \u00d7 10\u22125 in our simulation. Observations of local starburst galaxies indicate that nearly all of the star formation produced kinetic energy is used to power galactic superwinds (e.g., Heckman 2001). \f\u2013 4 \u2013 Supernova feedback is important primarily for regulating star formation and for transporting energy and metals into the intergalactic medium. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported in a physically sound (albeit still approximate at the current resolution) way. The kinematic properties traced by unsaturated metal lines in DLAs are extremely tough tests of the model, which is shown to agree well with observations (Cen 2010). As we will show below, the properties of galaxies produced in the simulation resemble well observed galaxies, within the limitations of \ufb01nite resolution. We use the following cosmological parameters that are consistent with the WMAP7normalized (Komatsu et al. 2010) LCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. 2.2. Simulated Galaxy Catalogs We identify galaxies in our high resolution simulation using the HOP algorithm (Eisenstein & Hu 1999), operated on the stellar particles, which is tested to be robust and insensitive to speci\ufb01c choices of concerned parameters within reasonable ranges. Satellites within a galaxy are clearly identi\ufb01ed separately. The luminosity of each stellar particle at each of the Sloan Digital Sky Survey (SDSS) \ufb01ve bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar particle mass. Collecting luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, star formation rate, luminosities in \ufb01ve SDSS bands (and various colors) and others. We create catalogs of galaxies from z = 1.4 to z = 3.7 with an increment of \u2206z = 0.05. We track the merger history of each galaxy in this redshift span. There are two di\ufb00erent ways to de\ufb01ne major mergers. First, a theoretical one where we identify the merger time as that when two galaxies with a stellar mass ratio greater than 1/3 are fully integrated into one with no identi\ufb01able separate stellar peaks. Second, an observational one where a major merger is de\ufb01ned to be that where a galaxy has a neighbor galaxy with a stellar mass greater than 1/3 its mass at a lateral distance smaller than 40kpc proper. Both will be used in subsequent analysis. 
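The observational definition just described (a companion of stellar mass ratio greater than 1/3 at a projected separation below 40 proper kpc) reduces to a simple pair search over the galaxy catalog. The sketch below illustrates that selection only; the array names and the brute-force double loop are illustrative choices on our part, not the catalog code actually used.

import numpy as np

def flag_apparent_major_mergers(mstar, x_kpc, y_kpc,
                                mass_ratio_min=1.0 / 3.0, sep_max_kpc=40.0):
    """Flag galaxies with a companion of stellar mass ratio (smaller over larger)
    greater than mass_ratio_min at projected separation below sep_max_kpc.
    mstar, x_kpc, y_kpc: 1D arrays of stellar mass [Msun] and proper positions [kpc]."""
    n = len(mstar)
    flagged = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            ratio = min(mstar[i], mstar[j]) / max(mstar[i], mstar[j])
            sep = np.hypot(x_kpc[i] - x_kpc[j], y_kpc[i] - y_kpc[j])
            if ratio > mass_ratio_min and sep < sep_max_kpc:
                flagged[i] = flagged[j] = True
    return flagged

# Toy usage: two comparable-mass galaxies 30 kpc apart and one distant low-mass galaxy.
mstar = np.array([1.0e11, 5.0e10, 2.0e9])
x = np.array([0.0, 30.0, 500.0]); y = np.zeros(3)
print(flag_apparent_major_mergers(mstar, x, y))   # -> [ True  True False]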
It is useful to state that the observationally-oriented de\ufb01nition does not always lead to a true merger of the usual sense, because either the two galaxies are a projected pair, or their merging time scale is much longer than the relevant dynamic time or the time before something else will have happened to the two concerned galaxies. Some informative comparisons or distinctions between the two will be made, when useful. We \ufb01nd that there are about 2000-3000 galaxies with stellar mass greater than 109 M\u2299maximally resolved at \f\u2013 5 \u2013 better than 114h\u22121pc at each redshift snapshot in the range z = 1.4 \u22123.7, providing us with unprecedented statistical power. In Cen (2011b) we show that galaxy luminosity functions for both UV and FIR selected galaxies can be self-consistently produced by the simulation. This, in combination with other, independent tests of the simulation, including the properties of the damped Lyman alpha systems (Cen 2010), strongly indicates a range of applicability of our simulation to complex systems, including galaxies at sub-kpc ISM scales. This validation of the simulation results is critical and allows us, with signi\ufb01cant con\ufb01dence, to perform the particular analysis here with respect to e\ufb00ects of major mergers. 3. Results 9 9.5 10 10.5 11 11.5 12 12.5 0 0.5 1 1.5 2 2.5 3 log SFR (Msun/yr) not MM; z=1.40 \u2212 2.40 9 9.5 10 10.5 11 11.5 12 12.5 0 0.5 1 1.5 2 2.5 3 not MM; z=2.40 \u2212 3.70 9 9.5 10 10.5 11 11.5 12 12.5 0 0.5 1 1.5 2 2.5 3 log SFR (Msun/yr) log Mstellar (Msun) MM; z=1.40\u22122.70 9 9.5 10 10.5 11 11.5 12 12.5 0 0.5 1 1.5 2 2.5 3 log Mstellar (Msun) MM; z=2.40\u22123.70 Fig. 1.\u2014 places each galaxy as a plus symbol in the SFR-stellar mass plane for non major merger galaxies in the redshift range z = 1.4\u22122.4 (top left panel) and z = 2.4\u22123.7 (top right panel). The corresponding ones for galaxies with major mergers are shown in the bottom panels. Here we adopt the observationally oriented de\ufb01nition of major mergers, i.e., pairs of stellar mass ratio greater than 1/3 and projected separation less than 40kpc. Only a small percentage of randomly selected galaxies is shown. Figure 1 shows scatter plots between SFR and stellar mass for galaxies that do not have ongoing major mergers (top two panels), compared to those that are ongoing major mergers (bottom two panels). Under visual inspection we see that there is no major discernible di\ufb00erence between galaxies that do and do not experience major mergers in the redshift \f\u2013 6 \u2013 range examined for the entire range of stellar mass or SFR. It is noticeable that the number of galaxies that are major mergers is a minor fraction of all galaxies at any stellar mass or SFR. 9 10 11 12 0 0.1 0.2 0.3 0.4 log Mstar (Msun) fraction with apparent major mergers z=1.4\u22122.4 statistical errors 9 10 11 12 0 0.1 0.2 0.3 0.4 log Mstar (Msun) z=2.4\u22123.7 statistical errors Fig. 2.\u2014 shows the fraction of galaxies that are in major merger as a function of stellar mass (red histograms) at z = 1.4 \u22122.4 (left panel) and z = 2.4 \u22123.7 (right panel). The statistical errors are shown as green histograms. We use the observationally oriented de\ufb01nition of major mergers, i.e., pairs of stellar mass ratio greater than 1/3 and projected separation less than 40kpc. Figure 2 shows the fraction of galaxies that are in major merger as a function of stellar mass with the observational de\ufb01nition. 
We note that the major merger fraction at the low steller mass (< 1011 M\u2299) is substantially overestimated due to the adopted de\ufb01nition, because many satellite galaxies within the virial radius of large galaxies are \u201cmis-identi\ufb01ed\u201d as major mergers in this case. In fact, many of these satellite galaxies do not ever merge with one another directly in a binary fashion, as will be shown below in Figure 3. The fraction of major mergers at the high stellar mass end does not signi\ufb01cantly su\ufb00er from this \u201cprojection\u201d e\ufb00ect. We see that for galaxies with stellar mass in the range 1011 \u22121012 M\u2299 major merger galaxies make up about 10 \u221220% of all galaxies in that mass range. The results on major merger fractions shown in Figure 2 (and Figure 4 below) are based on the observational de\ufb01nition of major mergers. It is useful to distinguish that from the theoretical one, where the latter is based on the actual merger events rather than pairs within some projected distance. Figure 3 shows the theoretical merger rate, de\ufb01ned to be the number of major mergers per unit redshift, as a function of galaxy stellar mass for galaxies \f\u2013 7 \u2013 10 11 12 \u22122 \u22121.5 \u22121 \u22120.5 0 log Mstar (Msun) log # of major merger per unit redshift z=1.4 \u2212 2.4 z=2.4 \u2212 3.7 polynomial fit Fig. 3.\u2014 shows the merger rate (=number of major mergers per unit redshift) as a function of galaxy stellar mass for galaxies at z = 1.4 (red dots) and z = 2.4 (green squares). Here a merger is more physically based de\ufb01nition, an event where two galaxies of the stellar mass ratio greater than 1/3 physically merge. at z = 1.4 (red dots) and z = 2.4 (green squares), respectively. We see that the actual merger rate is roughly constant at \u223c0.3 \u22120.5 per unit redshift for the stellar mass range Mstar = 1010.5 \u22121012 M\u2299. At Mstar > 1012 M\u2299there is hint for a signi\ufb01cant upturn in merger rate, albeit with less statistical certainty due to a small number of such massive galaxies in the simulation. Nevertheless, such an upturn would be consistent with the expectation that the central cD galaxies may experience more major mergers due to dynamical inspiral of satellites. This is also consistent with the apparent di\ufb00erence seen in Figure 3 between galaxies at z = 1.4 \u22122.4 (red) and galaxies at z = 2.4 \u22123.7 (green) in that the upturn is absent in the higher redshift range, because of the absence of large galaxies at that redshift range in the given simulation box. If the simulation box were large enough to contain cD-like galaxies at that higher-redshift range, we expect to see the same upturn. The downturn at Mstar < 1010.5 M\u2299of the merger rate is still more dramatic. We see a decrease of merger rate by a factor of \u223c10 from Mstar = 1010.5 M\u2299to Mstar = 109.5 M\u2299. This should be compared to about a factor of 1.2\u22121.7 drop seen in Figure 2 across the mass range. This shows that the vast majority of galaxies of mass Mstar \u22641010 M\u2299that are seen in close proximity (< 40 kpc) with other galaxies of comparable masses are in fact do not end up in binary major mergers. In Cen (2011b) we show that the simulation reproduces observed luminosity functions in the concerned redshift range, indicating that the simulation is \u201ccomplete\u201d down to about a galaxy stellar mass of \u223c109 M\u2299. Thus, the results for the range of galaxy stellar mass shown here is reliable. 
A plausible physical explanation for the sharp downturn at the low mass end may be that most satellite galaxies just zoom around and never merge with their fellow satellite galaxies; rather, they dynamically spiral in to merge with the primary galaxy or remain as satellites. A more detailed study focused on the demographics of mass accretion, including mergers, will be presented elsewhere. Here we present a third-order polynomial fit to the major merger rate, R, defined to be the number of major mergers per unit redshift: log R = 0.34 (log Mstar - 11)^3 - 0.21 (log Mstar - 11)^2 - 0.013 (log Mstar - 11) - 0.33, (1) shown as the solid black curve in Figure 3, where Mstar is in solar masses. Fig. 4: shows the fraction of galaxies that are in major mergers as a function of SFR (red histograms) at z = 1.4-2.4 (left panel) and z = 2.4-3.7 (right panel). The statistical errors are shown as green histograms. We use the observationally oriented definition of major mergers, i.e., pairs of stellar mass ratio greater than 1/3 and projected separation less than 40 kpc. Figure 4 shows the fraction of galaxies that are in major mergers as a function of SFR. Similar to Figure 2, the actual major merger fraction at the low SFR end shown is overestimated, given the observational definition used. The major merger rate at the high SFR end, at SFR >= 200 Msun/yr, is less affected, and the simulation shows that one should expect to see 10-40% of these high SFR galaxies to be in apparent major mergers. This fraction is consistent with the observed upper bound of 57% (8/14) for the submillimeter galaxy (SMG) sample of Tacconi et al. (2006) at z = 2-3.4 that shows a double-peaked profile in the CO 3-2/4-3 emission. Of this observed fraction of SMGs in major mergers, a part may be due to orbital motion of emitting gas in a disk configuration or some other configuration instead of major mergers. We predict that, when high spatial resolution becomes available with the upcoming ALMA mission, the fraction due to major mergers should be in the range 10-40%, if our model is correct. For star-forming galaxies of SFR <= 200 Msun/yr
The simple powerlaw \ufb01ts are quite good, in contrast to gaussian or exponential forms that are found to provide poor \ufb01ts. The found slope of \u22123/4 in the PDF suggests that the three-dimensional distribution around each star-forming galaxy of other galaxies of comparable SFR approximately follows a powerlaw of a slope of \u22122.75. Details of this and other related clustering issues of galaxies will be presented elsewhere. 0 5 10 20 30 40 0 100 200 rp (kpc) PDF SFR=10\u2212100 Msun/yr @z=1.4\u22122.4 statistical errors fit: PDF(rp)=1/(1+rp)3/4 0 5 10 20 30 40 0 5 10 15 20 rp (kpc) PDF SFR>100 Msun/yr @z=1.4\u22122.4 statistical errors fit: PDF(rp)=1/(1+rp)3/4 0 5 10 20 30 40 0 100 200 300 400 500 rp (kpc) PDF SFR=10\u2212100 Msun/yr @z=2.4\u22123.7 statistical errors fit: PDF(rp)=1/(1+rp)3/4 0 5 10 20 30 40 0 5 10 15 rp (kpc) PDF SFR>100 Msun/yr @z=2.4\u22123.7 statistical errors fit: PDF(rp)=1/(1+rp)3/4 Fig. 5.\u2014 shows the probability distribution functions (PDF) of the projected separation (rp) of major mergers at z = 1.4 \u22122.4 (two upper panels) and z = 2.4 \u22123.7 (two bottom panels), respectively. The left panels are for star-forming galaxies of SFR= 10\u2212100 M\u2299yr\u22121 and the right panels for star-forming galaxies of SFR> 100 M\u2299yr\u22121. The red histograms are the PDFs and the green histograms the statistical errors at each bin. The black curves show a power \ufb01t described by Eq 2. The top panel of Figure 6 shows the meean SFR as a function of galaxy stellar mass, separately, for galaxies that are in major mergers and galaxies that are not in major mergers. \f\u2013 10 \u2013 9 10 11 12 0 1 2 3 log Mstar (Msun) log SFR (Msun yr\u22121) non MM, z=1.4\u22122.4 MM, z=1.4\u22122.4 non MM, z=2.4\u22123.7 MM, z=2.4\u22123.7 non MM, z=1.4\u22122.4 fit MM, z=1.4\u22122.4 fit non MM, z=2.4\u22123.7 fit MM, z=2.4\u22123.7 fit 9 10 11 12 \u221210 0 10 20 30 40 50 60 70 log Mstar (Msun) % boost in SFR by MM z=1.4\u22122.4 z=2.4\u22123.7 Fig. 6.\u2014 Top panel: the mean SFR of galaxies at a given stellar mass for galaxies that are in major mergers (red solid dots) and not in major mergers (red open dots) at z = 1.4 \u22122.4. The corresponding ones at z = 2.4 \u22123.7 are shown in green squares. The errorbars show the dispersion around the mean. The thin and thick dashed curves are the best second-order polynomial \ufb01ts to the non major mergers and major mergers, respectively, at z = 1.4 \u22122.4. The thin and thick solid curves are the best second-order polynomial \ufb01ts to the non major mergers and major mergers, respectively, at z = 2.4 \u22123.7. We use the observationally oriented de\ufb01nition of major mergers, i.e., pairs of stellar mass ratio greater than 1/3 and separation less than 40kpc. Bottom panel: the ratio of \ufb01tted curves to the major merger and non-major-merger minus one for z = 1.4 \u22122.4 (red solid curve) and z = 2.4 \u22123.7 (green dashed curve), respectively. Visually the ratio of the \ufb01tted curves and the actual computed data points display comparable amplitudes. The ratio of \ufb01tted curves for the galaxies with major mergers and those without major mergers minus one are shown in the bottom panel for z = 1.4 \u22122.4 (red solid curve) and z = 2.4 \u22123.7 (green dashed curve). We see that major mergers appear to experience very modest boost in SFR for galaxies at z = 2.4\u22123.7, at about 10\u221225% for the entire stellar mass range Mstar = 109 \u22121012 M\u2299probed. 
The overall strength of the boost due to major mergers appear to increase with decreasing redshift, when one compares the values at z = 1.4 \u22122.4 to those at z = 2.4 \u22123.7, but remains at less than 60% across the entire mass range. It also appears that there may be a trend of a relatively larger boost of SFR due to major mergers for lower mass galaxies than for larger mass galaxies at z = 1.4 \u22122.4. But we caution that the results in the bottom panel are somewhat sensitive to the exact \ufb01ts; given that the \ufb01ts do not exactly reproduce all the data points, one should be careful to not take the exact curves of the \ufb01ts too literally. In any case, it is abundantly clear that we do not see very large increase in SFR of a factor of two orders of magnitude that are found in simulations of isolated major gas-rich spiral galaxy mergers (e.g., Mihos & Hernquist 1996). In Figure 6 the modest boost in SFR due to major mergers is computed using the \f\u2013 11 \u2013 \u22120.5 \u22120.4 \u22120.3 \u22120.2 \u22120.1 0 0.1 0.2 0.3 0.4 0.5 0 1 2 3 \u2206 z log SFR (Msun/yr) SFR=1\u22123.2; z=2.4\u22123.7 SFR=3.2\u221210; z=2.4\u22123.7 SFR=10\u221232; z=2.4\u22123.7 SFR=32\u2212100; z=2.4\u22123.7 SFR=100\u2212320; z=2.4\u22123.7 SFR=1\u22123.2; z=1.4\u22122.4 SFR=3.2\u221210; z=1.4\u22122.4 SFR=10\u221232; z=1.4\u22122.4 SFR=32\u2212100; z=1.4\u22122.4 SFR=100\u2212320; z=1.4\u22122.4 Fig. 7.\u2014 shows the history of the mean SFR as a function of time redshift \u2206z for \ufb01ve di\ufb00erent subsets of galaxies with SFR at \u2206z = 0.05 (i.e., prior to the merger event) equal to 1 \u22123.2, 3.2 \u221210, 10 \u221232 and 32 \u2212100, 100 \u2212320 M\u2299yr\u22121, respectively, separately for galaxies in the redshift range z = 1.4 \u22122.4 and z = 2.4 \u22123.7. Dispersions on the means are shown as well. observationally oriented de\ufb01nition of major mergers, i.e., pairs of stellar mass ratio greater than 1/3 and projected separation less than 40kpc. We now compute a similar quantity using the theoretical de\ufb01nition of major mergers where we identify the merger time as that when two galaxies are fully integrated into one with no identi\ufb01able separate stellar peaks. We follow the history of each galaxy and \u201cstack\u201d all major merger events centered at \u2206z = 0. Figure 7 shows the mean SFR history for galaxies at \ufb01ve given ranges of SFR, measured at \u2206z = 0.05 (using a di\ufb00erent redshift, say, \u2206z = 0.10 or 0.15, makes no material di\ufb00erence in the results). In a fashion that is consistent with the \ufb01ndings shown in Figure 6, we do not \ufb01nd any dramatic boost of SFR at the merger redshift and at |\u2206z| \u22640.5 for galaxies at SFR\u2264100 M\u2299yr\u22121 in the redshift range z = 1.4 \u22123.7. The 1\u03c3 dispersion about the mean is about 1.5 \u22123, roughly consistent with the range of SFR for each subset at \u2206z = 0.05, \f\u2013 12 \u2013 with a tendency that the dispersion is larger for lower SFR subsets. For the subset with the largest SFR (\u2265100 M\u2299yr\u22121), however, there is a visually noticeable jump in SFR by a factor of \u223c2 \u22125 from \u2206z > 0.2 to \u2206z < 0.2, hinting an intriguing possibility that a major merger event, not necessarily the \ufb01nal major merger moment, serves to \u201ctrigger\u201d a very high SFR event. In other words, it suggests that some very high SFR galaxies, such as ULIRGs or SMGs, may be initially triggered by some major merger events. 
At the same time results in Figure 7 also suggest that the very high SFR (\u2265100 M\u2299yr\u22121) galaxies remain at the elevated and upward trend for SFR following the merger event for a long period of time (\u2206z \u223c1) that is much longer than the typical merger time scale. This has profound implications for the nature of ULIRGs and SMGs that will be addressed elsewhere. 4. Physical Explanation of the Results Both external gravitational and internal gravitational and hydrodynamic torques may drive gas inward. Externally, the tidal \ufb01eld from a companion during a galaxy merger, major or minor, gives rise to a non-axisymmetric gravitational potential. This induces a response of the disk material (Toomre & Toomre 1972), in particular its cold gas, stronger for prograde mergers. More broadly, tidal \ufb01elds from interacting galaxies, which are not necessarily merging with one another, may drive gas inward. Internally, non-axisymmetric gravitational potentials, notably those sustained by stellar bars that are produced by secular evolution of su\ufb03ciently cold stellar disks under certain conditions or from other interactions, such as mergers, can also drive gas inward. A thorough study of torques due to gravitational and hydrodynamic processes to isolate the primary physical mechanisms governing the gas in\ufb02ows in a cosmological setting will be performed in a larger study. Here we provide some physical insight for the results found, relying mainly on circumstantial but strong evidence. Anecdotal evidence and visual examination of some galaxies suggest that chaotic gas in\ufb02ows often result in mis-alignments of newly formed stellar disks with previous stellar disk/non-spherical bulges, and the orbital planes of infalling satellite stellar or gas clumps do not always have a \ufb01xed orientation. These processes cumulatively may be thought to create stellar distributions in the central regions that are dynamically hot, which, in turn, provides conditions that are unfavorable to secular formation of stellar bars. We check if this indeed is the case. The left panel of Figure 8 shows axial ratios c/b versus b/a, where c < b < a are the semi-axes of an ellipsoid approximating the stellar distribution within re for galaxies with SFR \u226510 M\u2299yr\u22121 with major mergers (solid dots) and without (open circles) at z = 2. We see that the stellar distribution within re typically resembles an oblate spheroid with the half-height approximately equal to one half of that of the disk radius or more. For such hot stellar systems no barlike equilibria exist and no strong stellar bars would form \f\u2013 13 \u2013 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 b/a(stars) c/b(stars) C S B B B B B B B B B B B B B B B B F MM, z=2 non MM, z=2 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 b/a(SFR) c/b(SFR5) Fig. 8.\u2014 Left panel: shows axial ratios c/b versus b/a, where c < b < a are the semi-axes of an ellipsoid approximating the stellar distribution within re for galaxies with SFR \u226510 M\u2299yr\u22121 with major mergers (solid dots) and those without (open circles) at z = 2. The symbol size in both panels is linearly proportional to the logrithm of its SFR. Several special locations are indicated by special letters: \u201cB\u201d for thin bars of various thickness, \u201cS\u201d for sphere, \u201cC\u201d for \ufb02at circular disk and \u201cF\u201d for American football. 
Right panel: shows the same but for SFR density distribution within the radius of 50% SFR. secularly (e.g., Ostriker & Peebles 1973). Indeed, we do not \ufb01nd any instance of thin stellar bars that would occupy locations near the left y-axis; the one instance seen is in fact a close merging pair, which, when approximated as an ellipsoid by our code, shows up as a thin bar. The right panel of Figure 8 shows the same for SFR density, which shows that ongoing star formation in the central region for the majority of galaxies at z \u22651 takes place on a relatively thin disk of typical height-to-radius ratio of 0.1 \u22120.3, with some ratios reaching as low as 0.03, approaching our resolution limit of \u223c100pc. It is clear, however, the relatively thick stellar bulges seen in the left panel of Figure 8 are very well resolved and little a\ufb00ected by resolution e\ufb00ects. The number of stellar particles within re for mass in the range 109 \u22121012 M\u2299are typically N \u223c103.5 \u2212106.5 and the two-body relaxation time is roughly tr \u2248(N/50)tc (e.g., Steinmetz & White 1997), where tc is the orbital period at re. For a galaxy with Mstar = 1010 M\u2299(N \u223c104.2 within re) and re \u223c0.5kpc (see Figure 11 below) the relaxation heating time is estimated to be \u223c1 \u00d7 1010yr. A typical galaxy with Mstar = 1010 M\u2299corresponds to SFR \u223c10 M\u2299/yr at the relevant redshift range. Thus, we expect the two-body relaxation heating to be completely negligible for galaxies with SFR \u226510 M\u2299/yr. This shows that the dynamically hot state of the central stellar bulges of the simulated galaxies is unlikely caused by numerical e\ufb00ects. In the left panel of Figure 8 we do not see signi\ufb01cant di\ufb00erence between galaxies with major mergers and those without, indicating that major mergers do not appear to enhance formation of structures that resemble bars; this issue will be further examined below. In the absence of strong stellar bars, can signi\ufb01cant gas in\ufb02ows still exist? Figure 9 shows the gas depletion time in central regions at r < 1kpc (left panel) and over the entire galaxy within the virial radius (right panel) for all galaxies with SFR \u226510 M\u2299yr\u22121. The right panel indicates that the gas depletion time over the entire galaxy is longer than its \f\u2013 14 \u2013 9 10 11 12 5 6 7 8 9 10 log Mstar (Msun) log gas depletion time (r<1kpc) MM with rp<3kpc, z=2 MM with rp>3kpc, z=2 non MM, z=2 9 10 11 12 5 6 7 8 9 10 log Mstar (Msun) log gas depletion time (r3kpc, z=2 non MM, z=2 depletion time equal to tH depletion time equal to tdyn at rv Fig. 9.\u2014 Left panel: gas depletion time within the central 1kpc region of galaxies at z = 2 with SFR \u2265 10 M\u2299yr\u22121. Three types of galaxies are shown using di\ufb00erent symbols: galaxies that are not undergoing major mergers are open circles, galaxies in major mergers with projected separation between the two galaxies less than 3kpc as squares and greater than 3kpc as triangles. Right panel: gas depletion time over the entire galaxy within the virial radius. The symbol size is linearly proportional to the logrithm of SFR. The thin and thick horizontal lines correspond to the Hubble time and dynamical time at virial radius at z = 2, respectively. Here the observational de\ufb01nition of major mergers is used. dynamic time and comparable to the Hubble time. The gas depletion time in the central 1kpc region, however, is shorter at \u2264100Myrs. 
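The depletion times plotted in Figure 9 are just gas mass divided by SFR evaluated in two apertures. A minimal sketch of that bookkeeping follows; the galaxy numbers and the reference timescales at z = 2 are rough illustrative values, not simulation output, and the dynamical time in particular depends on how it is defined.

```python
# Gas depletion time t_dep = M_gas / SFR, evaluated in the central 1 kpc
# and over the whole halo, compared to reference timescales at z = 2.
YR_PER_GYR = 1.0e9

def depletion_time_gyr(m_gas_msun, sfr_msun_per_yr):
    return m_gas_msun / sfr_msun_per_yr / YR_PER_GYR

# Illustrative galaxy (placeholder numbers, not simulation output):
m_gas_central = 5.0e8    # Msun of gas inside r < 1 kpc
m_gas_halo    = 5.0e10   # Msun of gas inside the virial radius
sfr           = 20.0     # Msun/yr

t_dep_central = depletion_time_gyr(m_gas_central, sfr)
t_dep_halo    = depletion_time_gyr(m_gas_halo, sfr)

t_hubble_z2 = 3.3    # Gyr, roughly the age of the universe at z = 2
t_dyn_virial = 0.5   # Gyr, order-of-magnitude dynamical time at the virial radius at z = 2

print(f"central (<1 kpc) depletion time: {t_dep_central * 1e3:.0f} Myr")
print(f"whole-halo depletion time:       {t_dep_halo:.1f} Gyr")
print(f"compare: t_H(z=2) ~ {t_hubble_z2} Gyr, t_dyn(r_vir) ~ {t_dyn_virial} Gyr")
# The central reservoir empties in ~10-100 Myr while star formation lasts for
# ~Gyr, the disparity used above to argue for intermittent, clumpy refuelling.
```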
The gas depletion time in the central region spans a wide range, 0.1 \u2212100Myrs, and there is no discernible di\ufb00erence between galaxies in major mergers (solid symbols) and those that are not (open symbols). Furthermore, there is no visible dependence of the depletion time in the central region on the separation of the two merging galaxies for those that are in major mergers. Examination of SF histories of individual galaxies indicate that the SFR are relatively steady and their durations are on the order of Hubble time, i.e., much longer than the gas depletion time scales of the central regions but comparable to the gas depletion time scales within the virial radii shown in Figure 9 (Figure 6 shows that for galaxies with mergers within \u2206z = 0.5). This suggests that, irrespective of being in major mergers or not, gas in\ufb02ows to the central regions appear to be ubiquitous; in other words, galaxies that are not in major mergers appear to be able to channel a su\ufb03cient amount of gas to fuel the star formation on time scales that are much longer than the gas depletion times in the central regions. The disparity in the gas depletion time scales of the central regions and between those and the overall star formation durations strongly imply that gas in\ufb02ows, in general, are not smooth but in the form of clumps falling \f\u2013 15 \u2013 9 10 11 12 \u22122 \u22121 0 1 log Mstar (Msun) [Z/H] (gas) (<1kpc) MM with rp<3kpc, z=2 MM with rp>3kpc, z=2 non MM, z=2 \u22120.5 0 0.5 \u22122 \u22121 0 1 [Z/H] (gas) (<3kpc) [Z/H] (gas) (<1kpc) MM with rp<3kpc, z=2 MM with rp>3kpc, z=2 non MM, z=2 Z(<1kpc)=Z(<3kpc) Fig. 10.\u2014 Left panel: the mean gas metallicity within the central 1kpc region as a function of galaxy stellar mass for galaxies with SFR \u226510 M\u2299yr\u22121. Three types of galaxies are shown using di\ufb00erent symbols: galaxies that are not undergoing major mergers are open circles, galaxies in major mergers with projected separation between the two galaxies less than 3kpc as squares and greater than 3kpc as triangles. Right panel: the mean gas metallicity within the central 1kpc region as a function of the mean gas metallicity within the central 3kpc region galaxy stellar mass for galaxies with SFR \u226510 M\u2299yr\u22121. The symbol size in both panels is linearly proportional to the logrithm of its SFR. in intermittently. To further demonstrate that gas in\ufb02ows towards the central regions are generally not caused by central non-spherical gravitational perturbations, the left panel of Figure 10 shows the mean gas metallicity in the central 1kpc region and the right panel shows the mean gas metallicity in the central 1kpc region as a function of the mean gas metallicity in the central 3kpc region, comparing galaxies with and without major mergers. From both panels we see that there is no visible di\ufb00erence in the metallicity of gas in the central regions between galaxies that are in major mergers and those that are not. It is seen that there is a relatively large span of mean gas metallicity in the central 1kpc region, from \u223c\u22121.5 to \u223c0.5 for both types of galaxies, while the range shrinks to about \u22120.5 to 0.5 within 3kpc for both types of galaxies. If non-spherical gravitational perturbations in the central regions were responsible for driving gas inward, they would be most e\ufb00ective for the gas in the immediate neighborhood. 
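The aperture comparison set up here, the mean gas metallicity inside 1 kpc versus inside 3 kpc, is a simple weighted average over gas cells. A sketch with one plausible (gas-mass) weighting is given below; the cell arrays are hypothetical and the helper name is not from the paper.

```python
import numpy as np

def mean_gas_metallicity(r_kpc, m_gas_msun, z_gas_solar, r_aperture_kpc):
    """Gas-mass-weighted mean metallicity (solar units) inside a given radius."""
    inside = np.asarray(r_kpc) < r_aperture_kpc
    return np.average(np.asarray(z_gas_solar)[inside],
                      weights=np.asarray(m_gas_msun)[inside])

# Toy gas cells with a metal-poor component mixed in (illustrative only):
rng = np.random.default_rng(7)
r = rng.uniform(0.0, 5.0, 4000)              # kpc
m = rng.uniform(1e4, 1e5, 4000)              # Msun per cell
z = 10 ** rng.normal(-0.2, 0.3, 4000)        # solar units
z[rng.random(4000) < 0.2] = 10 ** -1.2       # stand-in for "parachuted-in" metal-poor gas

z1 = mean_gas_metallicity(r, m, z, 1.0)
z3 = mean_gas_metallicity(r, m, z, 3.0)
print(f"[Z/H](<1 kpc) = {np.log10(z1):+.2f}, [Z/H](<3 kpc) = {np.log10(z3):+.2f}")
```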
Consequently, if the central 1kpc region were just fed by gas driven inward from the immediate surroundings by internal non-spherical gravitational perturbations within, one would expect to see a higher gas metallicity in the central 1kpc than in the central 3kpc, since star formation rate is super-linear on gas density (the SchmidtKennicutt law) hence SFR density stronger in the 1kpc central region than in the 3kpc central region per unit gas. This expectation is not universally borne out for all galaxies in the simulation; on the contrary, the majority of galaxies lie below the Z(< 1kpc) = Z(< 3kpc) line, and there exists low mean metallicity (Z < \u22120.5) gas in the central 1kpc that is not seen in the mean metallicity within the central 3kpc. This is unambiguous evidence that a \f\u2013 16 \u2013 signi\ufb01cant amount of gas in\ufb02ow is directly \u201cparachuted in\u201d (e.g., dynamical friction inspiral of gas clumps with or without dark matter halos, or infalling satellites on nearly radial orbits) or \u201cchannelled in\u201d (e.g., clumpy cold streams) from large scales, not smooth gas from regions that immediately surround it, at least for a large fraction of galaxies. This is consistent with the implied intermittency of fueling seen in Figure 9. In any event, the results indicate that major mergers do not appear to form a distinct set of galaxies with respective to gas metallicity in the central regions. 10 10 10 11 10 12 0.1 0.2 0.3 0.5 1 2 3 5 7 10 Mstar (Msun) re (kpc) MM, z=2.0 non MM, z=2.0 obs z=1.5\u22122.5 (Buitrago et al 2008) 10 10 10 11 10 12 50 70 100 200 300 500 700 1000 Mstar (Msun) me (km/s) MM, z=2.0 non MM, z=2.0 van Dokkum et al 2009, z=2.2 van de Sande et al 2011, z=1.8 Cappellari et al 2009, z~1.7 Onodera et al 2010, z=1.8 Fig. 11.\u2014 Left panel: the e\ufb00ective radii of galaxies in restframe V band (observed H band) versus the stellar masses for galaxies with major mergers (solid dots) and those without (open circles) at z = 2. Also shown as solid diamonds are the observations of Buitrago et al. (2008) for the subset of galaxies at z = 1.5 \u22122.5 observed in H band. The symbol size in the left panel is linearly proportional to the logrithm of SFR. Right panel: the relation between velocity dispersion (y axis) and dynamical mass (x axis) at re. The black diamond, star, square and triangle symbols with cross errorbars are the observational data for galaxies in the range range z = 17 \u22122.2 from van Dokkum et al. (2009), van de Sande et al. (2011), Cappellari et al. (2009) and Onodera et al. (2010), respectively. The symbol size in the right panel is linearly proportional to the SFR. Mihos & Hernquist (1996) show that galaxy structure plays a dominant role in regulating gas in\ufb02ows, which they \ufb01nd are generally driven by gravitational torques from the host galaxy, rather than the companion, in their major merger simulations. The lack of any signi\ufb01cant merger induced e\ufb00ects appear at odds with their simulations at \ufb01rst instance. We attribute the di\ufb00erence primarily to the di\ufb00erence in the physical properties of galaxies between merger simulations and those in present cosmological simulation at z > 1. Speci\ufb01cally, as we will show shortly, most of galaxies in our simulation appear to have massive stellar bulges, whereas merger simulations with dramatic in\ufb02ows seen during mergers start with pre-merger disk galaxies without massive stellar bulges. 
In fact, a subset of simulations by Mihos & Hernquist (1996) in which the pre-merger galaxies have massive stellar bulges has already provided insight into the above apparent discrepancy: they note that dense bulges act to stabilize galaxies against bar modes and show much diminished inflow enhancement. In the left panel of Figure 11 we show the effective stellar radii in restframe V band of galaxies with SFR ≥ 10 M⊙ yr−1 at z = 2, compared to observed galaxies also in restframe V band (observed H band). We see that the effective radii of most simulated galaxies at z = 2 are in the range of 0.5−2 kpc for galaxies of stellar mass ≥ 10^11 M⊙, consistent with previous results (Joung et al. 2009; Naab et al. 2009), and are in reasonable agreement with observations. No dust obscuration is applied in the calculation, so the computed radii are likely lower limits; had we taken dust obscuration into account, we expect the agreement would be still better. The right panel of Figure 11 shows the 1-d velocity dispersion at the effective stellar radius as a function of stellar mass, and we find that within the uncertainties the simulation results are in agreement with the observations, indicative of the self-consistency of the simulation results. The observed high value of central velocity dispersion (van Dokkum et al. 2009) was somewhat surprising initially, based on an extrapolation of local elliptical galaxy properties, but additional observations have since confirmed the earlier discovery, and our simulations indicate that this is in fact in line with the theoretical expectation based on the cold dark matter model. There is one exception (Onodera et al. 2010) that shows a lower central velocity dispersion; our current statistics are insufficient to gauge this against our model one way or another. Although the simulation results and observations are statistically consistent with one another, enlarging both the simulation size and the observed galaxy sample may provide very useful constraints on the physical processes that govern the formation of the bulges. If pressed, one might be inclined to conclude that there is a slight hint that the simulated galaxies are slightly smaller than the handful of observed galaxies, although the observed ones overlap and are statistically consistent with the simulated range in terms of velocity dispersion at a fixed stellar mass. Nonetheless, three effects may have caused a slight overestimation of the velocity dispersions of simulated galaxies. First, no dust obscuration effect is taken into account. Second, no observational beam smearing effect is taken into account. Third, the simulated galaxies at a fixed stellar mass have a range in SFR, whereas the observed galaxies shown are thought to be quiescent; one might think that gas loss from aging or dying stars acts in the direction of enlarging stellar cores in an aging stellar population, due to adiabatic expansion related to mass loss that is known to be substantial. To be prudent and conservative, we have purposely plotted the symbol size in the right panel of Figure 11 to be linearly proportional to the SFR, to see if there is a noticeable trend in SFR with core size/velocity dispersion. We see one case, the green solid dot at (1.3 × 10^11 M⊙, 150 km/s), that has an SFR that may be a factor of a few higher than those of typical galaxies at around that mass.
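The two quantities compared with observations in Figure 11 can be estimated directly from a galaxy's stellar particles. The sketch below is one plausible implementation, not the authors' pipeline: it uses the projected half-mass radius as a stand-in for the V-band effective radius and the mass-weighted line-of-sight velocity dispersion of particles inside that radius; the toy particle distribution is illustrative only.

```python
import numpy as np

def effective_radius_and_sigma(x_kpc, y_kpc, vz_kms, mass_msun):
    """Projected half-mass radius (proxy for r_e) and 1-d velocity dispersion
    of the particles projected within that radius."""
    r_proj = np.hypot(x_kpc, y_kpc)
    order = np.argsort(r_proj)
    cum_mass = np.cumsum(mass_msun[order])
    r_e = r_proj[order][np.searchsorted(cum_mass, 0.5 * cum_mass[-1])]
    inside = r_proj <= r_e
    w = mass_msun[inside]
    vbar = np.average(vz_kms[inside], weights=w)
    sigma = np.sqrt(np.average((vz_kms[inside] - vbar) ** 2, weights=w))
    return r_e, sigma

# Toy compact, bulge-like particle distribution:
rng = np.random.default_rng(1)
n = 20000
pos = rng.normal(scale=0.7, size=(n, 3))      # kpc
vel = rng.normal(scale=250.0, size=(n, 3))    # km/s
mass = np.full(n, 1.0e6)                      # Msun per star particle

r_e, sigma_e = effective_radius_and_sigma(pos[:, 0], pos[:, 1], vel[:, 2], mass)
print(f"r_e ~ {r_e:.2f} kpc, sigma_e ~ {sigma_e:.0f} km/s")
```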
However, we also see galaxies to have higher SFR even though having much higher velocity dispersions, with or without major mergers. In any case, it appears that some of the noticeably high SFR galaxies are consistent with being randomly distributed with respect to \u03c3e. Thus, we conclude that there is no dramatic trend of SFR with \f\u2013 18 \u2013 respect to \u03c3e at a \ufb01xed stellar mass in the range of \u03c3e that overlaps with observed values, save the one noted exceptions that is presently di\ufb03cult to gauge statistically. This check suggests that our results are not hinged on our modeling of the size of the central stellar bulges being perfectly correct and are thus robust to possible small variations. Taking the evidence presented in the preceding four \ufb01gures together a consistent physical picture emerges: \u2022 Gravitational or hydrodynamic torques stemming from scales larger than the central regions containing most of the stars in the primary galaxy may play a fundamental role in transporting the necessary amount of gas to fuel the star formation in the central regions. \u2022 A large portion of often metal-poor gas from large scales is directly transported into the central regions, possibly in the form of dynamical friction inspiraling gas clumps, infalling satellites on nearly radial orbits, or clumpy cold streams from large scales in an intermittent fashion. \u2022 Signi\ufb01cant gas in\ufb02ows, not necessarily requiring major mergers, allow for formation of dense, compact, not-so-\ufb02at stellar bulges that are stable to bar formation. \u2022 Major mergers of galaxies, most of which have dense bulges, do not dramatically enhance gas in\ufb02ows and SFR or cause signi\ufb01cant di\ufb00erences in gas properties in the central regions for galaxies at z \u22651, in accord with earlier major mergers simulations of disk galaxies with massive bulges. 5." + }, + { + "url": "http://arxiv.org/abs/1110.5645v1", + "title": "Far-Infrared Properties of Lyman Break Galaxies from Cosmological Simulations", + "abstract": "Utilizing state-of-the-art, adaptive mesh-refinement cosmological\nhydrodynamic simulations with ultra-high resolution (114h-1pc) and large sample\nsize (>3300 galaxies of stellar mass >10^9Msun), we show how the stellar light\nof Lyman Break Galaxies at z=2 is distributed between optical/ultra-violet (UV)\nand far-infrared (FIR) bands. With a single scalar parameter for dust\nobscuration we can simultaneously reproduce the observed UV luminosity function\nfor the entire range (3-100 Msun/yr) and extant FIR luminosity function at the\nbright end (>20Msun/yr). We quantify that galaxies more massive or having\nhigher SFR tend to have larger amounts of dust obscuration mostly due to a\ntrend in column density and in a minor part due to a mass (or SFR)-metallicity\nrelation. It is predicted that the FIR luminosity function in the range\nSFR=1-100Msun/yr is a powerlaw with a slope about -1.7. We further predict that\nthere is a \"galaxy desert\" at SFR(FIR) < 0.02 (SFR(UV)/10Msun/yr)^2.1 Msun/yr\nin the SFR(UV)-SFR(FIR) plane. Detailed distributions of SFR(FIR) at a fixed\nSFR(UV) are presented. Upcoming observations by ALMA should test this model. 
If\nconfirmed, it validates the predictions of the standard cold dark matter model\nand has important implications on the intrinsic SFR function of galaxies at\nhigh redshift.", + "authors": "Renyue Cen", + "published": "2011-10-25", + "updated": "2011-10-25", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction The precise relation between optical/UV light detected and dust emission in the far infrared (FIR) of Lyman Break Galaxies (LBGs; Steidel et al. 2003) is di\ufb03cult to establish observationally, because of the faintness of the expected FIR luminosity (e.g., Ouchi et al. 1999; Adelberger & Steidel 2000). In this work we study this relation using direct simulations of galaxy formation in the standard cosmological constant-dominated cold dark matter model (LCDM; Komatsu et al. 2010) in light of the capabilities of the upcoming Atacama Large 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu \f\u2013 2 \u2013 Millimeter Array (ALMA) mission. The outline of this paper is as follows. In \u00a72 we detail our simulations, method of making galaxy catalogs and a dust obscuration analysis method. Results are presented in \u00a73, followed by conclusions given in \u00a74. 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the adaptive mesh re\ufb01nement (AMR) Eulerian hydro code, Enzo (Bryan & Norman 1999; Joung et al. 2009). First we ran a low resolution simulation with a periodic box of 120 h\u22121Mpc on a side. We identi\ufb01ed a region centered on a cluster of mass of \u223c2 \u00d7 1014 M\u2299at z = 0 and then resimulate it with high resolution, embedded in the outer 120h\u22121Mpc box. The re\ufb01ned region for \u201cC\u201d run has a size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 and represents 1.8\u03c3 \ufb02uctuation on that volume. The dark matter particle mass in the re\ufb01ned region is 1.3 \u00d7 107h\u22121 M\u2299. The re\ufb01ned region is surrounded by three layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle mass 84 times that in the re\ufb01ned region. We choose the mesh re\ufb01nement criterion such that the resolution is always better than 114h\u22121pc physical, corresponding to a maximum mesh re\ufb01nement level of 13 at z = 0. The simulations include a metagalactic UV background (Haardt & Madau 1996), a model for shielding of UV radiation by neutral hydrogen (Cen et al. 2005), metallicity-dependent radiative cooling (Cen et al. 1995) extended down to 10 K (Dalgarno & McCray 1972) and all relevant gas chemistry chains for molecular hydrogen formation (Abel et al. 1997), including molecular formation on dust grains (Joung et al. 2009). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Supernova feedback from star formation is modeled following Cen et al. (2005). We allow the entire feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating. See Cen (2010) for all other simulation details and physical treatments. We use the following cosmological parameters that are consistent with the WMAP7-normalized (Komatsu et al. 
2010) LCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. 2.2. Simulated Galaxy Catalogs We identify galaxies in our high resolution simulations using the HOP algorithm (Eisenstein & Hu 1999), operated on the stellar particles, which is tested to be robust. Satellites within a galaxy are clearly identi\ufb01ed separately. The luminosity of each stellar particle at each of the Sloan \f\u2013 3 \u2013 Digital Sky Survey (SDSS) \ufb01ve bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. Collecting luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, star formation rate, luminosities in \ufb01ve SDSS bands (and various colors) and others. At a spatial resolution of 109pc with nearly 5000 well resolved galaxies at z = 2, this simulated galaxy catalog presents an excellent tool to study galaxy formation and evolution. 2.3. Modeling Dust Obscuration A fully self-consistent modeling would be di\ufb03cult, given our lack of knowledge of the distribution of dust and its properties. Here we take a simpli\ufb01ed approach. Given the 3-d distribution of gas with varying metallicity and stellar particles distributed within it, the observed SFR at a rest-frame UV wavelength \u03bb for the galaxy is computed as SFRUV,\u03bb = X i sfri(1 \u2212e\u2212\u03c4\u03bb(\u20d7 ri\u2192obs)), (1) where \u03c4\u03bb(\u20d7 r \u2192obs) is the extinction optical depth at some UV wavelength \u03bb for an individual stellar particle i of star formation rate sfri in the galaxy from its individual location \u20d7 ri to the observer: \u03c4\u03bb(\u20d7 r \u2192obs) = (A\u2032 V /1.086)f\u03b2\u03bb \u00af Zi(\u20d7 r \u2192obs)NH,i(\u20d7 r \u2192obs), (2) where A\u2032 V = 5.3 \u00d7 10\u221222 is visual extinction AV per unit hydrogen column density per unit solar metallicity for RV = 3.1 (Draine 2011) and \u03b2\u03bb \u2261A\u03bb/AV (a \ufb01tting function) is taken from Cardelli et al. (1989); \u00af Zi(\u20d7 r \u2192obs) is the column density-weighted mean metallicity of gas obscuring the stellar particle i in solar units and NH,i(\u20d7 r \u2192obs) is the integrated hydrogen column density from the stellar particle i to the observer. Note that in Equation (1) the calculation is based on 3-d distributions of stellar particles that each are subject to their own integrated optical depth and the sum is over all the memeber stellar particles, typically of number 105 \u2212106 for a galaxy of stellar mass 1011 M\u2299. In Equation (2) f is a dimensionless parameter that we will adjust such that the simulated LBG UV luminosity function matches observations; f should be of order unity, if dust properties for galaxies at z \u223c2 are not drastically di\ufb00erent from those derived locally and our galaxy formation model is realistic. As we will see below, the required value of f is indeed close to unity with an adopted extinction law that is also close to those derived locally. Thus, the dust extinction of SFR at a speci\ufb01c UV band is a good proxy of the overall extinction of SFR in the optical-to-UV regime. We will use the 1700\u02da A band for subsequent analysis. 
The portion of the SFR that \f\u2013 4 \u2013 does not escape in UV/optical is assumed to be converted to FIR SFR: SFRFIR = X i sfri \u2212SFRUV,\u03bb. (3) For each galaxy we place 95 random observers in its sky at in\ufb01nity for results presented in the next section. This sampling is adequate and results are converged statistically. 3. Results 0 0.5 1 1.5 2 2.5 3 \u22125 \u22124 \u22123 \u22122 \u22121 log SFR (Msun/yr) log n(>SFR) (Mpc\u22123) z=2.0 all galaxies z=2.0 UV\u2212selected z=2.0 FIR\u2212selected powerlaw slope \u22120.7 z=1.9\u22122.7 UV obs (Reddy & Steidel 2009) z=2 LIRG and ULIRG obs (Caputi etal 2007) 0 0.5 1 1.5 2 2.5 3 \u22125 \u22124 \u22123 \u22122 \u22121 log SFR (Msun/yr) log n(>SFR) (Mpc\u22123) all galaxies 8 subboxes 0 0.5 1 1.5 2 2.5 3 \u22122 \u22121 0 log SFR (Msun/yr) log SFR density (>SFR) (Msun yr\u22121Mpc\u22123) obs (Hopkins & Beacoms 2007) Fig. 1.\u2014 Top panel: cumulative total SFR function at z = 2 (red circles), cumulative UV and FIR SFR functions in blue squares and magenta stars, respectively. Black diamonds are LBG observations z = 1.9 \u22122.7 from Reddy & Steidel (2009); two green triangles are LIRG and ULIRG observational data from Caputi et al. (2007). We convert to SFR of observational data from MAB(1700\u02da A) using the standard conversion formula, SFR = 6.1 \u00d7 10\u2212[8+0.4MAB(1700\u02da A)] M\u2299/yr (Kennicutt 1998) in the AB magnitude system (Oke 1974). Solid magenta line indicates a powerlaw slope of \u22120.7 (corresponding to a slope of \u22121.7 for the di\ufb00erential SFRFIR function). Thin solid black line indicates a powerlaw slope of \u22121 (corresponding to a slope of \u22122 for the di\ufb00erential function). The three thin curves of color red, blue and magenta, respectively, correspond their thick counterparts but from a lower resolution simulation with four times poorer spatial and eight times poorer mass resolutions. Bottom left panel: the eight green curves represent the cumulative total SFR function in the eight octant volumes; the average of the green curves is the redshift circles (also shown in the top panel). Bottom right panel: Cumulative light densities for total (red circles), UV galaxies (blue squares) and FIR galaxies (magenta stars), respectively, at z = 2. Also show as a black diamond is the observed data at z = 2 compiled by Hopkins & Beacom (2006) with 1\u03c3 errorbar. The top panel of Figure 1 shows the SFR functions for total SFR, UV and FIR selected galaxies, respectively. We have adjusted the parameter f in Equation 2 to be f = 1.4 to \f\u2013 5 \u2013 arrive at the excellent match between the computed UV SFR function and the observations at z \u223c2. We note that f could be 1 if one had adopted a slightly di\ufb00erent Rv (Cardelli et al. 1989). In any case, the results with f = 1 with Rv = 3.1 di\ufb00er only slightly from the case with f = 1.4 shown here and UV SFR function in that case is consistent with the observations within the errorbars. This also suggests that our overall results are robust and insensitive to small variations of uncertain parameters for the dust model within a reasonable range. It also implies that dust properties at z \u223c2 are not signi\ufb01cantly di\ufb00erent from those of local dust. After matching the observed UV SFR function, we see that the predicted FIR SFR function agrees remarkably well with the observed LIRG and ULIRG data at z = 2 (top panel). 
As a consistency check, we show in the bottom right panel of Figure 1 the cumulative SFR density at z = 2. Here we see both the UV SFR density and FIR SFR density agree well with observations. We see that, while the directly observed UV SFR density should be roughly equal to the directly observed FIR SFR density, at face value, the UV SFR density is somewhat higher than FIR SFR density. Our results suggest that galaxies with higher SFR tend to have relatively larger obscuration in UV/optical than galaxies with lower SFR, resulting in a steepening UV luminosity function at the luminous end. The underlying cause will be discussed in Figure 4. Our previous studies (Cen 2010, 2011) indicate that the \u201cC\u201d run used is positively biased over the cosmic mean by a factor of \u223c2. Taking that into account, we \ufb01nd that the simulated SFR function as well SFR density becomes too low compared to observed ones. A plausible adjustment is to the stellar IMF. The results shown above uses an top-heavy IMF that produces twice the UV light output per unit SFR than the standard Salpeter function. This provides intriguing evidence for top-heavy IMF at high redshift, consistent with other independent considerations (e.g., Baugh et al. 2005; Dav\u00b4 e 2008; van Dokkum 2008). The abundance of massive, rare objects is expected to depend on box size as well as the overdensity of the environment. We assess this e\ufb00ect as follows. We divide the simulation box into eight equal-volume octants and compute the SFR function for each of the eight octants. The results for the eight SFR functions are shown as green curves in the bottom left panel of Figure 1. We note two points here. First, at SFR \u226430 M\u2299/yr the SFR function is very well converged and does not appear to sensitively depend on environment. Second, substantial variations are visible at SFR \u2265100 M\u2299/yr, which suggests that the abundance of galaxies with SFR higher than 100 M\u2299/yr depends sensitively on density environment and our positively biased simulation box likely has produced some over-abundance of galaxies with SFR \u2265100 M\u2299/yr relative to galaxies with SFR \u2264100 M\u2299/yr; the computed UV SFR at SFR \u2265100 M\u2299/yr in this simulation lies above the observed points is thus not inconsistent. A comparison between the thin and thick curves in the top panel of Figure 1 indicates that the resolutions achieved in the higher resolution run is required in order to provide an \f\u2013 6 \u2013 adequate match to observations. The lower (four times spatially and eight times in mass) resolution simulation of the same volume su\ufb00ers from the two shortcomings. First, there is a slight overproduction of the highest SFR (\u2265200 M\u2299yr\u22121) galaxies in the lower resolution simulation, which is due to a combination of slight overmerging and higher gas reservoir in the lower resolution run. Second, there is a signi\ufb01cant underproduction of lower SFR (\u2264200 M\u2299yr\u22121) galaxies in the lower resolution simulation due to lower resolution. Taking into account these two e\ufb00ects, the results can be understood and our main predictions on the faint slope and galaxy desert (see below) remain robust. Our model makes several predictions. The \ufb01rst is that the di\ufb00erential FIR SFR function displays a nearly perfect powerlaw of slope about \u22121.7 below SFRFIR \u223c100 M\u2299/yr at z = 2. 
We attribute this outcome to a combination of three physical factors: (1) the intrinsic di\ufb00erential SFR function is steeper than than \u22121.7 but close to \u22122, as indicated by the thin black line in the top panel of Figure 1; (2) on averagge, higher SFR galaxies have higher dust optical depth (as discussed in detail in Figure 4 below) that tends to \ufb02atten the FIR SFR function; (3) there is a signi\ufb01cant dispersion of SFRFIR at a \ufb01xed intrinsic SFR (see Figure 2 below) that also smoothes and \ufb02attens the FIR SFR function. This predictions can be tested by ALMA observations, and if con\ufb01rmed, will provide evidence that the intrinsic SFR function is close to a powerlaw with a slope that is steeper than \u22121.7 in the SFR range 10 \u2212300 M\u2299yr\u22121. We attribute this behavior to a large dispersion of SFR at a \ufb01xed halo mass but will address it in more detail separately. Given this slope, most of the FIR light is concentrated at the bright end. In terms of cumulative galaxy number density we \ufb01nd that UV and FIR selected samples are expected to have comparable abundances at SFR \u226520 \u221240 M\u2299/yr. In terms of cumulative SFR density we \ufb01nd that FIR selected galaxies with SFR \u226510 M\u2299/yr dominate over UV selected galaxies with SFR \u226510 M\u2299/yr; the reverse is true at SFR < 10 M\u2299/yr. Reading directly from simulations we \ufb01nd that FIR selected galaxies with FIR SFR \u226510 M\u2299/yr contain 78% of total FIR light density, whereas UV selected galaxies with UV SFR \u226510 M\u2299/yr contain only 50% (the actual number may be still lower, since our simulations likely have underestimated the number density of galaxies below SFR \u22643 M\u2299/yr, below which a \ufb02attening of the UV SFR function is seen in the top panel of Figure 1). Note that while a Schechter function normally \ufb01ts halo functions well, it does not provide an adequate \ufb01t to the FIR SFR function, due to large dispersions of SFR at \ufb01xed halo masses mentioned above. Our results suggest that the observed UV-selected LBGs detected at SFR \u2265a few M\u2299/yr at z = 2 \u22123 can account for the bulk of the FIR background at z \u223c2 \u22123, consistent with earlier independent observational assessments (e.g., Smail et al. 1999; Adelberger & Steidel 2000; Chapman & Casey 2009). Needless to say, our model implies that UV and FIR selected galaxies form a complementary pair of populations that are drawn from the same underlying general galaxy \f\u2013 7 \u2013 population. This point has been noted by others (e.g., Sawicki & Yee 1998; Meurer et al. 1999; Shapley et al. 2001; Papovich et al. 2001; Calzetti 2001). 0 1 2 3 \u22121 0 1 2 3 log SFRUV (Msun/yr) log SFRFIR (Msun/yr) Galaxy Desert SFRFIR=0.02 (SFRUV/10)2.1 Fig. 2.\u2014 each dot is a galaxy in the plane of UV and FIR detected SFR at z = 2. The solid line is SFRFIR = 0.02[SFRUV/10 M\u2299yr\u22121]2.1 M\u2299yr\u22121. Figure 2 shows a scatter plot of galaxies in the SFRUV \u2212SFRFIR plane. We see a nearly complete empty space at the lower right corner of the plot, with SFRFIR < 0.02[SFRUV/10 M\u2299yr\u22121]2.1 M\u2299yr\u22121, which we shall call the \u201cgalaxy desert\u201d. The physical reason for this nearly complete absence of galaxies with high UV SFR and low FIR SFR rate is that the dust optical depth of galaxies increases with SFR. This second prediction of our model should be testable by ALMA observations. 
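The "galaxy desert" boundary is a one-line cut in the SFR_UV-SFR_FIR plane; a small helper for flagging objects below it is sketched here (the formula is from the text, the example values are illustrative).

```python
import numpy as np

def in_galaxy_desert(sfr_uv, sfr_fir):
    """True where SFR_FIR < 0.02 * (SFR_UV / 10 Msun/yr)**2.1, i.e. the region
    of the SFR_UV-SFR_FIR plane predicted to be essentially empty."""
    boundary = 0.02 * (np.asarray(sfr_uv) / 10.0) ** 2.1
    return np.asarray(sfr_fir) < boundary

# The prediction is that ALMA should find (almost) no sources where this is True.
sfr_uv = np.array([3.0, 10.0, 30.0, 100.0])
sfr_fir = np.array([0.5, 0.01, 0.5, 0.5])
print(in_galaxy_desert(sfr_uv, sfr_fir))   # -> [False  True False  True]
```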
Figure 3 dissects the information contained in Figure 2 further and shows a set of Table 1. Parameters for gaussians in Figure 3 with log SFRUV being the variable SFRUV( M\u2299/yr) mean dispersion 3-10 -0.84 1.0 10-30 0.030 0.98 30-100 0.85 0.76 100-300 1.7 0.72 \f\u2013 8 \u2013 \u22124 \u22123 \u22122 \u22121 0 1 2 0 0.2 0.4 log SFR(FIR) PDF \u22123 \u22122 \u22121 0 1 2 3 0 0.2 0.4 0.6 log SFR(FIR) PDF \u22122 \u22121 0 1 2 3 0 0.2 0.4 0.6 log SFR(FIR) PDF 0 1 2 3 0 0.2 0.4 0.6 0.8 1 1.2 1.4 log SFR(FIR) PDF SFR(UV)=3\u221210 SFR(UV)=10\u221230 SFR(UV)=30\u2212100 SFR(UV)=100\u2212300 Fig. 3.\u2014 shows four distributions of FIR SFR for LBG galaxies at each of the four UV SFR values of 3 \u221210 M\u2299/yr (top left), 10 \u221230 M\u2299/yr (top right), 30 \u2212100 M\u2299/yr (bottom left) and 100 \u2212300 M\u2299/yr (bottom right), respectively, at z = 2. Each black curve is a gaussian \ufb01t with its parameters listed in Table 1. distributions of SFRFIR at a given range of SFRUV. We see that for LBGs with SFRUV = 10\u2212 100 M\u2299/yr, the distributions are well \ufb01tted by gaussians (the black curves) using log SFRUV as the variable. In the lowest SFRUV (SFRUV = 3 \u221210 M\u2299/yr) we see a slight tendency of the SFRFIR distribution to skew towards the low SFRFIR end, indicative of increasingly diminishing dust obscuration for galaxies with low SFR. In the highest SFRUV (SFRUV = 100 \u2212300 M\u2299/yr), the SFRFIR distribution is signi\ufb01cantly skewed to the high SFRFIR end for the same physical reason. We list the parameters of the best gaussian \ufb01t of SFRFIR distributions for all SFRUV bins in Table 1. These predictions should be veri\ufb01able by ALMA observations. Finally, we examine the underlying cause of the generally di\ufb00erential rate of dust obscuration seen in prior \ufb01gures where higher SFR galaxies are more dust obscured. Figure 4 shows gas metallicity and gas column density as a function of stellar mass and total SFR, respectively. Examination of the top left panel of Figure 4 indicates that in the stellar mass range Mstar \u22651010 M\u2299there is a positive correlation between gas metallicity and stellar mass, in agreement with the observed, so-called mass-metallicity relation at z \u223c2 of Erb et al. (2006). While this is not the focus of our study here, the agreement is quite \f\u2013 9 \u2013 9 10 11 12 \u22121 0 1 log Mstar (Msun) [Z/H] SFR > 10 Msun/yr SFR < 10 Msun/yr Erb et al 2006 9 10 11 12 20 21 22 23 log Mstar (Msun) log NH (cm\u22122) SFR > 10 Msun/yr SFR < 10 Msun/yr 1 2 3 \u22121 0 1 log total SFR (Msun/yr) [Z/H] 1 2 3 20 21 22 23 log total SFR (Msun/yr) log NH (cm\u22122) Fig. 4.\u2014 Top left panel: column density weighted gas metallicity averaged over the entire galaxy as a function of stellar mass. Shown in black diamond is observations from Erb et al. (2006) at z \u223c2. Top right panel: column density weighted gas metallicity averaged over the entire galaxy as a function of SFR. Bottom left panel: radially integrated total column density as a function of stellar mass. Bottom right panel: radially integrated total column density as a function of SFR. In all the panels red symbols have SFR greater than 10 M\u2299/yr and green symbols less than 10 M\u2299/yr. remarkable but consistent with the agreement that is found between simulations and observations with respect to the metallicity distribution of damped Lyman alpha systems in an earlier study (Cen 2010). 
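Returning to Table 1, the fitted Gaussians (which, going by the tabulated means, describe the distribution of log SFR_FIR within each SFR_UV bin) translate directly into predicted SFR_FIR distributions that ALMA could test. A sketch using the tabulated parameters; everything else here is illustration, not the paper's code.

```python
import numpy as np

# Table 1: Gaussian fits to the distribution of log10(SFR_FIR) in bins of SFR_UV.
TABLE1 = {
    "3-10":    {"mean": -0.84, "sigma": 1.00},
    "10-30":   {"mean":  0.03, "sigma": 0.98},
    "30-100":  {"mean":  0.85, "sigma": 0.76},
    "100-300": {"mean":  1.70, "sigma": 0.72},
}

def predicted_sfr_fir(sfr_uv_bin, n_draws=100000, seed=0):
    """Draw SFR_FIR values (Msun/yr) implied by the Table 1 Gaussian for one bin."""
    pars = TABLE1[sfr_uv_bin]
    rng = np.random.default_rng(seed)
    return 10.0 ** rng.normal(pars["mean"], pars["sigma"], n_draws)

for bin_label in TABLE1:
    lo, med, hi = np.percentile(predicted_sfr_fir(bin_label), [16, 50, 84])
    print(f"SFR_UV = {bin_label:>7} Msun/yr: median SFR_FIR ~ {med:6.2f}, "
          f"16-84% range {lo:6.2f} - {hi:6.2f} Msun/yr")
```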
A comparison between the two top and two bottom panels of Figure 4 clearly indicates that the correlation between column density and stellar mass or SFR is about three times stronger than that between gas metallicity and stellar mass or SFR. This suggests that the general trend of larger dust obscuration for larger stellar mass or SFR is mostly due to a trend in column density in the same sense, but positively aided by a mass (or SFR)-metallicity trend. This overall trend gives an integral constraint on the total optical depth. The actual distribution of dust optical depth at a given galaxy mass (or SFR) or even for a given galaxy viewed at di\ufb00erent angels has large dispersions due to the clumpy distribution of gas with varying metallicity, resulting in a wide FIR SFR distribution within a narrow UV SFR range, as quanti\ufb01ed in Figure 3. \f\u2013 10 \u2013 4." + }, + { + "url": "http://arxiv.org/abs/1104.5046v1", + "title": "Environmentally Driven Global Evolution of Galaxies", + "abstract": "Utilizing high-resolution large-scale galaxy formation simulations of the\nstandard cold dark matter model, we examine global trends in the evolution of\ngalaxies due to gravitational shock heating by collapse of large halos and\nlarge-scale structure. We find two major global trends. (1) The mean specific\nstar formation rate (sSFR) at a given galaxy mass is a monotonically increasing\nfunction with increasing redshift. (2) The mean sSFR at a given redshift is a\nmonotonically increasing function of decreasing galaxy mass that steepens with\ndecreasing redshift. The general dimming trend with time merely reflects the\ngeneral decline of gas inflow rate with increasing time. The differential\nevolution of galaxies of different masses with redshift is a result of\ngravitational shock heating of gas due to formation of large halos (groups and\nclusters) and large-scale structure that move a progressively larger fraction\nof galaxies and their satellites into environments where gas has too high an\nentropy to cool to continue feeding resident galaxies. Overdense regions where\nlarger halos are preferentially located begin to be heated earlier and have\nhigher temperatures than lower density regions at any given time, causing sSFR\nof larger galaxies to fall below the general dimming trend at higher redshift\nthan less massive galaxies and galaxies with high sSFR to gradually shift to\nlower density environments at lower redshift. We find that several noted cosmic\ndownsizing phenomena are different manifestations of these general trends. We\nalso find that the great migration of galaxies from blue cloud to red sequence\nas well as color-density relation, among others, may arise naturally in this\npicture.", + "authors": "Renyue Cen", + "published": "2011-04-26", + "updated": "2011-04-26", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.HE" + ], + "main_content": "Introduction The intriguing phenomenon of the so-called cosmic downsizing (e.g., Cowie et al. 1996) has had practioners of the cold dark matter cosmogony perplexed. Innovative astrophysical 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1104.5046v1 [astro-ph.CO] 26 Apr 2011 \f\u2013 2 \u2013 ideas have been proposed to introduce scales in the growth of galaxies within the context of hierarchical formation of dark matter halos in the standard cosmological constant-dominated cold dark matter model (LCDM) (Komatsu et al. 2010). 
Successful models have been constructed, for example, semi-analytically by incorporating possible AGN feedback (e.g., Croton et al. 2006; Bower et al. 2006). In this work we investigate the nature of cosmic downsizing in the LCDM model by performing and analyzing high-resolution large-scale hydrodynamic galaxy formation simulations, including feedback from star formation and proper treatment of gravitational heating due to collapse of large-scale structure. Our simulations reproduce well observations that galaxies of higher star formation (SF) rates (SFR) contribute progressively more to the overall SFR density towards higher redshift (e.g., Cowie et al. 1996). We \ufb01nd that this cosmic downsizing phenomenon is part of a fundamental and universal trend that the sSFR, on average, is a monotonic function of galaxy halo (or stellar) mass with lower-mass galaxies having higher sSFR. As a result, on average, the stellar mass doubling time is a monotonically decreasing function with decreasing stellar mass at any redshift and for more massive galaxies that upcrosses the Hubble time earlier than less massive galaxies. The sSFR of galaxies of all masses, on average, display a monotonic and mass-dependent rate of increase with redshift. In this sense, we see primarily a trend of \u201cdi\ufb00erential galaxy dimming\u201d from high redshift to z = 0. Although the sSFR trend continues to the highest redshift we have examined, the SFR density that is a convolution of these trends and halo abundance evolution in the cold dark matter model displays a maximum at z = 1.5 \u22122. Related, within the simulation volume and density \ufb02uctuations that we probe, we also see an \u201cupsizing\u201d trend at z \u22652 in that the maximum SFR of galaxies decreases towards still higher redshift, probably re\ufb02ecting the tenet of the standard cold dark matter model of hierarchical buildup of dark matter halos where the abundance of large, star-forming halos start to drop o\ufb00exponentially. We examine the underlying physical cause for these distinct trends. We \ufb01nd that at high redshift (z \u22652) SF is largely gas demand limited, where there is su\ufb03cient supply of cold gas for galaxies to double its stellar mass within a Hubble time and SF is mostly regulated by its own e\ufb03ciency, due to feedback e\ufb00ects from star formation. At z \u22642 SF gradually moves to the regime of being supply limited, dependent on environments, as the supply rate of cold gas decreases, due to a combination of primarily two factors. First, the overall decrease of density [\u221d(1 + z)3] causes the gas in\ufb02ow rate to decline with decreasing redshift. Second, the overall heating of cosmic gas due to formation of large halos (such as groups and clusters) and large-scale structures causes a progressively larger fraction of halos to inhabit in regions where gas has too high an entropy to cool to continue feeding the residing galaxies. The combined e\ufb00ect is di\ufb00erential in that overdense regions are heated earlier and to higher temperatures than lower density regions at any given time. Because larger halos tend to reside, in both a relative and absolute sense, in more overdense regions than smaller halos, the net di\ufb00erential e\ufb00ects are that larger galaxies fall below the general \f\u2013 3 \u2013 dimming trend at higher redshift than less massive galaxies, the sSFR as a function of galaxy mass steepens with time and galaxies with the high sSFR gradually shift to lower density environments. 
We do include supernova feedback in the simulations and \ufb01nd that galactic winds are strong for starburst galaxies, strongest at z \u22652 when SF activities are most vigorous and are stronger in less massive galaxies than in large galaxies. But it appears that the stellar feedback processes do not drive any noticeable trend of the sort presented here, although they are important in self-regulating star formation at high redshift when gas supply rate is high. We also \ufb01nd that the cold gas starvation due to gravitational heating provides a natural mechanism to explain the observed migration of galaxies to the red sequence from the blue cloud as well as many other phenomena, such as the observed color-density relation, the trend of galaxies becoming bluer in lower density environment, and others. The outline of this paper is as follows. In \u00a72 we detail our simulations, method of making galaxy catalogs and analysis methods. Results are presented in \u00a73. In \u00a73.1 we compare some basic galaxy observables to observations. In \u00a73.2 we present detailed results and compare to observations. We then examine and understand physical processes that are primarily responsible for the results obtained in \u00a73.3, followed by predictions of the model in \u00a73.4. Conclusions are given in \u00a74. 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the adaptive mesh re\ufb01nement (AMR) Eulerian hydro code, Enzo (Bryan 1999; Bryan & Norman 1999; O\u2019Shea et al. 2004; Joung et al. 2009). First we ran a low resolution simulation with a periodic box of 120 h\u22121Mpc on a side. We identi\ufb01ed two regions separately, one centered on a cluster of mass of \u223c2 \u00d7 1014 M\u2299and the other centered on a void region at z = 0. We then resimulate each of the two regions separately with high resolution, but embedded in the outer 120h\u22121Mpc box to properly take into account large-scale tidal \ufb01eld and appropriate boundary conditions at the surface of the re\ufb01ned region. We name the simulation centered on the cluster \u201cC\u201d run and the one centered on the void \u201cV\u201d run. The re\ufb01ned region for \u201cC\u201d run has a size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 and that for \u201cV\u201d run is 31 \u00d7 31 \u00d7 35h\u22123Mpc3. At their respective volumes, they represent 1.8\u03c3 and \u22121.0\u03c3 \ufb02uctuations. The initial condition in the re\ufb01ned region has a mean interparticleseparation of 117h\u22121kpc comoving, dark matter particle mass of 1.07 \u00d7 108h\u22121 M\u2299. The re\ufb01ned region is surrounded by two layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle mass 83 times that in the re\ufb01ned region. Because \f\u2013 4 \u2013 we still can not run a very large volume simulation with adequate resolution and physics, we choose these two runs to represent two opposite environments that possibly bracket the average. As we have shown in Cen (2010), these two runs indeed bracket all compared observables of DLAs and tests show good numerical convergence. We choose the mesh re\ufb01nement criterion such that the resolution is always better than 460h\u22121pc physical, corresponding to a maximum mesh re\ufb01nement level of 11 at z = 0. 
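The quoted spatial resolution follows from the AMR cell hierarchy. The quick check below assumes a 128^3 root grid for the 120 h^-1 Mpc box, which is not stated in this excerpt; with that assumption the level-11 and level-13 cell sizes land on the 460 h^-1 pc quoted here and the 114 h^-1 pc quoted for the higher-resolution runs described earlier in this document.

```python
# AMR cell size at refinement level L is (box / N_root) / 2**L (comoving);
# at z = 0 comoving and physical sizes coincide.
BOX_KPC = 120_000.0   # h^-1 kpc comoving
N_ROOT = 128          # assumed root-grid cells per side (not given in the text)

def cell_size_kpc(level):
    return BOX_KPC / N_ROOT / 2 ** level

print(f"level 11 cell: {cell_size_kpc(11) * 1e3:.0f} h^-1 pc")  # ~458 pc, cf. the quoted 460
print(f"level 13 cell: {cell_size_kpc(13) * 1e3:.0f} h^-1 pc")  # ~114 pc, as in the
                                                                # higher-resolution runs
```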
The simulations include a metagalactic UV background (Haardt & Madau 1996), and a model for shielding of UV radiation by neutral hydrogen (Cen et al. 2005). They also include metallicity-dependent radiative cooling (Cen et al. 1995). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c106 M\u2299. Supernova feedback from star formation is modeled following Cen et al. (2005). Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered at the star particle in question, weighted by the speci\ufb01c volume of each cell, which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). We allow the entire feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating. The total amount of explosion kinetic energy from Type II supernovae for an amount of star formed M\u2217with a Chabrier IMF is eSNM\u2217c2 (where c is the speed of light) with eGSW = 6.6\u00d710\u22126. Taking into account the contribution of prompt Type I supernovae, we use eSN = 1\u00d710\u22125 in our simulations. Observations of local starburst galaxies indicate that nearly all of the star formation produced kinetic energy is used to power GSW (e.g., Heckman 2001). Supernova feedback is important primarily for regulating star formation and for transporting energy and metals into the intergalactic medium. The extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported in a physically sound (albeit still approximate at the current resolution) way. The kinematic properties traced by unsaturated metal lines in DLAs are extremely tough tests of the model, which is shown to agree well with observations (Cen 2010). As we will show below, the properties of galaxies produced in the simulations resemble well observed galaxies, within the limitations of \ufb01nite resolution. In order not to mingle too many di\ufb00erent e\ufb00ects, we do not include any feedback e\ufb00ect from AGN, which is often invoked to suppress star formation by cooling from hot atmosphere in large galaxies. We will see later that this omission may have caused larger galaxies to be somewhat overluminous. We use the following cosmological parameters that are consistent with the WMAP7normalized (Komatsu et al. 2010) LCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. \f\u2013 5 \u2013 2.2. Simulated Galaxy Catalogs We identify galaxies in our high resolution simulations using the HOP algorithm (Eisenstein & Hu 1999), operated on the stellar particles, which is tested to be robust and insensitive to speci\ufb01c choices of concerned parameters within reasonable ranges. Satellites within a galaxy are clearly identi\ufb01ed separately. The luminosity of each stellar particle at each of the Sloan Digital Sky Survey (SDSS) \ufb01ve bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. 
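The feedback normalization quoted above, e_SN = 1x10^-5, can be put in more familiar units with a little arithmetic; the 10^51 erg per supernova used below is the usual reference value, not a number taken from the text.

```python
# Feedback energy injected per unit stellar mass formed: E = e_SN * M_* * c^2.
C_CM_S = 2.998e10        # speed of light (cm/s)
MSUN_G = 1.989e33        # solar mass (g)
E_SN_COEFF = 1.0e-5      # e_SN adopted in the simulations (SNe II + prompt SNe Ia)
E_PER_SN_ERG = 1.0e51    # canonical kinetic energy per supernova (assumed reference)

energy_per_msun = E_SN_COEFF * MSUN_G * C_CM_S ** 2   # erg per Msun of stars formed
msun_per_sn = E_PER_SN_ERG / energy_per_msun          # Msun formed per SN-equivalent

print(f"{energy_per_msun:.2e} erg injected per Msun of stars formed")
print(f"equivalent to one 1e51-erg supernova per ~{msun_per_sn:.0f} Msun formed")
```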
Collecting luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, star formation rate, luminosities in \ufb01ve SDSS bands (and various colors) and others. For each galaxy we also compute its intermediate-scale environmental overdensity, de\ufb01ned to be the dark matter density, smoothed by a Gaussian function of radius 2h\u22121Mpc comoving, divided by the global mean dark matter density. We choose this smoothing scale, because it encloses a mass of 1.3\u00d71013h\u22121 M\u2299, whose gas at virial radius shock heated to the virial temperature approximately corresponds to the critical entropy Scrit that is a weak function of redshift. The relevance of Scrit will be explained in \u00a73.2. In addition, we compute the mean gas entropy of each galaxy at its virial radius, de\ufb01ned as < S >= P Tn1/3dV/ P ndV , where the two sums are over the radial range (0.9 \u22121.1)rv (rv is the virial radius). We also compute various \ufb02uxes across the virial radius for each galaxy, including total gas mass \ufb02ux, cold mass \ufb02ux. 3. Results 3.1. Validating Simulated Galaxies This is \ufb01rst-in-its-class kind of galaxy formation simulations that includes sophisticated physical treatment, su\ufb03cient resolution, and in a perhaps ground breaking fashion, a large enough sample covering the entire redshift range to statistically address relevant questions. In Cen (2010) we presented a detailed examination of the DLAs and found that the simulations, for the \ufb01rst time, are able to match all observed properties of DLAs, including abundance, size, metallicity and kinematics. The broad agreement between simulations and observations suggests that our treatment of feedback processes (including metal enrichment and transport) is realistic; other simulations that do not include these detailed treatment (such as metal transport) do not provide as good agreement with observations as ours especially with respect to kinematics (that depends quite sensitively on metallicity distribution). Nevertheless, as with any simulation, there are limitations. As such, it is prudent to examine \f\u2013 6 \u2013 the basic properties of galaxies themselves in the simulations to gauge how realistically we can reproduce observations. 0 1 2 3 4 5 6 \u22123 \u22122 \u22121 0 z log SFR density (Msun/yr/Mpc3) SFRD (C) SFRD (V) average SFRD Hopkins & Beacom (2006) Brinchmann et al (2004) Seymour et al (2008) Karim et al (2011) Bouwens et al (2007) Reddy & Steidel (2009) Fig. 1.\u2014 shows the evolution SFR density. Also shown as the grey shaded region is the observations compiled by Hopkins & Beacom (2006), as blue points and mageneta circles two more recent observations using radio techniques from Karim et al. (2011) (2\u03c3 errorbars) and Seymour et al. (2008) (1\u03c3 errorbars), as black asterisk the local SDSS data from Brinchmann et al. (2004) (1\u03c3 errorbars), as two black hexagons from Reddy & Steidel (2009) (1\u03c3 errorbars), and as open blue squares from Bouwens et al. (2007) (1\u03c3 errorbars). The blue curve is an average of the two runs. Figure 1 shows the SFR density history from z = 0 to z = 6. 
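The entropy diagnostic defined in the catalog section above, <S> = Σ T n^{1/3} dV / Σ n dV over the shell 0.9-1.1 r_vir, is straightforward to evaluate from gas cells; note that it is the n dV-weighted average of the usual entropy proxy T/n^{2/3}. The sketch below uses hypothetical cell arrays and is not the catalog code.

```python
import numpy as np

def mean_shell_entropy(r_kpc, temp_k, n_cm3, dvol, r_vir_kpc):
    """<S> = sum(T n^(1/3) dV) / sum(n dV) over cells with 0.9 < r/r_vir < 1.1.
    Returned units are K cm^2 (the n dV-weighted mean of T / n^(2/3))."""
    r_kpc, temp_k, n_cm3, dvol = map(np.asarray, (r_kpc, temp_k, n_cm3, dvol))
    shell = (r_kpc > 0.9 * r_vir_kpc) & (r_kpc < 1.1 * r_vir_kpc)
    num = np.sum(temp_k[shell] * n_cm3[shell] ** (1.0 / 3.0) * dvol[shell])
    den = np.sum(n_cm3[shell] * dvol[shell])
    return num / den

# Toy shell of cells around r_vir = 200 kpc (illustrative numbers only):
rng = np.random.default_rng(3)
r = rng.uniform(150.0, 250.0, 5000)      # kpc
temp = rng.uniform(1e6, 3e6, 5000)       # K
n = rng.uniform(1e-4, 1e-3, 5000)        # cm^-3
dv = np.full(5000, 1.0)                  # equal cell volumes
print(f"<S> ~ {mean_shell_entropy(r, temp, n, dv, 200.0):.3e} K cm^2")
```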
We see that for the entire redshift range the SF histories from C and V runs bracket the observations, suggesting that the SFR histories in the simulations are consistent with the observations. It is probably true that the global average lies between these two runs. However, the weightings of two runs for averaging are likely complicated, because di\ufb00erent properties of galaxies of di\ufb00erent masses depend on large-scale environments in a non-trivial fashion. For brevity, we use the constraints from the observed SFRD history to obtain our \u201cbest\u201d weightings for C and V run; we \ufb01nd that a weighting for the C run equal to (1 + z)/(7 + z) (with one minus that for the V run) to \ufb01t the redshift range of interest here, with the obtained average SFR density shown as the blue curve in Figure 1. In some of the subsequent \ufb01gures, we use the same weightings to average over some quantities of the two runs, when such an exercise is preferential. Figure 2 shows the SDSS restframe g \u2212r color distribution of galaxies at z = 0, 1.0, 1.6. The averaged color distribution at each redshift is obtained by the same weighting scheme normalized to the SFR density evolution in Figure 1. We see that the simulations can reproduce the observerd bimodality well at z = 0 (Blanton et al. 2003a); varying the weightings \f\u2013 7 \u2013 0 0.2 0.4 0.6 0.8 1 0 0.1 g \u2212 r PDF z=0.0 z=1.0 z=1.6 Blanton et al 2003 Fig. 2.\u2014 shows the SDSS restframe g \u2212r color distributions of simulated galaxies (number weighted) with stellar mass greater than 109 M\u2299at z = 0, 1, 1.6 (red, green and blue, respectively). Also show as the black curve is the corresponding SDSS observations at z = 0.1 from Blanton et al. (2003a). of the two runs in averaging within any reasonable range does not alter the bimodal nature of the distribution. There is a hint that our simulated galaxies may be slightly too blue (by \u223c0.05 mag), which may in part due to the omission of type Ia supernova feedback on a longer time scale (\u223c1Gyr) in the present simulations (we include feedback from SNe II and prompt SNe Ia). Our future simulations including SNe Ia should verify this. There is evidence that the color bimodality persists at least to z \u223c1 but becomes largely absent by z = 1.6, consistent with observations (e.g., Weiner et al. 2005; Franzetti et al. 2007; Cirasuolo et al. 2007). Figure 3 shows the SDSS g band galaxy luminosity function at z = 0. Within the uncertainties the simulations agree reasonably well with observations, except at the high luminosity end where simulations overproduce luminous galaxies. This is a well-known problem in simulations that do not include some strong feedback in large galaxies. AGN feedback has been invoked to suppress star formation due to cooling o\ufb00of hot gas in large galaxies (e.g., Croton et al. 2006; Bower et al. 2006). If we apply a similar AGN feedback prescription as in Croton et al. (2006) by suppressing star formation post-simulation by a factor of f \u22611/(1+(Mh/1.0\u00d71013 M\u2299)2/3), where we use Mh = Mstar/0.4 for satellite galaxies whose halos can no longer be unambiguously delineated (while stellar identi\ufb01es remain intact), we obtain the result shown as the thick solid curve in Figure 3 that is in good agreement with observations. 
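For concreteness, the two post-processing ingredients just described, the redshift-dependent weighting of the C and V runs and the AGN-feedback suppression factor applied to large galaxies, can be expressed as the minimal sketch below. The function names are illustrative; for central galaxies the halo mass would come directly from the halo catalog rather than from the M_h = M_star/0.4 satellite proxy.

```python
def run_weights(z):
    """Weights used to average the overdense (C) and underdense (V) volumes,
    chosen so that the averaged SFR density matches the observed history."""
    w_c = (1.0 + z) / (7.0 + z)
    return w_c, 1.0 - w_c

def agn_suppression(m_halo_msun):
    """Post-simulation suppression factor applied to star formation in large
    galaxies, f = 1 / (1 + (M_h / 1e13 Msun)^(2/3)), following the Croton et
    al. (2006)-like prescription described above."""
    return 1.0 / (1.0 + (m_halo_msun / 1.0e13) ** (2.0 / 3.0))

def satellite_halo_mass(m_star_msun):
    """Halo-mass proxy adopted for satellites, whose halos can no longer be
    unambiguously delineated: M_h = M_star / 0.4."""
    return m_star_msun / 0.4
```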
There is indication that at Mg > \u221219, we underproduce small galaxies, which \f\u2013 8 \u2013 \u221223 \u221222 \u221221 \u221220 \u221219 \u22125 \u22124 \u22123 \u22122 absolute r magnitude log r band LF (1/Mpc3/mag) z=0 (uncorrected for AGN feedback) z=0 (corrected for AGN feedback) Blanton et al 2003 Fig. 3.\u2014 shows the SDSS g band galaxy luminosity function at z = 0. The thin dotted curve is directly from averaging over C and V run, whereas the thick solid curve is obtained after correcting for AGN feedback. Also shown as the thick dashed curve is the Schechter \ufb01t to the SDSS data (Blanton et al. 2003b). is probably a result of resolution e\ufb00ect. For the results that we present subsequently, these \u201cdefects\u201d do not materially alter any conclusions that we draw, because we are mostly interested in evolution of galaxies segregatd in mass and in environments, which do not depend strongly on precise abundances of galaxies. Figure 4 shows the rest-frame UV (at 1700\u02da A) luminosity functions at several redshifts, along with UV and IR (ULIRG and LIRG) observational data, to check if the reasonable agreement between simulations and observations found at lower redshift (Figure 2 and Figure 3) extend to higher redshifts. We convert SFR of each simulated galaxy to MAB(1700\u02da A) using the standard conversion formula, SFR = 6.1 \u00d7 10\u2212[8+0.4MAB(1700\u02da A)] M\u2299/yr (Kennicutt 1998) in combination with the AB magnitude system (Oke 1974). We see that the simulations agree well with the UV observations for MAB(1700\u02da A) > \u221222, within the uncertainties. A signi\ufb01cant portion of the disagreement between simulations and UV data at MAB(1700\u02da A) < \u221222 is removed when the abundance of ULIRGs is taken into account, and the simulations become approximately in agreement with observations within the errors at MAB(1700\u02da A) < \u221222. The faint end slope of the UV luminosity functions appear to be steeper than \u03b1 = \u22121.5 and about \u03b1 = \u22121.8 to \u22121.7, consistent with observations (e.g., Yan & Windhorst 2004; Bouwens et al. 2007; Reddy & Steidel 2009). In summary, our simulations produce properties of galaxies are in good agreement with a variety of observations that allow us now to examine their global evolutionary trends. \f\u2013 9 \u2013 \u221225 \u221224 \u221223 \u221222 \u221221 \u221220 \u221219 \u221218 \u22125 \u22124 \u22123 \u22122 MAB (1700) dn/dMAB(1700) (Mpc\u22123) z=0 z=0.5 z=1.0 z=1.6 z=2.5 z=3.1 z=5.0 z=1.9\u22122.7 UV (Reddy & Steidel 2009) z=2.7\u22123.4 UV (Reddy & Steidel 2009) z=2 LIRG and ULIRG (Caputi etal 2007) \u001f = \u22121.5 \u001f = \u22121.7 Fig. 4.\u2014 shows the rest-frame UV (at 1700\u02da A) luminosity functions at z = 0, 0.5, 1.0, 1.6, 2.5, 3.1, 5 with 1\u03c3 Poisson errorbars indicated on the z = 1 curve. The UV observational data are from Reddy & Steidel (2009): solid diamonds at z = 1.9\u22122.7 and open diamonds at z = 2.7\u22123.4. Also shown as two solid dots are observed LIRG and ULIRG data from Caputi et al. (2007). The ULRIG and LIRG data points are shown, if they were not reprocessed through dust, to account for the fact that we do not process stellar light through dust grains. The dotted and dashed straight lines indicate the faint end slope of the luminosity function at \u03b1 = \u22121.5 and \u22121.7, respectively. 3.2. 
Global Trends of Galaxy Formation and Evolution Figure 5 shows the cumulative light density distribution in rest-frame SDSS z band as a function of absolute z magnitude from redshift z = 0 to z = 3.1. The fact that the redshift z = 0 values of the two runs bracket the SDSS data at redshift z \u223c0.1 is self-consistent. We did not average the two runs in this case, because there is a substantial mismatch between the two at z < \u221225, because the abundance of these most luminous galaxies, at the exponential tail, depends more strongly on large-scale environmental density. We see that from z = 0 (red circles) to z = 1.6 (black triangles) there is a trend that light density increases with increasing redshift, in accord with the same trend for SFR density seen in Figure 1. It is also seen that the percentage contribution to the light density of galaxies at the most luminous end as well as the luminosity of the most luminous galaxies increases with increasing redshift from z = 0 to z = 1.6. This particular manifestation is in excellent agreement with the apparent downsizing phenomenon \ufb01rst pointed out by Cowie et al. (1996, see Figures 6, 20, 24 therein). As we will show later, the underlying reason for this apparent downsizing phenomenon is simply that the luminosity function in rest-frame z (or in restframe K-band, as shown in Cowie et al. (1996)) becomes brighter with increasing redshift from z = 0 to z \u223c1.6, but the brightening is across the entire spectrum of galaxy masses. \f\u2013 10 \u2013 \u221229 \u221228 \u221227 \u221226 \u221225 \u221224 \u221223 \u221222 \u221221 \u221220 \u221219 7 7.5 8 8.5 9 9.5 absolute z magnitude log light density (>z) (Lsun/Mpc3) redshift=0 (C) redshift=0.5 (C) redshift=1.0 (C) redshift=1.6 (C) redshift=2.5 (C) redshift=3.1 (C) redshift=0 (V) redshift=0.5 (V) redshift=1.0 (V) redshift=1.6 (V) redshift=2.5 (V) redshift=3.1 (V) Blanton et al 2003 Fig. 5.\u2014 shows the cumulative light density distribution in rest-frame SDSS z band as a function of absolute z magnitude at redshifts z = (0, 0.5, 1.0, 1.6, 2.5, 3.1) for both C and V runs. Also shown as the horizontal dashed line is the value from SDSS data at z \u223c0.1 (Blanton et al. 2003b). Similar redshift trends are seen in other SDSS broad bands. However, the brightening for galaxies of di\ufb00erent masses, i.e., sSFR, displays an important di\ufb00erential, where sSFR as a function of stellar mass has a negative slope that steepens with decreasing redshift, as shown in Figure 6 next. Figure 6 shows the distribution of galaxies in the sSFR-Mstar plane at z = (0, 0.5, 1.6, 3.1), where each galaxy is also encoded with the average gas entropy at its virial radius a higher entropy corresponds to a larger circle. The physical importance of gas entropy will become apparent later. The horizontal line in each panel indicates the value of sSFR at which the galaxy would double its stellar mass in one concurrent Hubble time. We see that at z = 3.1 (bottom right panel) most galaxies lie above the horizontal line and sSFR is nearly independent of stellar mass, indicating that all galaxies at this redshift are growing at a similar and rapid pace. As we will show later (see Figure 12), the cold gas in\ufb02ow rate signi\ufb01cantly exceeds SFR, indicating that SF is demand based and self-regulated. 
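The horizontal line in Figure 6 is simply the inverse of the age of the universe at each redshift. A minimal sketch of that threshold is given below, using the cosmological parameters quoted earlier and interpreting the "concurrent Hubble time" as the age of the universe at z (an assumption made here).

```python
import numpy as np
from scipy.integrate import quad

# Cosmological parameters quoted earlier: Omega_M = 0.28, Omega_L = 0.72, H0 = 70
OMEGA_M, OMEGA_L, H0_KMS_MPC = 0.28, 0.72, 70.0
H0_PER_YR = H0_KMS_MPC * 3.156e7 / 3.0857e19     # H0 in yr^-1

def age_of_universe_yr(z):
    """Age of a flat LCDM universe at redshift z, by direct integration."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) *
                                  np.sqrt(OMEGA_M * (1.0 + zp)**3 + OMEGA_L))
    t, _ = quad(integrand, z, np.inf)
    return t / H0_PER_YR

def doubling_ssfr(z):
    """sSFR (yr^-1) above which a galaxy doubles its stellar mass within the
    age of the universe at z -- the horizontal line in Figure 6."""
    return 1.0 / age_of_universe_yr(z)

# e.g. doubling_ssfr(0.0) is roughly 7e-11 yr^-1, rising toward ~5e-10 yr^-1 by z ~ 3.
```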
Comparison of the four panels clearly shows that a progressively larger fraction of galaxies of all masses downcross the horizontal line with decreasing redshift, with larger galaxies starting that migration earlier and generally at a faster pace than less massive galaxies. It is quite visible that the downcrossing of galaxies over the horizontal line is accompanied by orders of magnitude increase in gas entropy at the virial radii of these galaxies, i.e., circles get much larger moving downward. It is seen that some galaxies of all masses from C run occupy the lower quarter of the lower redshift (upper left and upper right) panels that have the lowest sSFR and largest entropies (large circles); these are galaxies in high entropy cluster environments. The negative slope \f\u2013 11 \u2013 9 10 11 12 \u221213 \u221212 \u221211 \u221210 \u22129 \u22128 sSFR (yr\u22121) z=0.0 (C) z=0.0 (V) doubling in tH 9 10 11 12 \u221213 \u221212 \u221211 \u221210 \u22129 \u22128 z=0.5 (C) z=0.5 (V) doubling in tH 9 10 11 12 \u221213 \u221212 \u221211 \u221210 \u22129 \u22128 Mstar (Msun) sSFR (yr\u22121) z=1.6 (C) z=1.6 (V) doubling in tH 9 10 11 12 \u221213 \u221212 \u221211 \u221210 \u22129 \u22128 Mstar (Msun) z=3.1 (C) z=3.1 (V) doubling in tH Fig. 6.\u2014 shows a scatter plot of sSFR versus galaxy stellar mass at z = 0 (top left), z = 0.5 (top right), z = 1.6 (bottom left) and z = 3.1 (bottom right) for both C (red) and V (blue) run. Each circle is a galaxy from C (red) and V (blue) run with its size proportional to the logarithm of the gas entropy at its virial radius. The horizontal line in each panel indicates the sSFR value at which a galaxy would double its stellar mass in a Hubble time. of the sSFR as a function of stellar mass appears to steepen wth decreasing redshift, which will be quanti\ufb01ed in Figure 7. As a result, by z = 0, only a signi\ufb01cant fraction of galaxies of stellar mass less than \u223c1010 M\u2299can still double their mass in a Hubble time and they are mostly in the V run (i.e., not in overdense regions), while the vast majority of larger galaxies have lost that ability. A comparison of red (galaxies from C run) and blue circles (galaxies from V run) as well as substantial dispersions of sSFR at a \ufb01xed stellar mass within each run indicates that there are substantial variations among galaxies of a same mass starting at z = 1.6 that must depend on variables other than just the contemporary galaxy mass. As will be shown and discussed extensively subsequently, environmental dependence plays the most fundamental role in shaping the formation and evolution of galaxies, and we \ufb01nd that the gas entropy at the virial radius of each galaxy is a useful variable for understanding the underlying physical cause. Figure 7 shows the mean sSFR as a function of stellar mass at redshifts z = (0, 0.5, 1.6, 1.9, 4.0). We see that simulations show a trend of steepening slope with decreasing redshift, visually noticed in Figure 6 above, which is generally consistent with observations. The agreement of sSFR between our simulations and IR-UV observations of Martin et al. (2007) at z = 0 \u22121 is good within uncertainties. Currently, the uncertainties in the observed data are still quite substantial, especially at higher redshifts, as evidenced by the di\ufb00erences among the shown observations of Elbaz et al. (2007), Oliver et al. (2010) and Karim et al. 
(2011) and others \f\u2013 12 \u2013 9 10 11 12 13 \u221212 \u221211 \u221210 \u22129 log Mstar (Msun) log sSFR (yr\u22121) z=0.0 z=0.5 z=1.0 z=1.9 z=4.0 z=0.1,0.5,1.0 (Martin et al 2007) z=1 (\u22120.1) (Elbaz et al 2007) z=0\u22121.6 (\u22120.35) (Karim et al 2011) z=0 (\u22120.52) to z=2 (\u22120.24) (Oliver et al 2010) Fig. 7.\u2014 shows the average sSFR as a function of stellar mass at redshifts z = (0, 0.5, 1.6, 1.9, 4.0) for both C (solid symbols) and V run (open symbols) with 1\u03c3 Poisson errorbars. The IR-to-UV observational data points are from Martin et al. (2007) (red, blue and green asterisks for z = 0.1, z = 0.5 and z = 1, respectively) are shown exactly as observed, from FIR observations of Elbaz et al. (2007) as the cyan line at z = 1 of log SFR \u2212logMstar slope of \u22120.1, from radio observations of Oliver et al. (2010) as two magenta lines are shown from z = 0 of slope \u22120.52 to z \u223c2 of slope \u22120.24, and from radio observations of Karim et al. (2011) as the dashed cyan for the slope range at z = 0 \u22121.6 of slope \u22120.35. (not shown here). Nonetheless, there is clear evidence of a negative slope of sSFR as a function of stellar mass that gradually \ufb02attens with increasing redshift, in both our simulations and these observations. In Figure 8 we plot the maximum and mean SFR as a function of stellar mass for seven di\ufb00erent redshifts z = (0, 0.5, 1.0, 1.6, 2.5, 3.1, 4.0) for both C (left panel) and V (right panel) runs. One striking result that is best seen in this plot is that the maximum SFR of galaxies at a given mass increases with increasing redshift up to zmax = 1.6 \u22123.1. Beyond zmax, that uptrend for maximum SFR at a \ufb01xed mass stops and appears to become static. Interestingly, the mean SFR at a \ufb01xed mass continues to increase up to the highest redshift shown and the ratio of maximum SFR to mean SFR at a \ufb01xed mass continues to shrink, reaching a value of 1 \u22123 in the range z = 2 \u22124, suggesting that at high redshift galaxy formation becomes more \u201cuniform\u201d. The second striking result is that the curves are nearly parallel to one another in the C run, suggesting that SFR of galaxies of di\ufb00erent masses evolve with redshift at similar rates. This point was noted earlier observationally, \ufb01rst by Zheng et al. (2007) (see their Figures 1,2). As shown in Figure 7, the rate of change of sSFR for galaxies of di\ufb00erent mass galaxies is, however, not exactly constant across the mass spectrum. We see very clearly here by comparing the two panels in Figure 8 that this di\ufb00erential at low \f\u2013 13 \u2013 8 9 10 11 12 13 \u22122 \u22121 0 1 2 3 log Mstar (Msun) log SFR(max) and SFR(mean) (Msun/yr) SFR = Mstar 3/4 SFR(max) z=0 (C) SFR(max) z=0.5 (C) SFR(max) z=1.6 (C) SFR(max) z=3.1 (C) SFR(max) z=4.0 (C) SFR(mean) z=0 (C) SFR(mean) z=0.5 (C) SFR(mean) z=1.6 (C) SFR(mean) z=3.1 (C) SFR(mean) z=4.0 (C) 8 9 10 11 12 13 \u22122 \u22121 0 1 2 3 log Mstar (Msun) log SFR(max) and SFR(mean) (Msun/yr) SFR = Mstar 3/4 SFR(max) z=0 (V) SFR(max) z=0.5 (V) SFR(max) z=1.6 (V) SFR(max) z=3.1 (V) SFR(max) z=4.0 (V) SFR(mean) z=0 (V) SFR(mean) z=0.5 (V) SFR(mean) z=1.6 (V) SFR(mean) z=3.1 (V) SFR(mean) z=4.0 (V) Fig. 8.\u2014 shows maximum (solid symbols) and mean SFR (open symbols) as a function of stellar mass at redshifts z = (0, 0.5, 1.0, 1.6, 2.5, 3.1) for both C (left) and V (right) run. 
The dashed magenta line has a slope of 3/4; it is not a \ufb01t to the curves but to guide the eye to see the general trend. redshift can be attributed, to a large degree, to less massive galaxies in the V run, i.e., in low density environment, that refuse to join the dimming trend of galaxies in high density environment. The physical reason for this will be made clear in \u00a73.2. We note that beyond zmax the mass at the high end is truncated at progressively smaller values with increasing redshift. This sharp cuto\ufb00at the high end may be somewhat arti\ufb01cial due to the limited simulation box size we have, but largely re\ufb02ects the hierarchical nature of growth of dark matter halos in the standard cold dark matter model. As we have shown earlier in Figure 1 and Figure 5 the SFR density and light density peak at z \u223c1.5 \u2212 2, this suggests, in combination with what is seen in Figure 8, that the growth of halos with time dominates over the downsizing trend of SFR down to z = 1.5 \u22122 from high redshift. Thereafter, gastrophysical processes that act upon galaxies at z < 1.5 \u22122 cause galaxy formation and evolution to deviate from the track of continued hierarchical buildup of Table 1. SFR evolution as a function of stellar mass, \ufb01tted in the form log SFR/( M\u2299yr\u22121) = a(1 + z)b row 1: stellar mass range; row 2: a; row 3: b. Stellar Mass 109 \u22121010 M\u2299 1010 \u22121011 M\u2299 1011 \u22121012 M\u2299 a -0.59 -0.018 0.80 b 1.9 2.1 2.4 \f\u2013 14 \u2013 0 1 2 3 4 \u22121 0 1 2 z log (Msun/yr) Mstar=109\u221210Msun (C) Mstar=109\u221210Msun (V) Mstar=1010\u221211Msun (C) Mstar=1010\u221211Msun (V) Mstar=1011\u221212Msun (C) Mstar=1011\u221212Msun (V) Mstar=109\u221210Msun (Martin et al 2007) Mstar=1010\u221211Msun (Martin et al 2007) Mstar=1011\u221212Msun (Martin et al 2007) Mstar=109\u221210Msun (Zheng et al 2007) Mstar=1010\u221211Msun (Zheng et al 2007) Mstar=1011\u221212Msun (Zheng et al 2007) Fig. 9.\u2014 shows the mean SFR for galaxies of stellar mass in three bins, 109 \u22121010 (red circles), 1010 \u22121011 (green squares) and 1011 \u22121012 M\u2299i (blue triangles), as a function of redshift. The solid symbols are from C run and open symbols from V run. The overplotted black curves are 1st order polynomial \ufb01ts to the three mass bins, averaged over C and V run curves. Also shown as hexagons and diamonds are observations from Martin et al. (2007) and Zheng et al. (2007), respectively. dark matter halos, resulting in a trend where the total luminosity density and SFR density decreases with time and di\ufb00erential evolution of galaxies with di\ufb00erent masses. Finally, in Figure 9, we show the redshift evolution of SFR for galaxies in three stellar mass bins: 109 \u22121010 (red circles), 1010 \u22121011 (green squares) and 1011 \u22121012 M\u2299(blue triangles). The observational data are still relatively uncertain at higher redshift bins for the low-mass galaxies, as indicated by the di\ufb00erence between di\ufb00erent observational determinations. The agreement between simulations and observations are reasonable, especially for the highest mass bin. To best gauge the evolution at low redshift, we decide to \ufb01t the simulated results using 1st order polynomial \ufb01ts using only the points at z < 2, although higher (e.g., 2nd) order polynomial \ufb01ts signi\ufb01cantly improve the goodness of the \ufb01ts at z \u22652. The best \ufb01t parameters are tabulated in Table 1. 
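The Table 1 fits can be evaluated directly. The sketch below reads the published form log SFR/(M_sun yr^-1) = a(1+z)^b as SFR = 10^a (1+z)^b, which reproduces the differential behaviour discussed next; that reading of the fitting form, and the function names, are assumptions made here.

```python
# Table 1 coefficients, one (a, b) pair per stellar-mass bin, fitted at z < 2.
TABLE1_FITS = {
    "1e9-1e10 Msun":  (-0.59, 1.9),
    "1e10-1e11 Msun": (-0.018, 2.1),
    "1e11-1e12 Msun": (0.80, 2.4),
}

def mean_sfr_msun_per_yr(z, mass_bin):
    """Mean SFR of galaxies in the given stellar-mass bin, reading the fit as
    SFR = 10**a * (1 + z)**b."""
    a, b = TABLE1_FITS[mass_bin]
    return 10.0**a * (1.0 + z)**b

# The implied drop from z = 2 to z = 0 is a factor of 3**b: about 8 for the
# lowest-mass bin versus about 14 for the highest-mass bin.
```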
It is evident from the fitting parameters that higher-mass galaxies suffer a steeper drop in SFR in the range z = 0-2 than lower-mass galaxies. This illustrates clearly the differential evolution of sSFR or SFR with redshift for galaxies of different masses. 3.3. Physical Origin: Gravitational Heating of External Gas We now perform a detailed analysis of the physical conditions of galaxies to understand the cause of the trend of cosmic dimming and its differential nature found in §3.2. A useful starting point is to quantify the evolution of the amount of gas that can cool to feed galaxies. The amount of gas that can cool depends on density, temperature, and metallicity, as well as on what happens to the gas subsequently, such as shocks, compression, etc. It is therefore highly desirable to project the multidimensional parameter space onto as low-dimensional a space as possible. Gas entropy provides an excellent variable to characterize gas cooling properties. As first insightfully noted by Scannapieco & Oh (2004), the cooling time of any parcel of gas has a minimum value that depends only on the entropy of the gas. Following them, we write the gas cooling time in the form

t_{\rm cool} = \frac{(3/2)\, n k_B T}{n_e^2 \Lambda(T)} = S^{3/2}\left[\frac{3}{2}\left(\frac{\mu_e}{\mu}\right)^2 \frac{k_B}{T^{1/2}\Lambda(T)}\right], (1)

where n and n_e are the total and electron number densities, respectively; k_B is Boltzmann's constant, T the temperature and \Lambda the cooling function; \mu = 0.62 and \mu_e = 1.18 for the ionized gas we are concerned with; S is the gas entropy defined as

S \equiv T n^{-2/3}, (2)

in units of K cm^2. At a fixed S the cooling time is inversely proportional to T^{1/2}\Lambda(T). The cooling function \Lambda(T) depends on the gas metallicity, which is found in our simulations to be almost universal at a value of ~0.1 Z_sun for gas at virial radii at the redshifts we are interested in here. Adopting a metallicity of 0.1 Z_sun, the term T^{1/2}\Lambda(T) has a minimum at T_min ~ 2.3 x 10^5 K (we note that reasonable variations in metallicity, say to 0.3 Z_sun from 0.1 Z_sun, do not materially impact our arguments). Therefore, if t_cool(T_min) > t_H, the gas can never cool in a Hubble time, because (1) entropy is a non-decreasing quantity in the absence of cooling and (2) cooling will be insignificant within t_H given the initial requirement. Subsequent adiabatic compression or expansion does not alter its fate. Any additional input of entropy, e.g., by shocks, would increase the entropy and make it more difficult to cool. Thus, there is a critical entropy value S_crit above which gas can no longer cool. The following fitting formula provides a fit to the computed critical entropy S_crit for a gas metallicity of 0.1 Z_sun, with an accuracy of a few percent over the entire redshift range z = 0-7:

\log[S_{\rm crit}/({\rm K\,cm^2})] = 9.183 - 0.167 z + 0.0092 z^2. (3)

In Figure 10 we place each galaxy in the entropy-overdensity parameter plane at four redshifts (z = 0, 0.5, 1.6, 3.1).
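Equation (3), together with the virial-radius entropy defined in §2.2, translates into a simple diagnostic of whether a galaxy has entered the cold-gas starvation phase. The following is a minimal sketch under the assumption of plain arrays of cell temperature, density, volume and radius; the names are illustrative, not the simulation's own.

```python
import numpy as np

def s_crit(z):
    """Critical entropy (K cm^2) above which 0.1 Zsun gas can never cool within
    a Hubble time, from the fitting formula in Eq. (3)."""
    return 10.0**(9.183 - 0.167 * z + 0.0092 * z**2)

def mean_virial_entropy(T, n, dV, r, r_vir):
    """Mean gas entropy <S> = sum(T n^(1/3) dV) / sum(n dV) over cells in the
    shell 0.9 < r/r_vir < 1.1, with T in K and n in cm^-3 (array names are
    illustrative stand-ins for the simulation data)."""
    shell = (r > 0.9 * r_vir) & (r < 1.1 * r_vir)
    num = np.sum(T[shell] * n[shell]**(1.0 / 3.0) * dV[shell])
    den = np.sum(n[shell] * dV[shell])
    return num / den

def cold_gas_starved(T, n, dV, r, r_vir, z):
    """True once the virial-radius gas has upcrossed S_crit, i.e. the galaxy
    has entered the cold-gas starvation phase."""
    return mean_virial_entropy(T, n, dV, r, r_vir) > s_crit(z)
```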
The overdensity is de\ufb01ned to be the dark matter density, smoothed by a Gaussian function of radius 2h\u22121Mpc comoving, divided by the global mean \f\u2013 16 \u2013 \u22120.5 0 0.5 1 1.5 7 8 9 10 11 12 log S (K cm2) z=0.0 (C) z=0.0 (V) tcool = tH \u22120.5 0 0.5 1 1.5 7 8 9 10 11 12 z=0.5 (C) z=0.5 (V) tcool = tH \u22120.4 \u22120.2 0 0.2 0.4 0.6 0.8 1 6 7 8 9 10 11 log overdensity log S (K cm2) z=1.6 (C) z=1.6 (V) tcool = tH \u22120.2 0 0.2 0.4 0.6 6 7 8 9 10 11 log overdensity z=3.1 (C) z=3.1 (V) tcool = tH Fig. 10.\u2014 shows local mean gas entropy at virial radius as a function of local overdensity smoothed by a Gaussian window of radius 2h\u22121Mpc comoving at redshifts z = 0 (top left), z = 0.5 (top right), z = 1.6 (bottom left) and z = 3.1 (bottom right). Each circle is a galaxy from C (red) and V (blue) run with its size linearly proportional to the inverse of the logarithm of its sSFR; smaller circles correspond to higher sSFR in this representation. Also shown as the horizontal bar is the critical entropy Scrit where cooling time is equal to the Hubble time. dark matter density. We see that at z = 3.1 the entropy of almost all galaxies is located below the critical entropy line, indicating that no signi\ufb01cant amount of gas at the virial radius has been heated. One should note that, once a gas element has upcrossed the critical entropy Scrit, it will not fall back below it again. Therefore, for most galaxies, the moment that it upcrosses Scrit marks the beginning of the cold gas starvation phase, because galaxies tend to move to higher density, higher entropy regions with time. The size of each circle in Figure 10 is linearly proportional to the inverse of the logarithm of the sSFR of each galaxy. We see that galaxies above the Scrit line have dramatically larger circles, i.e., having lower sSFR. It is also interesting to see that galaxies that upcross the Scrit line do so only in overdense region (smoothed by a Gaussian radius of 2h\u22121Mpc). This is clear and powerful evidence that the di\ufb00erential dimming of galaxies is caused by heating of gas in overdense regions; in other words, galaxy formation and long-term evolution are determined by external supply of cold gas, which in turn depends on overdensity on intermediate scales (\u223c1Mpc) that dictate the entropy of shock heated gas. To help further understant this, in Figure 11, we plot the galaxies in the entropy-halo mass parameter plane at four redshifts. Also shown as the dashed green line in each panel \f\u2013 17 \u2013 10 11 12 13 6 7 8 9 10 11 12 log (K cm2) z=0.0 (C) z=0.0 (V) tcool = tH at halo virial temperature 10 11 12 13 6 7 8 9 10 11 12 z=0.5 (C) z=0.5 (V) tcool = tH at halo virial temperature 10 11 12 13 6 7 8 9 10 11 12 log Mhalo (Msun) log (K cm2) z=1.6 (C) z=1.6 (V) tcool = tH at halo virial temperature 10 11 12 13 6 7 8 9 10 11 12 log Mhalo (Msun) z=3.1 (C) z=3.1 (V) tcool = tH at halo virial temperature Fig. 11.\u2014 shows local mean gas entropy at virial radius as a function of halo mass at redshifts z = 0 (top left), z = 0.5 (top right), z = 1.6 (bottom left) and z = 3.1 (bottom right). Each circle is a galaxy from C (red) and V (blue) run with its size proportional to the logarithm of the local overdensity smoothed by a Gaussian window of radius 0.5h\u22121Mpc comoving. Also shown as the horizontal bar is the critical entropy Scrit where cooling time is equal to the Hubble time. 
The inclined line indicates the gas entropy at virial radius if the temperature is exactly equal to the virial temperature of the halo. is the gas entropy at virial radius, if the temperature is heated up to the virial temperature of the host halo itself. One notices that at z = 3.1 when no galaxies more massive than \u223c5 \u00d7 1012 M\u2299has formed, virial heating due to formation of halos is insu\ufb03cient to upcross the entropy barrier. This is the redshift range where an ample amount of cold gas is available to feed galaxy formation, resulting in sSFR that is very weakly mass-dependent and galaxy formation in the \u201cupsizing\u201d domain, in concert with the hierarchical buildup of dark matter halos. At lower redshifts, formation of larger halos more massive than \u223c1 \u00d7 1013 M\u2299(i.e., groups and clusters) as well as collapse of larger waves due to formation of large-scale structures (\ufb01laments and walls) raise a progressively larger fraction of regions to higher entropy than Scrit. This causes a dichotomy in the entropy distribution, especially at the low halo mass end (\u22641011 M\u2299) as follows. There is a branch of low-mass galaxies in low density environments, as evidenced by their small circle sizes, which are located along or below the green line in Figure 11 and have entropies comparable to or lower than what is produced due to adiabatic shock heating accompanying the formation of the halos themselves. These small galaxies correspond to galaxies in the upper left corner in Figure 6 that are still able \f\u2013 18 \u2013 to double their mass in a Hubble time. Then there is another branch of small galaxies that lie above the Scrit line and are in overdense regions, as evidenced by their large circle sizes. These small galaxies are red and dead, correspond to dwarf galaxies in heated \ufb01laments and group/cluster environments. Generally, the gas entropy of galaxies above the green dashed line is higher than what virial shock heating due to the formation of the halo itself produces; therefore, all these galaxies above the green line are in essence \u201csatellite\u201d galaxies within a large halo (such as a group or cluster) or, if one were to generalize it, \u201csatellite\u201d galaxies in a gravitational shock heated region due to collapse of large-scale structure (\ufb01laments or pancakes), not necessarily virialized. The concentration of galaxies with entropy along the green line is due to virial shock heating of halo itself, i.e., the primary galaxy. It is striking that even at z = 0 there is only a very handful of (blue circle) galaxies with mass greater than 1012 M\u2299that lie above the Scrit from the V run. Taken together, this is unequivocal evidence that it is the external gas heating that drives the gas supply hence star formation and galaxy evolution; the absence of such heating in the V run has allowed galaxies there to remain active in star formation at present. 0 1 2 3 4 \u221211 \u221210 \u22129 \u22128 z log specific cold gas flow rates and sSFR (yr\u22121) sSFR, Ms=109\u22121010 (C) sSFR, Ms=109\u22121010 (V) specificc cold inflow rate, Ms=109\u22121010 (C) specificc cold inflow rate, Ms=109\u22121010 (V) sSFR, Ms=1010\u22121011 (C) sSFR, Ms=1010\u22121011 (V) specificc cold inflow rate, Ms=1010\u22121011 (C) specificc cold inflow rate, Ms=1010\u22121011 (V) a simple universal scaling Fig. 
12.\u2014 shows the mean speci\ufb01c cold gas in\ufb02ow rate (de\ufb01ned to be the cold gas in\ufb02ow rate per unit stellar mass) and mean sSFR for galaxies in two di\ufb00erent stellar mass bins for C and V run. The cold gas is de\ufb01ned to be that with cooling time less than the dynamic time of the galaxy. Also shown as solid green curve is the general scaling of gas in\ufb02ow rate, which is assumed to be proportional to 4\u03c0r2 v(z)vv(z)\u03c13(z), where rv, vv and \u03c1(z) are redshift-dependent virial radius, virial velocity and mean gas density. Figure 12 shows the mean speci\ufb01c cold gas in\ufb02ow rate (de\ufb01ned to be the cold gas in\ufb02ow rate per unit stellar mass and cold gas is de\ufb01ned to be gas that has a cooling time less than the galaxy dynamical time at the virial radius) and mean sSFR for galaxies in two di\ufb00erent \f\u2013 19 \u2013 stellar mass bins for C and V run. Several points are worth noting. First, we see that the cold gas in\ufb02ow rates are generally higher than star formation rates, suggesting self-regulation of star formation, mostly due to feedback from star formation. Second, the ratio of cold gas in\ufb02ow rate to SFR decreases with decreasing redshift, pointing to a gradual transition of SF regimes from gas demand based at high redshift to gas supply based at low redshift. Third, the rough similarity between the evolution of the gas in\ufb02ow rate based on a simple scaling and the actual computed rates suggests the bulk of the cosmic dimming trend with decreasing redshift can be attributed to the decrease of mean density of the universe with increasing time and the evolution of the Hubble constant (or density parameter). Finally, the gravitational heating e\ufb00ects add a di\ufb00erentiating process on top of this general dimming trend, evident here by the di\ufb00erent steepening with decreasing redshift of the speci\ufb01c gas in\ufb02ow rates and SFR at lower redshifts among galaxies of di\ufb00erent masses and galaxies in di\ufb00erent environments (C versus V run). 7 8 9 10 11 0 0.2 0.4 0.6 0.8 1 overdense region g \u2212 r z=0.0 (C) z=0.0 (V) tcool = tH 7 8 9 10 11 0 0.2 0.4 0.6 0.8 1 overdense region z=0.5 (C) z=0.5 (V) tcool = tH 7 8 9 10 0 0.2 0.4 0.6 0.8 1 overdense region log (K cm2) g \u2212 r z=1.6 (C) z=1.6 (V) tcool = tH 7 8 9 0 0.2 0.4 0.6 0.8 1 log (K cm2) z=3.1 (C) z=3.1 (V) tcool = tH Fig. 13.\u2014 shows SDSS g-r color of galaxies as a function of local mean gas entropy at the virial radius at z = 0 (top left), z = 0.5 (top right), z = 1.6 (bottom left) and z = 3.1 (bottom right). The galaxies in C run are shown in red and those in V run in blue. The size of each circle is proportional to the logarithm of the galaxy stellar mass. Also shown as the vertical line is the critical entropy Scrit where cooling time is equal to the Hubble time. Finally, in Figure 13, we place galaxies in the color-entropy plane. Four things are immediately noticeable. First, the vast majority of galaxies are blue (in color, not the color of the plotted circles) and there is no strong evidence of bimodality in color at z \u22651.6. Second, at z = 0\u22120.5, almost all galaxies in the V run occupy the blue peak at g \u2212r \u223c0.2\u22120.6 with very few in the red peak. Third, the vast majority of galaxies on the left side of the critical \f\u2013 20 \u2013 entropy line are in the blue cloud, as they should. 
Fourth, there is a signi\ufb01cant number of galaxies on the right side of the critical entropy line that appear blue and have masses covering a comparable range compared to those in the red sequence. Thus, Figure 13 gives a physical underpinning for the well-known color-magnitude diagram of galaxies (e.g., Baldry et al. 2004). The existence of the cold-gas-starved yet blue galaxies indicates that external gas heating is the driving force to cause these blue galaxies to migrate upward in Figure 13 to ultimately join the red sequence. The fact that many galaxies in the V run, although having higher sSFR than those in the C run (see Figures 6, Figures 9, Figure 10 and Figure 12), both remain blue (as they should, given the high sSFR) and have low entropies suggest that SF is not the primary driver for the color migration. Internal driver, such as feedback from starbursts or AGN, may play a role in quenching star formation in a small fraction of galaxies that experience immense starbursts (e.g., caused by major mergers); but the situation is unclear at present. 3.4. Predictions Several manifestations of downsizing trends should by now be understood, including (1) the epoch of major stellar mass buildup in massive galaxies is substantially earlier than the epoch of mass buildup in low-mass galaxies, (2) the SF and stellar mass buildup are accelerated in overdense regions compared to less overdense regions, (3) massive galaxies are on average older than less massive galaxies, (4) galaxies of all masses, on average, get bluer with increasing redshift, (5) galaxy self metal enrichment shifts from high-mass galaxies at high redshift to lower-mass galaxies at lower redshift, all in broad agreement with a variety of observations (e.g., Kodama et al. 2004; P\u00b4 erez-Gonz\u00b4 alez et al. 2005; Bundy et al. 2006; Noeske et al. 2007; Zheng et al. 2007; Martin et al. 2007; Tresse et al. 2007; Buat et al. 2007; Lehmer et al. 2008; Mobasher et al. 2009; Hartley et al. 2010; Cirasuolo et al. 2010; Karim et al. 2011; Pilyugin & Thuan 2011). This model provide a coherent and uni\ufb01ed physical interpretation. Many other general trends in galaxy formation and evolution that this model would predict have already been con\ufb01rmed by observations, including (1) the galaxy color-environment relation (e.g., Blanton et al. 2005), (2) galaxy star formation as a function of environment, speci\ufb01cally the dramatic transition at a few cluster virial radii that mark location of virial shocks (e.g., G\u00b4 omez et al. 2003), (3) the trend of galaxies having higher sSFR and becoming bluer towards voids from cluster environments (e.g., Kau\ufb00mann et al. 2004; Rojas et al. 2004, 2005), (4) redder galaxies have stronger correlation functions than blue galaxies, irrespective of their luminosities (e.g., Zehavi et al. 2005). Several additional relatively robust trends may be predicted: (1) the faint end slope of the galaxy luminosity function should approach the Press-Schechter value of \u223c\u22122.0 at \f\u2013 21 \u2013 high redshift z \u22656, subject to uncertain e\ufb00ects of cosmological reionization. (2) Crosscorrelation between CMB Sunyaev-Zeldovich y maps and density of red galaxies is expected to be positive and the opposite is true for that between y maps and density of blue galaxies. (3) Correlations (galaxy-galaxy lensing) between background galaxy shapes and foreground red galaxies should be systematically stronger than between background galaxy shapes and foreground blue galaxies. 4." 
+ }, + { + "url": "http://arxiv.org/abs/1102.0262v3", + "title": "Physics of Coevolution of Galaxies and Supermassive Black Holes", + "abstract": "A new model for coevolution of galaxies and supermassive black holes (SMBH)\nis presented that is physically based. The evolutionary track starts with an\nevent that triggers a significant starburst in the central region of a galaxy.\nIn this model, the main SMBH growth takes place in post-starburst phase fueled\nby recycled gas from inner bulge stars in a self-regulated fashion on a time\nscale that is substantially longer than 100Myrs and at a diminishing Eddington\nratio with time. We argue that the SMBH cannot gorge itself during the\nstarburst phase, despite the abundant supply of cold gas, because star\nformation is a preferred mode of gas consumption in such an environment than\naccretion to the central SMBH. We also show that feedback from star formation\nis at least as strong as that from AGN and thus, if star formation is in need\nof being quenched, AGN feedback generally does not play the primary role. The\npredicted relation between SMBH mass and bulge mass/velocity dispersion is\nconsistent with observations. A clear prediction is that early-type galaxy\nhosts of high Eddingtion rate AGNs are expected to be light-blue to green in\noptical color, gradually evolving to the red sequences with decreasing AGN\nluminosity. A suite of falsifiable predictions and implications with respect to\nrelationships between various types of galaxies and AGN, and others, are made.\nFor those where comparisons to extant observations are possible, the model\nappears to be in good standing.", + "authors": "Renyue Cen", + "published": "2011-02-01", + "updated": "2012-05-23", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.GA", + "astro-ph.HE" + ], + "main_content": "Introduction The tight correlation between galactic center supermassive black hole (SMBH) mass ( MBH) and the bulge mass ( MBG) or velocity dispersion (\u03c3) in the nearby universe (e.g., Richstone et al. 1998; Ferrarese & Merritt 2000; Tremaine et al. 2002) strongly suggests coevolution of the two classes, at least over the Hubble time. In many semi-analytic calculations one of the most adopted assumptions, to put it simply, is that active galactic nuclei 1Princeton University Observatory, Princeton, NJ 08544; cen@astro.princeton.edu arXiv:1102.0262v3 [astro-ph.CO] 23 May 2012 \f\u2013 2 \u2013 (AGN) feedback is able to prevent most of the gas from accreting onto the SMBHs and at the same time is able to \ufb01x most of the \u201cdefects\u201d of galaxy formation models such as the shape of the galaxy luminosity function and star formation (SF) history (e.g., Kau\ufb00mann & Haehnelt 2000; Croton et al. 2006; Somerville et al. 2008) with the underlying feedback physics parameterized. The substantial success in explaining a variety of observations enjoyed by these semi-analytic models is indicative of the relevance of AGN feedback. Calculations of the coupled evolution of SMBHs and galaxies using three-dimensional hydrodynamic simulations deploy thermal energy feedback in regions signi\ufb01cantly outside of the Bondi radius of the putative SMBH that e\ufb00ectively couples to the surroundings to regulate the SF and eventually drive the gas away. 
These pioneering detailed simulations have provided much physical insight and appear to be remarkably successful in accounting for many intricate observables, including AGN light curves, Eddington ratio distributions and SMBH-bulge relation and its scatter, for certain chosen value of the feedback energy strength (e.g., Di Matteo et al. 2005; Hopkins et al. 2006). What is hitherto left open in these calculations is the physical origin of the adopted energy feedback. One concern is that the derived SMBHbulge relation depends very sensitively on the adopted energy feedback parameter due to the strong radiative cooling (e.g., Silk & Nusser 2010; Choi & Ostriker 2011). Thus, it is prudent to seek underlying physical origins for these successful models and, before that is achieved, continue to explore alternative models. This paper synthesizes an alternative physical model largely based on known physics. Before describing our overall model, we shall \ufb01rst, in \u00a72, examine the plausibility of the fundamental claim that AGN feedback is primarily responsible for regulating not only SMBH growth but also SF. We argue that scenarios invoking AGN as the primary \u201cblowing machine\u201d during the intense starburst phase may logically require signi\ufb01cant \ufb01ne-tuning. We then describe the evolutionary path from a starburst to an elliptical galaxy, including the coupled evolution of star formation and SMBH growth in the ensuing two sections. In \u00a73, we show that growth of SMBH during the starburst phase is limited and constitutes a small fraction of the overall SMBH consumption. The physical reason is that this phase is over-supplied with gas such that only a very small central disc is gravitationally stable (Toomre parameter Q > 1) for gas accretion onto the SMBH, while all other regions are unstable and more conducive to star formation. Since the SF time scale is much shorter than the Salpeter accretion time scale, most of the gas forms into stars. The accreted mass during this phase is probably limited to a few percent of the \ufb01nal SMBH mass. In \u00a74, we point out that energy or momentum feedback from SF is at least as competitive as that from the AGN during the starburst phase. Therefore, SF is largely responsible for blowing most of the last patch of gas away to end the starburst phase. In short, during the starburst phase, the SMBH does not grow signi\ufb01cantly and does not play the leading role in quenching the star formation. \f\u2013 3 \u2013 In \u00a75, we show that most of the growth of the SMBH occurs in the ensuing poststarburst period, when the bulge/elliptical galaxy is largely in place and SF enters \u201cpassive\u201d evolution. The fuel for this primary growth phase is provided by the gas recycled back into the interstellar medium (ISM) from aging bulge stars, proposed earlier by Norman & Scoville (1988) in the context of a central stellar cluster and stressed recently by Ciotti & Ostriker (2007) in the context of elliptical galaxies. It provides a relatively \u201cdi\ufb00use\u201d (compared to the starburst phase) but steady gas supply that, we show, is ideal for feeding SMBH via an accretion disc. Meanwhile, SF is the dominant mode for gas consumption in the outer region because the accretion is unstable to fragmentation there, even in this phase. Selfregulation is at work for the growth of the SMBH during this period and is provided by much more robust (compared to energy feedback) radiation pressure induced momentum. 
The amplitude and slope of the resultant SMBH-bulge relation with this self-regulation is consistent with observations. In this model, the entire evolution from the onset of starburst, due to a gas-rich merger or some signi\ufb01cant event that drives a large amount of gas into the central region within a short period of time, to becoming a quiescent elliptical galaxy (or a bulge of a future spiral galaxy) consists of three distinct periods, as summarized in \u00a75.1 and in Figure 2: (1) \u201cStarburst Period\u201d: merger of two gas-rich spiral galaxies or some other signi\ufb01cant event induces a starburst that lasts about 107 \u2212108yrs. The SMBH grows modestly during this period. The feedback energy/momentum from the starburst, i.e., supernovae, drives the last patch of gas away and helps shut down star formation. (2) \u201cSMBH Prime Period\u201d: several hundred million years after the end of the starburst, aging low-to-intermediate mass stars, now in the form of red giants and other post-main-sequence states, start to return a substantial fraction of their stellar mass to the ISM. The SMBH accretion is mostly supply limited in most of this period, except during the \ufb01rst several hundred million years or so, and lasts for order of gigayear. Because the rate of gas return from stars diminishes with time, the Eddington ratio of the SMBH decreases with time and the SMBH spends most of the time during this period at low the Eddington ratio (\u226410\u22123). The SMBH growth is nearly synchronous with star formation from recycled gas during this period. The accompanying star formation rate is quite substantial, roughly \u223c(5 \u221210)(M\u2217/1011 M\u2299)(t/1Gyr)\u22121.3 M\u2299yr\u22121, where t is time in Gyr and M\u2217is stellar mass of the elliptical galaxy formed during the starburst (at t = 0). The duration of this phase depends sensitively on the lower cuto\ufb00mass of the initial mass function (IMF). (3) \u201cQuiescent Elliptical Galaxy\u201d: several gigayears after the end of the starburst the elliptical galaxy is now truly red and dead gas return rate is now negligible so both accretion to the central SMBH and residual star formation have ceased. It is possible, at least for an elliptical galaxy that is not too massive (i.e., Mtot \u22641012 M\u2299), that it may grow a disk. The feeding of the central SMBH in the bulge of spiral galaxy during this period is no longer by aging stars, rather by occasional objects (molecular clouds, stars, etc) that happen to be on some plunging orbits due to secular or random events. \f\u2013 4 \u2013 We present some predictions and implications of this model in \u00a76.2-6.9, followed by conclusions in \u00a77. Where comparisons can be made between the predictions of the model and observations, they appear to be in good agreement. Some additional predictions could provide further tests of the model. 2. AGN Cannot Regulate Star Formation During Starburst While the subsequent sections of quantitative physical analysis are independent of statements made in this section, we shall argue for the assertion in the title of this section with logic, in hopes of being able to provide some conceptual clarity to the role of AGN feedback on star formation during the starburst phase. The starting point of the evolutionary sequence is a starburst. It may be triggered by a major merger of two gas-rich galaxies or by other signi\ufb01cant events that channel a large amount of gas into the central region in a short period of time. 
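As a rough quantitative guide to period (2) above, the quoted recycled-gas star formation rate can be encoded as follows; the prefactor 7.5 is simply the midpoint of the quoted 5-10 range and is a choice made here, not a value from the model.

```python
def recycled_gas_sfr(t_gyr, m_star=1.0e11):
    """Approximate SFR (Msun/yr) fed by stellar mass loss in the post-starburst
    ("SMBH Prime") period, ~ (5-10) (M*/1e11 Msun) (t/1 Gyr)^-1.3, with t
    measured from the starburst; prefactor 7.5 is an illustrative midpoint."""
    return 7.5 * (m_star / 1.0e11) * t_gyr**(-1.3)

# e.g. ~7.5 Msun/yr at 1 Gyr and ~0.4 Msun/yr at 10 Gyr for a 1e11 Msun bulge,
# tracking the diminishing Eddington ratio of the synchronously fed SMBH.
```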
Consider that an event causes a large amount of gas of mass Mgas to land in the central region. Physical processes then operate on the gas to produce a starburst, accompanied by some growth of the central SMBH, along with some associated feedback from both. Extreme events of this kind may be identi\ufb01ed with observed Ultra-Luminous InfraRed Galaxies (ULIRGs) (e.g., Sanders et al. 1988) or Sub-Millimeter Galaxies (SMGs) (e.g., Chapman et al. 2005). Theoretical models (e.g, Silk & Rees 1998; Hopkins et al. 2006) have proposed that feedback from AGN is responsible for the regulation of SF and SMBH growth so as to produce the observed Magorrian et al. (1998) relation where the ratio of the \ufb01nal SMBH to bulge stellar mass is MBH : MBG \u223c2 : 1000. We shall now re-examine this case. Consider how the infallen gas may be partitioned. Mass conservation requires MBH + MBG + Mout = Mgas, where Mout is the amount of gas that is blown away from the bulge. Clearly, only a very small fraction of the initial gas Mgas can possibly end up in the central SMBH, i.e., fBH \u2261MBH/ Mgas \u226a1. Let us assume that the reason for a very small fBH is because the feedback from the central SMBH prevented its own further growth during this phase. Since SMBH masses are observed to span a very wide range, it must be that this purported SMBH feedback process that regulates its own growth is galaxy speci\ufb01c, i.e., dependent on at least some physical variables characterizing the galaxy. A usual and reasonable assumption (which we are not advocating at the moment) for that is that either the gravitational potential well of the bulge or of the total halo determines the \ufb01nal SMBH mass, in coordination with its feedback. Does SMBH feedback dominate that of starburst in terms of regulating both SMBH growth and starburst? While we will show later (in \u00a73) that the answer is largely no to regulating the starburst at least, we assume that the answer is yes to both for the sake of continuing the present thought experiment. The simpli\ufb01ed sequence of events then plays out \f\u2013 5 \u2013 more or less as follows. The central SMBH accretes gas and builds up its feedback strength until its mass has reached the observed value, then blows away all the remaining gas and both SMBH accretion and SF stop abruptly. What might have happened to SF during all this time before the gas is blown away? There are three possible scenarios. Scenario #1, the SMBH accretion is so competitive and quick that most of the gas is blown away by the SMBH feedback before much SF has occured. That of course cannot have happened, because that would be inconsistent with the observed MBH \u2212MBG relation. Scenario #2, SF precedes at a pace that is in concert with the SMBH feedback such that by the time that MBH = 0.002( Mgas \u2212Mout), the amount of stars formed is equal to MBG = 0.998( Mgas \u2212Mout); the rest of gas of mass Mout got blown away by the feedback from the SMBH. This scenario is designed to match the observed MBH \u2212MBG relation. What remains undetermined is how large fout \u2261Mout/ Mgas is. Is it close to 1 or 0? In the case fout \u223c1, because (1 \u2212fout) is a small number, there is no particular preferred value for it. The potential well created by the eventual bulge stars would be much shallower than the original one already created by the residing gas. 
In other words, the SMBH only knew the potential well of the original gas and it would be rather arbitrary how much stars the SMBH decides to allow the bulge to have. If one argues that it is the potential well of the total halo mass that matters, the SMBH still did not know how to let SF take place at such a rate that we have the very tight observed MBH \u2212MBG relation for the bulge region. Thus, this case also appears to require much \ufb01ne tuning. Besides, if (1 \u2212fout) is too small, the bulge will be too small compared to what is observed. The opposite case with fout \u226a1 is at least substantially more stable, since a large fraction of the original gas has formed into stars before the remainder of the gas got blown away. In this case the SMBH would \u201cknow better\u201d the gravitational potential well eventually sustained by bulge stars, because it is not too far from that created by the initial gas. Then, how did the SMBH know when to blow away the remaining gas left over from SF and SMBH accretion? Should the SMBH blow away the gas when fout = 0.90 (an arbitrarily picked number for illustration purpose) or should it wait a bit longer to \ufb01nally blow away the gas when fout = 0.10? It may require more energy or momentum in the former than the latter; but that can readily be accommodated by a proportionally increased amount of gas accreted, in the vein of feedback from SMBH providing the required feedback energy or momentum. Since the amount of gas available before fout = 0.90 is blown away in this hypothetical case is capable of growing the SMBH to be 900 more massive than observed and the amount of time available (cosmological scale) is much longer than Salpeter time, there is no obvious reason why the SMBH cannot grow 10 times (or whatever factor) larger to blow away the gas when fout = 0.90 instead of when fout = 0.10. How the SMBH has communicated with the bulge to ration the gas consumption would be a mystery. Thus, even in this case with fout \u226a1, taking it as a given that the SMBH always stands ready to provide the necessary feedback, having SMBH feedback to regulate the overall SF in the bulge such that the ratio of the two matches \f\u2013 6 \u2013 the observation, again, requires a substantial amount of \ufb01ne tuning. Nevertheless, since it is reasonable to expect that the dependence of the outcome, such as the MBH \u2212MBG relation, on any proposed feedback processes (including those based on thermal energy deposition near the galaxy center) is likely a monotonic function of the adopted feedback strength, it should be expected that a solution be found such that the observed MBH \u2212MBG relation is obtained, for some chosen value of feedback strength, at least for some narrow range in MBG. But, until there is clear physical reason or direct observational evidence to support the chosen value of the feedback parameter which the solution sensitively depends on, such an approach remains to be re\ufb01ned. We will provide an alternative, signi\ufb01cantly less contrived, quantitative physical mechanism to circumvent this concern of \ufb01ne tuning. 3. 
Starburst Phase: Modest SMBH Growth and SF Shutdown by Stars We have argued in the previous section that AGN feedback cannot logically play the leading role in regulating SF, in the sense that while some feedback from the SMBH can certainly a\ufb00ect its surrounding gas, there is no particular reason why this could provide a quite precise (within a factor of a few) rationing mechanism during the starburst phase so as to produce the observed relation between the two. We shall now argue for Scenario #3: during the starburst phase the SF is self-regulated and self-limited, while SMBH growth is modest, does not need regulation and does not provide signi\ufb01cant feedback to star formation. We now give a physical reason for why, even though there is a very large supply of gas in the bulge region during the starburst phase, the SMBH growth is modest. We will make three simplifying assumptions to present trackable illustration without loss of generality. We assume (1) for the regions of interest a geometrically thin Keplerian disc dominated by the SMBH gravity (at least at the radii of interest here) is in a steady state, meaning the accretion rate (Frank et al. 1992): \u02d9 M = 3\u03c0\u03bd\u03a3g \u0002 1 \u2212(rin/r)1/2\u0003\u22121 \u22483\u03c0\u03bd\u03a3g (1) is constant in radius r and time, where \u03a3g is gas mass surface density and \u03bd is viscocity; the last equality is valid because the radii of interest here are much larger than the radius of the inner disc rin; note that it is inevitable to form a disk in the central given the rapid cooling and \ufb01nite angular momentum; (2) we adopt the \u03b1-disc viscosity (Shakura & Sunyaev 1973): \u03bd = \u03b1c2 s\u2126\u22121, (2) where \u03b1 is a dimensionless viscosity constant for which magnetorotational instability process (Balbus & Hawley 1991) provides a physical and magnitude-wise relevant value; cs is sound speed and \u2126is angular velocity (equal to epicyclic frequency for Keplerian disc). The Toomre \f\u2013 7 \u2013 Q parameter of the gas disc can be obtained from Equations (1,2): Q \u2261 cs\u2126 \u03c0G\u03a3g = 1 31/2\u03c03/2\u03b11/2 \u02d9 M MBH !1/2 G\u22121/4 MBH 5/4 \u03a33/2 g r9/4 ! (3) where G is gravitational constant. The slope of the surface brightness pro\ufb01les of the inner region of the observed powerlaw elliptical galaxies, which are assumed to be the product of the starbursts resulting from the gas-rich galaxy mergers, has a value concentrated in the range \u22121.0 to \u22120.5 (e.g., Faber et al. 1997; Kormendy et al. 2009), reproduced in merger simulations (e.g., Hopkins et al. 2009). Presumably the initial gas density pro\ufb01le is similar to the \ufb01nal observed stellar density pro\ufb01le in the inner regions. For ease of algebraic manipulations, we assume (3) the de Vaucouleur mass surface density pro\ufb01le (with a halfmass radius re) but with the inner region at r \u2264rp \u22610.07re modi\ufb01ed to be a Mestel disc as: \u03a3g(r) = \u03a30 \u0012 r r0 \u0013\u22121 for r \u2264rp, (4) where \u03a30 is the normalizing surface density at some radius r0; we will only be dealing with region r \u2264rp; the notional nuclear velocity dispersion of the system without the central SMBH at r \u2264rp is related to \u03a30 and r0 by \u03c32 n = \u03c0G\u03a30r0. (5) Subsequent results do not sensitively depend on the exact slope. 
The total mass of such a hybrid profile is equivalent to that of a truncated isothermal sphere with a truncation radius of 2 re and a velocity dispersion on galactic scales of σg, such that

$$\sigma_n = 1.55\,\sigma_g. \quad (6)$$

Since the dynamical time (only 5 × 10^6 yr at 1 kpc for a 200 km/s bulge) is much shorter than the Salpeter time, it is appropriate to assume that the gas disc is assembled instantaneously, with respect to accretion onto the SMBH, when infalling gas lands on the disc. Combining Equations (3,4,5,6) we rewrite Q as

$$Q = 0.32\,\alpha_{0.01}^{-1/2}\,\epsilon_{0.1}^{-1/2}\,l_E^{1/2}\,M_8^{5/4}\,\sigma_{200}^{-3}\,r_{\rm pc}^{-3/4}, \quad (7)$$

where α0.01 = α/0.01, ε = 0.1 ε0.1 is the SMBH radiative efficiency, lE is the Eddington ratio, M8 = MBH/10^8 M⊙, σ200 = σg/200 km/s, and rpc = r/1 pc. The value of α is quite uncertain, possibly ranging from 10^−4 to 1 (e.g., Hawley et al. 1995; Brandenburg et al. 1995; Stone et al. 1996; Armitage 1998; Gammie 2001; Fleming & Stone 2003; Fromang & Papaloizou 2007). Setting Q in Equation (7) to unity defines the disc stability radius

$$r_Q = 0.22\,\alpha_{0.01}^{-2/3}\,\epsilon_{0.1}^{-2/3}\,l_E^{2/3}\,M_8^{5/3}\,\sigma_{200}^{-4}\ {\rm pc}, \quad (8)$$

within which Q > 1 and the disc is stable against gravitational fragmentation, and outside which Q < 1 and the disc is subject to gravitational fragmentation to form stars, as supported both by simulations (e.g., Gammie 2001; Rice et al. 2003) and by circumstantial observational evidence of the existence of a stellar disc at small Galactic radius (∼0.1 pc) (e.g., Levin & Beloborodov 2003; Paumard et al. 2006). The demarcation value of Q between stability and fragmentation does not appear to change qualitatively even if the disc is under strong illumination (e.g., Johnson & Gammie 2003), as might happen to a nuclear gas disc in the starburst phase. The disc mass within rQ is

$$M_Q = 9.8\times 10^{6}\,\alpha_{0.01}^{-2/3}\,\epsilon_{0.1}^{-2/3}\,l_E^{2/3}\,M_8^{5/3}\,\sigma_{200}^{-2}\ M_\odot. \quad (9)$$

This is the accretable mass out of the entire bulge region (note that some of the outer regions are more supported by random motions). This conclusion is in good agreement with Goodman (2003), who employs somewhat different assumptions than this study, in that he assumes local energy balance while we impose the observationally inferred inner density profile to be self-consistent; the good agreement suggests that this result is quite robust and insensitive to the assumptions made. Taking a cue from our own Galaxy, if we assume that the initial SMBH mass of each of the two merging spiral galaxies of mass ∼10^12 M⊙ is 2.5 × 10^6 M⊙, then for a spiral galaxy with a velocity dispersion of 200 km/s the amount of mass that can be readily accreted according to Equation (9), using M8 = 0.05, is $6.7\times10^{4}\,\alpha_{0.01}^{-2/3}\epsilon_{0.1}^{-2/3}l_E^{2/3}\,M_\odot$. Note that the final SMBH mass for such a system is ∼1.3 × 10^8 M⊙ (Tremaine et al. 2002), if we were to match the observations. It is possible that the mass accreted by the SMBH is larger than indicated by Equation (9) due to replenishment. Replenishment of low angular momentum gas during the starburst phase may occur in two ways: (1) through orbital decay of outer disc gas, or (2) direct infall of low angular momentum gas from outer regions. We show below that (1) does not significantly increase the accretable mass.
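As a quick numerical check of the scalings above, the sketch below simply transcribes Equations (8) and (9) and evaluates them for the fiducial parameters and for the M8 = 0.05 seed discussed in the text; the only assumptions are the parameter values passed in.

```python
# Equations (8) and (9) transcribed directly as scaling relations.
def r_Q_pc(alpha001=1.0, eps01=1.0, l_E=1.0, M8=1.0, sigma200=1.0):
    """Equation (8): Q = 1 radius in pc."""
    return 0.22 * alpha001**(-2/3) * eps01**(-2/3) * l_E**(2/3) * M8**(5/3) * sigma200**(-4)

def M_Q_msun(alpha001=1.0, eps01=1.0, l_E=1.0, M8=1.0, sigma200=1.0):
    """Equation (9): gas mass within r_Q in Msun."""
    return 9.8e6 * alpha001**(-2/3) * eps01**(-2/3) * l_E**(2/3) * M8**(5/3) * sigma200**(-2)

# Fiducial case: alpha = 0.01, eps = 0.1, l_E = 1, M_BH = 10^8 Msun, sigma_g = 200 km/s.
print(f"fiducial: r_Q = {r_Q_pc():.2f} pc, M_Q = {M_Q_msun():.2e} Msun")

# Seed SMBH with M8 = 0.05 (two 2.5e6 Msun seeds), as in the merger example above:
print(f"seed SMBH (M8 = 0.05): accretable mass M_Q = {M_Q_msun(M8=0.05):.2e} Msun")
```

Run as written, this returns rQ = 0.22 pc and MQ = 9.8 × 10^6 M⊙ for the fiducial case and ≈6.7 × 10^4 M⊙ for M8 = 0.05, matching the numbers quoted in the text.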
Process (2) is probably unavoidable to some extent but unlikely to be frequent enough to be signi\ufb01cant for the following reasons. All the low angular momentum infalling gas falls into the inner regions initially according to its respective speci\ufb01c angular momentum driven by the torque of the trigger event (e.g., merger or some other signi\ufb01cant torquing event). To replenish low angular momentum gas directly to the central region some frequent and signi\ufb01cant torquing events during the starburst phase are needed. It seems unlikely that such events will be frequent enough to be able to reach the observed \ufb01nal SMBH mass: about \u223c100 \u22121000 replenishments will be required. One might approximately equate the number of replenishment (i.e., signi\ufb01cant disturbance) to the number of generations of stars formed during the starburst phase (by assuming that each generation of star formation manages to redistribute the angular momentum of a signi\ufb01cant fraction of the gas), which is unlikely to be close to \u223c100 \u22121000. In summary, taking into account possible additional accretion due to some replenishment and giving the bene\ufb01t of the possibility of \u03b10.01 < 1, it seems improbable that the SMBH is able to acquire a mass during the starburst phase that would be much more than 10% of the \ufb01nal value. \f\u2013 9 \u2013 At r \u2265rQ, the disc is unstable to SF. For SF under the conditions relevant here both the dynamical and cooling time are short and do not constitute signi\ufb01cant bottleneck; if they were the only time scale bottleneck, SF would be too e\ufb03cient. A possible bottleneck for SF is the time scale to rid the cloud of the magnetic \ufb02ux (assuming the SF clouds are initially magnetically sub-critical). The main ionization source in the depth of molecular cloud cores is cosmic rays (CR). While the exact ionization rate by CR is unknown for other cosmic systems, we have some estimate of that for our own Galaxy, \u03b6CR,Gal = (2.6 \u00b1 1.8) \u00d7 10\u221217 s\u22121 (e.g., van der Tak & van Dishoeck 2000). If one assumes that the CR ionization rate in starburst is 100 times (modeling a typical ULIRG in this case) that of the Galactic value, considering that the SF rate in ULIRGs is 100 \u22121000 times the Galactic value occuring in a more compact region and that the CR in ULIRGs may be advected out via fast galactic winds (versus slow di\ufb00usion in the Galaxy), one may roughly estimate that the ambipolar di\ufb00usion time is 7 \u00d7 106yr at a density of n \u223c105 cm\u22123 using standard formulas for recombinations (e.g., McKee & Ostriker 2007). This estimate is, however, uncertain. We will again look to direct observations to have a better gauge. Gao & Solomon (2004) show, from HCN observations, that ULIRGs and LIRGs convert molecular gas at n \u22653 \u00d7 104 cm\u22123 at an e-folding time scale of tSF \u223c2Myr, consistent with the above rough estimate. It is clear that SF time scale is much shorter than the Salpeter time of 4.5\u00d7107\u03f50.1yr; in other words, when gas is dense and unstable, star formation competes favorably with the SMBH accretion with respect to gas consumption. Therefore, most of the gas at r \u2265rQ will be depleted by SF. When the density pro\ufb01le of the disc at r \u2265rQ steepens to be \u03a3g(r) = \u03a3Q(r/rQ)\u22125/2, where \u03a3Q is the gas surface density at \u223crQ, the disc at r \u2265rQ may become stable again. 
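The timescale comparison above, between the star-formation e-folding time and the Salpeter time, carries much of the weight of the argument, so the following sketch recomputes the Salpeter e-folding time from standard constants and compares it with the ∼2 Myr HCN-based gas-consumption time quoted from Gao & Solomon (2004). The constants are standard cgs values; nothing else is assumed.

```python
# Comparison of the SMBH e-folding (Salpeter) time with the star-formation
# e-folding time quoted for ULIRGs/LIRGs.
import numpy as np

G, C = 6.674e-8, 2.998e10
SIGMA_T, M_P, YR = 6.652e-25, 1.673e-24, 3.156e7

def salpeter_time_yr(eps=0.1):
    """t_Salpeter = eps * c * sigma_T / (4 pi G m_p), i.e. M / Mdot_Edd."""
    return eps * C * SIGMA_T / (4.0 * np.pi * G * M_P) / YR

t_salp = salpeter_time_yr()     # ~4.5e7 yr for eps = 0.1, as quoted in the text
t_sf   = 2.0e6                  # yr, e-folding time from Gao & Solomon (2004)
print(f"t_Salpeter ~ {t_salp:.1e} yr, t_SF ~ {t_sf:.1e} yr, ratio ~ {t_salp/t_sf:.0f}")
```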
While continued accretion supplied by gas on the outer disc is likely, albeit at a much lower level, the mass integral is convergent and most of the mass of this outer disc is at rQ given the density slope, even if the entire outer disc at this time is accreted. Thus, it appears that the amount of gas that is actually accreted by the SMBH during the starburst phase is rather limited. This new and perhaps somewhat counter-intuitive conclusion is strongly supported by available observations of ULIRGs. This conclusion is also opposite to most models that rely on SMBH to provide the necessary feedback to regulate star formation (e.g, Silk & Rees 1998; Hopkins et al. 2006). Observational evidence is that the SMBHs in ULIRGs and SMGs appear to be signi\ufb01cantly smaller (an order of magnitude or more) than what the MBH \u2212MBG relation would suggest (e.g., Genzel et al. 1998; Ivison et al. 2000; Ptak et al. 2003; Ivison et al. 2004; Alexander et al. 2005a,b; Kawakatu et al. 2006; Alexander et al. 2008). Nonetheless, it is expected that the AGN contribution in ULIRGs should become relatively more important for larger more luminous galaxies (see Equation 9), consistent with observations (e.g., Lutz et al. 1998). Starbursts occuring on rotating nuclear disc/rings in ULIRGs are also supported by circumstantial observational evidence (e.g., Downes & Solomon 1998). The overall conclusion that the SMBH feedback has little e\ufb00ect on the amount of stars \f\u2013 10 \u2013 formed is in agreement with that of DeBuhr et al. (2010) who investigated the radiation pressure-regulated SMBH feedback in the starburst phase of the merger simulations utilizing a sub-grid model for SMBH accretion. One speci\ufb01c common outcome between our calculation and their simulation is that most of the gas formed into stars, regardless of the feedback strength. A notable di\ufb00erence between our calculation and theirs is that their simulation resolution, a gravitational softening length of 47 pc, is signi\ufb01cantly larger than rQ (Equation 8). As a result, it is possible that their simulations do not resolve small scale that separates stable accretion from unstable, fragmenting disc, which is crucial to our quantitative conclusion (note that they use the viscosity parameter \u03b1 = 0.05 \u22120.15 that is larger than our \ufb01ducial value of 0.01, which would yield a still smaller rQ, see Equation 9). Thanks to that di\ufb00erence, we were able to conclude that, even without considering any feedback from the central SMBH, the SMBH during the starburst phase does not grow to anywhere close the observed \ufb01nal mass, because star formation can more favorably deplete the gas that may otherwise accrete to the SMBH, whereas they \ufb01nd SMBH masses to be too large even with substantial feedback (note that they use 10 times L/c radiation pressure force assuming multiple scatterings of each converted FIR photon). It seems likely that their di\ufb00erent conclusion may be due to a much higher accretion rate at their resolution scale, which we argue does not re\ufb02ect the actual accretion onto the SMBH, but rather the disc is unstable at that scale and mostly forms stars. As we have noted in the previous paragraph, observations indicate that the SMBH masses in the starburst phase appear to be smaller than the \ufb01nal values seen in quiescent elliptical galaxies by an order of magnitude, consistent with our conclusion. 
Substantially higher resolution (a factor of ∼100) simulations may be necessary in order to realistically and more accurately simulate the intricate competition between accretion and star formation.

4. A Comparison of Feedback Energetics Between Star Formation and SMBH

Having shown the unlikelihood of substantial SMBH growth during the starburst phase, we now turn to a comparison of the energetics of the SMBH and SF to show that feedback from the starburst itself should play the leading role in shutting down or quenching star formation, i.e., promptly sweeping away the final portion of the gas, where needed. To avoid any apparent bias against the SMBH, or a possibly circular-looking argument given our assertion that most of the SMBH growth takes place in the post-starburst phase (as we will show in §5), we shall for the moment generously assume that the entire SMBH growth occurs during the starburst phase, so as to maximize the energy output from the SMBH when comparing the energetics of the SMBH and the starburst. In Table 1, under the assumptions that MBH : MBG = 2 : 1000, a Salpeter IMF for stars, and a radiative efficiency of SMBH accretion of 10%, the energy outputs from both SF and the SMBH are listed in various forms: (1) total radiation energy, (2) ionizing radiation, (3) X-ray radiation in the 2−10 keV band, (4) mechanical energy, which is supernova explosion energy for SF and broad absorption line (BAL) outflow energy for the SMBH, respectively, and (5) radio jets. To obtain the energy in ergs per unit stellar mass formed, one simply multiplies each coefficient in Table 1 by Mstar c², where c is the speed of light. The relevant references are Elvis et al. (1994) and Sazonov et al. (2004) for both εBH(LL) and εBH(2-10keV), Ranalli et al. (2003) for ε∗(2-10keV), Moe et al. (2009) and Dunn et al. (2010) for εBH(BAL), and Allen et al. (2006) for εBH(jet) (if one instead uses the energy seen in the most powerful radio jet lobes and assumes that they are produced by the most massive SMBHs, a comparable value is obtained). The entry for the BAL energy is based on two cases and is very uncertain, primarily due to the lack of strong constraints on the location of the BAL gas and its covering factor. It is evident that, aside from the energy in the form of radio jets and hard X-rays, SF is at least competitive with the SMBH. Heating due to hard X-rays from the SMBH, via metal lines or Compton heating, affects only the very central region surrounding the SMBH, not the entire galaxy (Ciotti & Ostriker 2007). Within the physical framework outlined here, where most of the SMBH growth occurs post-starburst and radio jets occur at a still later stage in core elliptical galaxies, the energy output (or the momentum output derived from it) from SF in all relevant forms should dominate over that of the SMBH. Our argument that radio jets occur at a later stage in galaxy evolution is not at present based on a physical model, but on empirical evidence. Observationally, it appears that all significant radio jets are launched in elliptical galaxies that have flat cores (Balmaverde & Capetti 2006), with a very few exceptions that originated in disc galaxies (e.g., Evans et al. 1999; Ledlow et al. 2001) or S0's (e.g., Véron-Cetty & Véron 2001). But none has been associated with an elliptical galaxy with an inner powerlaw brightness profile slope.

Table 1. Energy output per unit bulge stellar mass formed (in units of Mstar c²)
#    Form                            SF                           SMBH
(1)  total radiation                 ε∗(rad) = 7 × 10^−3          εBH(rad) = 2 × 10^−4
(2)  ionizing radiation (≥13.6 eV)   ε∗(LL) = 1.4 × 10^−4         εBH(LL) = 3 × 10^−5
(3)  X-ray (2−10 keV)                ε∗(2-10keV) = 9 × 10^−8      εBH(2-10keV) = 5 × 10^−6
(4)  mechanical                      ε∗(SN) = 1 × 10^−5           εBH(BAL) = (0.2−2.8) × 10^−5
(5)  radio jets                      ε∗(jet) = 0                  εBH(jet) = 4 × 10^−5
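As a sanity check on the table, the sketch below converts the tabulated coefficients into ergs per solar mass of bulge stars formed. The only inputs are the coefficients themselves; note that the SMBH entries already fold in the assumed MBH : MBG = 2 : 1000 ratio (e.g., εBH(rad) = 0.1 × 2/1000 = 2 × 10^−4).

```python
# Energy per unit bulge stellar mass formed, E = eps * M_star * c^2, for each
# feedback channel in Table 1.  Coefficients are copied directly from the table.
MSUN_C2 = 1.989e33 * (2.998e10)**2    # erg released per Msun formed when eps = 1

table = {
    #  channel                 eps_SF     eps_SMBH
    "total radiation":        (7e-3,      2e-4),
    "ionizing radiation":     (1.4e-4,    3e-5),
    "X-ray (2-10 keV)":       (9e-8,      5e-6),
    "mechanical (SN / BAL)":  (1e-5,      2.8e-5),   # BAL at the upper end of (0.2-2.8)e-5
    "radio jets":             (0.0,       4e-5),
}

print(f"{'channel':24s}{'SF [erg/Msun]':>16s}{'SMBH [erg/Msun]':>18s}")
for name, (e_sf, e_bh) in table.items():
    print(f"{name:24s}{e_sf * MSUN_C2:16.2e}{e_bh * MSUN_C2:18.2e}")
```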
It has been plausibly argued that powerlaw elliptical galaxies are produced by gas-rich mergers (we adopt this scenario, in which a powerlaw elliptical galaxy is produced following each major gas-rich-merger-triggered starburst) (e.g., Faber et al. 1997), whereas core elliptical galaxies are produced later by dry mergers of two elliptical galaxies, where the flat core is carved out by the merger of the two SMBHs via dynamical friction (e.g., Milosavljević & Merritt 2001). Directly supporting this statement is the lack of radio jets in available observations of ULIRGs (e.g., Alexander et al. 2010), in agreement with other observations that indicate a significant time delay between starburst and radio activities (e.g., Emonts et al. 2006). An independent, additional argument comes from the fact that radio jets are highly collimated and, for the most powerful ones that are energetically relevant, they appear to dissipate most of their energy at scales larger than that of the bulge region, suggesting that, even if one were to ignore the previous timing argument, the efficiency of heating of the bulge region by radio jets is likely low and at best non-uniform. Weaker radio feedback may be able to provide feedback energy steadily, but it is too weak to be energetically important; besides, it appears to operate almost exclusively in elliptical galaxies with atmospheres of hot gas (e.g., Best et al. 2005). The amount of supernova explosion energy that couples to the surrounding medium is ESN = 1 × 10^−5 M∗c², which is exactly equivalent to the 5 × 10^−3 MBH c² used in the influential simulations of Hopkins et al. (2006) with thermal AGN feedback, assuming MBH : MBG = 2 : 1000. Because the energy output from supernovae is subject to less cooling than that from the AGN, the former being deposited at larger radii and lower densities than the latter, we expect that the supernova energy can drive the gas away at least as effectively as the AGN energy proposed in those models. Thus, when most of the gas has formed into stars (i.e., the bulge is largely in place after ∼10^7 − 10^8 yr of starburst), the remaining gas should be blown away by collective supernova explosions and the starburst comes to a full stop, reminiscent of what is seen in the simulations of Hopkins et al. (2006) with AGN feedback. Detailed high-resolution simulations, taking into account cooling and other physical processes, will be necessary to ascertain the fraction of gas that is blown away. In short, the bulk of galactic winds is likely driven by stellar feedback from the starburst. Galactic winds are observed, and a causal connection between SF rate and wind fluxes has been firmly established (e.g., Heckman 2001; Weiner et al. 2009), lending strong observational support for this argument.
5. Post-Starburst: Main Growth of SMBH with Self-Regulation

The previous section ends when the starburst has swept away the remaining gas and ended itself. This section describes what happens next, in the post-starburst period, the initial part of which is also known as the K+A galaxy phase. The newly minted (future) bulge enters what is normally referred to as its "passive" evolutionary phase. We would like to show that this is when most of the action for the SMBH begins, fueled by recycled gas from aging low-to-intermediate mass stars. Since the two-body relaxation time is much longer than the Hubble time, it is safe to assume that the stars formed in the inner region during the starburst phase remain roughly in place radially. Angular momentum relaxation may also be ignored for our purpose (e.g., Rauch & Tremaine 1996). However, the stellar distribution in the inner region that initially formed on a disc has probably thickened substantially in the vertical direction, and we will assume that these stars no longer contribute substantially to the local gravity on the gas disc (within the thickness of the assumed thin gas disc) subsequently formed from returned stellar gas. Because stars in the inner regions are already mostly rotationally supported, the shed gas rains almost "straight down" to land at the location that its specific angular momentum allows, forming a disc. Obviously, going outward radially, the rotational support lessens and star formation may occur in a three-dimensional fashion; but that does not alter our argument about what happens at small radii. The orientation of the disc is approximately the same as that of the previous disc out of which the stars in the inner regions were formed, since the overall angular momentum distribution of the stars has not changed much in the absence of any subsequent intrusion. The most important difference of this new accretion disc, compared to the disc formed during the starburst phase, is that the new disc starts with almost no material, and its surface density increases gradually with time on a timescale of hundreds of megayears to a gigayear. To better gauge how the results depend on the assumed inner density slope, instead of assuming a Mestel disc as in §3, here we present a more general case with an inner density profile of the form

$$\Sigma_g(r) = \Sigma_0\left(\frac{r}{r_0}\right)^{-n}, \quad (10)$$

where n ∼ [0.5, 1] (e.g., Faber et al. 1997; Kormendy et al. 2009). For this case Equation (8) is modified, taking into account the gradual change of the gas disc surface density with time, to

$$r_Q = \left[\frac{1}{\pi(3\pi)^{1/2}}\right]^{4/[3(3-2n)]}\left(\frac{\dot{M}}{M}\right)^{2/[3(3-2n)]}(f_{\rm rec}f_g)^{-2/(3-2n)}\,\alpha^{-2/[3(3-2n)]}\,G^{-1/[3(3-2n)]}\,M^{5/[3(3-2n)]}\,\Sigma_0^{-2/(3-2n)}\,r_0^{-2n/(3-2n)}, \quad (11)$$

where frec is the total fractional stellar mass that is recycled back into the ISM and fg(t) is the fraction of the recycled gas that has returned by time t (out of the fraction frec). The process of SMBH accretion in this case goes as follows. The SMBH will accrete all the gas within its Bondi radius rB over some period of time, as long as rQ ≥ rB, where rB is defined as

$$r_B \equiv \frac{G\,M_{\rm BH}}{\sigma_n^2}, \quad (12)$$

with σn being the velocity dispersion of the inner region of the bulge (r ≤ 20 pc or so for MBH = 10^8 M⊙). For the moment we ignore any feedback effect from the SMBH.
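The sketch below evaluates the Bondi-type radius of Equation (12) for a few (MBH, σn) combinations; the particular values are assumptions chosen purely for illustration.

```python
# Equation (12): the Bondi-type radius within which the recycled gas is bound
# primarily to the SMBH rather than to the surrounding bulge.
G, MSUN, PC = 6.674e-8, 1.989e33, 3.086e18   # cgs

def r_bondi_pc(m_bh_msun, sigma_n_kms):
    return G * m_bh_msun * MSUN / (sigma_n_kms * 1e5)**2 / PC

for m_bh in (1e7, 1e8, 1e9):                  # Msun, assumed
    for sigma_n in (150.0, 200.0, 300.0):     # km/s, assumed
        r_b = r_bondi_pc(m_bh, sigma_n)
        print(f"M_BH = {m_bh:.0e} Msun, sigma_n = {sigma_n:.0f} km/s -> r_B ~ {r_b:6.1f} pc")
```

For a 10^8 M⊙ SMBH and σn in the 150-200 km/s range this gives rB of order 10-20 pc, consistent with the "r ≤ 20 pc or so" quoted above.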
Since rB grows with time and rQ decreases with time as fg increases at r > rB, where gas has been accumulating, the condition rQ ≥ rB may be violated at some time t, at which point the SMBH is cut off from the gas supply at its Bondi radius and will subsequently grow only by consuming the final patch of gas on the disc within its Bondi radius. Before the condition rQ = rB is reached, the recycled gas that has landed outside the (time-varying) rB continues to accumulate (some of the accumulated gas possibly forms stars). Using Equations (11,12) we find that the turning point rQ = rB is reached when

$$f_{\rm rec}\,f_g = \frac{2-n}{2}, \quad (13)$$

with the disc mass within rQ = rB, i.e., the SMBH mass, being

$$M_F = \frac{3(2-n)^3}{8}\left(\frac{\dot{M}}{M}\right)^{-1}\frac{\alpha\,\sigma_n^3}{G}. \quad (14)$$

From Equation (13) we see that (2−n)/(2frec) > 1 for n = [0.5−1]. Thus, we simply correct Equation (14) by a factor of 2frec/(2−n) to finally arrive at

$$M_{\rm BH} = \frac{3(2-n)^2}{4}\left(\frac{\dot{M}}{M}\right)^{-1}\frac{f_{\rm rec}\,\alpha\,\sigma_n^3}{G} = 1.9\times10^{8}\,(2-n)^2\,\alpha_{0.01}\,l_E^{-1}\,\epsilon_{0.1}\left(\frac{\sigma_n}{200\ {\rm km/s}}\right)^3 M_\odot, \quad (15)$$

with the radius at which rQ = rB = rBQ being

$$r_{BQ} = 34\,(2-n)^3\,\alpha_{0.01}\,l_E^{-1}\,\epsilon_{0.1}\left(\frac{\sigma_n}{200\ {\rm km/s}}\right)\ {\rm pc}. \quad (16)$$

Equations (15, 16) suggest that the SMBH accretes the recycled gas at r ≤ 20 (MBH/10^8 M⊙) pc or so for lE ∼ 1; the radius could be substantially larger for smaller lE. The reason that the accretable mass is so much larger during this period than in the starburst phase is that the accretion disc is now replenished continuously at a moderate rate, so that it remains stable out to a much larger radius than in the starburst phase, with its much thicker (in surface density) disc. Equation (15) resembles the observed MBH − σ relation (Tremaine et al. 2002). We argue that the resemblance is deceptive, in a general sense, because it hinges on a value of α ∼ 0.01 or so and lE ∼ 1. As we mentioned earlier, the currently allowed value of α could range from 10^−4 to 1, and at the moment we do not know what value nature has picked to grow her SMBHs. In light of this situation, using Equation (15) to declare victory would be premature. However, Equation (15) does suggest that there is enough material and time to grow the SMBH to the observed value during the post-starburst phase. This is in stark contrast with the starburst phase, when there is not enough accretable matter even if one pushes the viscosity value to the limit (see Equation 9).

Fig. 1. The circles are data from Marconi & Hunt (2003). The solid line is the prediction of Equation (17) with A = 1 (see Equation 18). (Axes: log 0.002 MBG (σ/200) versus log MBH, both in M⊙.)
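The calibrated closed forms of Equations (15) and (16) are transcribed in the short sketch below for the fiducial α0.01 = ε0.1 = lE = 1; the two values of n are the bracketing slopes quoted above, and everything else is simply the printed scaling.

```python
# Equations (15) and (16) transcribed as scaling relations.
def m_bh_final_msun(n=1.0, alpha001=1.0, l_E=1.0, eps01=1.0, sigma_n_kms=200.0):
    """Equation (15): final SMBH mass in Msun."""
    return 1.9e8 * (2.0 - n)**2 * alpha001 * eps01 / l_E * (sigma_n_kms / 200.0)**3

def r_BQ_pc(n=1.0, alpha001=1.0, l_E=1.0, eps01=1.0, sigma_n_kms=200.0):
    """Equation (16): radius where r_Q = r_B, in pc."""
    return 34.0 * (2.0 - n)**3 * alpha001 * eps01 / l_E * (sigma_n_kms / 200.0)

for n in (0.5, 1.0):
    print(f"n = {n}: M_BH ~ {m_bh_final_msun(n=n):.1e} Msun, r_BQ ~ {r_BQ_pc(n=n):.0f} pc")
```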
A scenario in which the allowed range of the viscosity value is limited on one side, i.e., α is only allowed to take values greater than, say, 0.01, is much less fine tuned. In this case, some self-regulation of the SMBH growth will be necessary. Such self-regulation is indeed achievable during the post-starburst phase, as we now describe. The total amount of radial momentum that radiation pressure from the SMBH may exert on the surrounding gas is εcMBH (this is likely a lower bound, neglecting the possibility of multiple scatterings of photons in the optically thick regime). Equating εβcMBH to frec MBG(1 − f∗)vesc (the momentum of the driven-away gas escaping the galaxy) gives

$$\frac{M_{\rm BH}}{M_{\rm BG}} = \frac{f_{\rm rec}\,v_{\rm esc}}{\epsilon\beta c}\,\frac{(1-f_*)}{(1+f_{\rm rec}f_*)} = A\,\frac{2}{1000}\,\sigma_{200}, \quad (17)$$

where A is

$$A = \frac{(f_{\rm esc}/0.15)\,(1-f_*)}{\left[1+(f_{\rm rec}/0.15)f_*\right]\beta\,\eta\,\epsilon_{0.1}}\,\frac{v_{\rm esc}}{2\sigma}, \quad (18)$$

where f∗ is the fraction of recycled gas that subsequently re-forms into stars and vesc is the escape velocity [for an isothermal sphere truncated at the virial radius rv, vesc(r)/2σ = (1 + ln(1 + rv/r))^{1/2} at radius r]; β is the fractional solid angle that absorbs the radiation from the SMBH; the term (1 + fesc f∗) takes into account additional stars added to the bulge stellar mass, formed from the recycled gas. Of the parameters in Equation 18, fesc = 0.15 is reasonable, taking into account that about half of the mass return, occurring at early times via type II supernovae, can escape without additional aid; a radiative efficiency of ε = 0.1 ε0.1 is consistent with observations (Yu & Tremaine 2002; Marconi et al. 2004); some fraction of the recycled gas forming into stars is probably unavoidable, since some gas with column density greater than the Compton column will slip through the radiation pressure (see the discussion below); f∗ also includes the (possibly very large) amount of gas at large radii that would not have accreted onto the SMBH in the first place, even in the absence of any feedback (e.g., molecular clouds on the Galactic disk are not being fed to the Galactic center SMBH in a consistent fashion); and the factor η (greater than one) takes into account additional stars that are not formed in the starburst event. Overall, considering all these balancing factors, a value of A of order unity seems quite plausible. Figure 1 plots the relation between MBH and σMBG predicted by Equation 17 using A = 1. It is clear that it provides a very good fit to the observed data. A scaling relation similar to Equation 17 was derived based on a different, radio-jet feedback mechanism (Soker & Meiron 2010). A similar scenario of linear momentum feedback from AGN radiation pressure has been considered by Silk & Nusser (2010) as a possible way to produce the observed MBH − MBG relation during the starburst phase, but they conclude that the radiation pressure is insufficient, by an order of magnitude, to blow the unwanted gas away. The magnitude of the radiation pressure and the escape velocity requirement considered here are the same as theirs. The difference is that the amount of gas that needs to be regulated in the post-starburst phase is nearly a factor of 10 lower, and the further allowance for star formation from the recycled gas makes it possible that the radiation pressure from the central AGN is adequate to self-regulate the SMBH growth so as not to overgrow it. We note that Equation 17 would hold without much variation if the gas that is blown away were uniformly distributed. The recycled gas is, however, expected to be non-uniform; even if it were uniform initially, thermal instabilities would likely make the distribution non-uniform. Given that, we elaborate further on Equations (17,18) and the physical processes of radiation-pressure-driven winds.
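The sketch below evaluates Equation (17) with A = 1, which is the solid line of Figure 1; the (MBG, σ) pairs are illustrative assumptions rather than data points.

```python
# Equation (17) with A = 1: predicted SMBH mass versus bulge mass and
# velocity dispersion.
def m_bh_predicted(m_bg_msun, sigma_kms, A=1.0):
    """M_BH = A * (2/1000) * (sigma/200 km/s) * M_BG."""
    return A * 2.0e-3 * (sigma_kms / 200.0) * m_bg_msun

for m_bg, sigma in [(3e10, 120.0), (1e11, 200.0), (5e11, 300.0)]:   # assumed pairs
    m_bh = m_bh_predicted(m_bg, sigma)
    print(f"M_BG = {m_bg:.1e} Msun, sigma = {sigma:.0f} km/s -> M_BH ~ {m_bh:.1e} Msun")
```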
Some distinction may be made between about 1/3 of the total solid angle where UV and other photons are directly seen from AGN and the other \u03b2 \u223c2/3 of the solid angle that has a nearly Compton thick or thicker obscuring screen, most of which probably stems from the so-called molecular torus (e.g., Risaliti et al. 1999). For every \u2206Macc of mass accreted, roughly \u03f5c/vesc\u2206Macc = 100\u03f50.1(vesc/300 km/s)\u22121\u2206Macc of mass that rain down by aging stars could be driven away by the radiation momentum from the AGN. In the 1/3 opening solid angle some portion of the radiation pressure driven winds will be accelerated to high velocities, perhaps in a fashion similar to what is seen in simulations (e.g., Kurosawa & Proga 2009), observationally manifested as broad emission or absorption lines as well as out\ufb02ows seen in narrow lines (e.g., Crenshaw et al. 2003; Greene et al. 2011). \f\u2013 17 \u2013 A signi\ufb01cant fraction of the material may be accumulated in the remaining \u03b2 \u223c2/3 of the solid angle (i.e., Type 2 AGNs), including recycled gas that comes from the other 1/3 solid angle that is too heavy to be accelerated away \u201con the \ufb02y\u201d by the radiation pressure. In this 2/3 of the solid angle, high velocity winds radially exterior to the molecular torus is unlikely given the heaviness (i.e., low opacity) of the molecular torus. We discuss some of the physics here. To gain a more quantitative understanding, a look at some observed properties of the torus is instructive. Ja\ufb00e et al. (2004) measured the radius and height of the molecular torus of NGC 1068 to be 1.7pc and 2.1pc, respectively. The mass of the SMBH in NGC 1068 is (8.3 \u00b1 0.3) \u00d7 106 M\u2299(e.g., Marconi & Hunt 2003). If we extrapolate to a 108 M\u2299SMBH assuming that the location and height of the molecular torus is proportional to the SMBH mass, we have a surface area of the torus equal to 3200 pc2 at a SMBH-centric radius of 20pc. If we assume that the column density of the molecular torus is 1024cm\u22122 (e.g., Risaliti et al. 1999), its total mass is then 2 \u00d7 107 M\u2299. The dynamical time at 20 pc is 105 yrs. A SMBH of mass 108 M\u2299accreting at Eddington rate would grow a mass of \u223c105 M\u2299in 105 yrs, while the overall rate of gas return would be \u223c2 \u00d7 107 M\u2299over the entire bulge during that period. Thus, the abundant gas supply rate suggests that the necessary (not su\ufb03cient) condition for a near \u201csteady\u201d state is met such that the molecular torus may be kept roughly invariant with time, with the rate of driven-away gas by radiation pressure plus that of gas forming into stars equal to the rate of gas return from aging stars. Given the short star-formation timescale of the very dense gas in the molecular torus, it would be unavoidable that star formation should occur there (as well as some regions exterior to it). This \u201clightens up\u201d the torus to the extent that it may be pushed away by the radiation pressure, when the condition that the deposited radiation momentum divided by the accumulated mass exceeds the escape velocity (assuming, in the absence of radiation pressure, the torus would just be in a bound circular orbit). In this sense the radiation momentum from the SMBH serves to retard gas supply to accretion from the torus to let SF take over to have it mostly depleted. 
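The order-of-magnitude arithmetic in the torus budget above can be reproduced with the sketch below, which scales the NGC 1068 torus geometry (Jaffe et al. 2004) to a 10^8 M⊙ SMBH. The linear scaling with MBH, the use of the cylindrical wall area, and the velocity of 200 km/s are assumptions made here to recover the quoted numbers; they are not stated explicitly in the text.

```python
# Molecular-torus budget: size, mass, dynamical time, and Eddington-limited
# growth over that time, scaled from the NGC 1068 measurements.
import numpy as np

MSUN, PC, YR = 1.989e33, 3.086e18, 3.156e7
M_H = 1.673e-24                      # g, hydrogen mass

m_bh    = 1e8                        # Msun (target SMBH mass)
scale   = m_bh / 8.3e6               # assume linear sizes scale with M_BH (NGC 1068: 8.3e6 Msun)
r_torus = 1.7 * scale                # pc  -> roughly 20 pc
h_torus = 2.1 * scale                # pc
area    = 2 * np.pi * r_torus * h_torus * PC**2     # cylindrical wall area in cm^2 (~3200 pc^2)
N_H     = 1e24                                       # cm^-2, Compton-thick column
m_torus = N_H * M_H * area / MSUN                    # of order 2e7 Msun

t_dyn   = r_torus * PC / 2e7 / YR                    # r/sigma for sigma ~ 200 km/s, ~1e5 yr
m_grown = m_bh * t_dyn / 4.5e7                       # Eddington growth over t_dyn (Salpeter time 4.5e7 yr)

print(f"r_torus ~ {r_torus:.0f} pc, torus mass ~ {m_torus:.1e} Msun")
print(f"t_dyn ~ {t_dyn:.1e} yr, Eddington-limited growth over t_dyn ~ {m_grown:.1e} Msun")
```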
In combination with the analysis in the preceding paragraph, it seems physically plausible that radiation pressure and depletion of gas by star formation is able to jointly reduce and regulate the amount of gas that feeds the central SMBH. Given that the overall margin, in an \u201con average\u201d sense, is quite thin (i.e., A \u223c1 in Equation 18), it is likely that there are signi\ufb01cant variations in A, perhaps up to a factor of a few. In the 1-d simulations of Ciotti & Ostriker (2007) for an elliptical galaxy, the SMBH growth appear to be intermittent. The intermittency in their simulations was caused by a hot X-ray heated bubble that prevents continued gas accretion, until it bursts, which is then followed by another accretion episode, and so on. We suggest that Rayleigh-Taylor instability on the shell enclosing the X-ray bubble may prevent the X-ray bubble from in\ufb02ating, as hinted by recent 2-d simulations of Novak et al. (2010). It it reasonable to assume that shell fragmentation in three-dimension is still more pronounced to allow continued de\ufb02ation \f\u2013 18 \u2013 of a notional X-ray bubble. Observationally, the lack of signi\ufb01cant X-ray emission from circumnuclear region in powerlaw elliptical galaxies host AGNs, which we argue are the post-starburst galaxies we consider here, supports the picture that the hot bubble is not robust (e.g., Pellegrini 2005). In the absence of a hot X-ray bubble guarding the SMBH, we suggest that the recycled gas from aging stars is able to reach the disc and the accretion, with self-regulation argued above, is quasi-steady without major \ufb02ares of magnitude seen in 1-d simulations. As we will show later, a steady declining accretion rate proportional to the gas return rate provides a much better match to at least two observations: (1) the observed early-type host galaxies of AGNs are mostly in the green valley of the galaxy color-luminosity diagram with a small fraction in the red sequence (\u00a75.2) (e.g., Salim et al. 2007; Silverman et al. 2008; Hickox et al. 2009; Schawinski et al. 2010), but very few in the blue cloud, which would have been the case if AGN \ufb02ares are accompanied by starbursts (Ciotti & Ostriker 2007); (2) the observed AGN accretion rate for early-type galaxies in the local universe displays a powerlaw distribution with the amplitude and decay rate (Kau\ufb00mann & Heckman 2009) that is expected from the non-\ufb02are scenario that is proposed here. This indicates that bursty AGN accretion, while quite possible and sometimes perhaps unavoidable, is probably not the dominant mode. It is currently a challenge but will be of great value to carry out 3-d high-resolution simulations to more accurately quantify this outcome. 6. Model Predictions and Discussion We have presented a physically motivated picture for the coevolution of galaxies and SMBH starting with a triggered starburst. Let us now summarize the entire evolution in \u00a75.1 and then give an incomplete list of implications and predictions in \u00a76.2-6.9 to be qualitatively compared/veri\ufb01ed with observations. 6.1. Three Distinct Periods of Coevolution of Galaxies and SMBH From the onset of a signi\ufb01cant central starburst to becoming a quiescent bulge there are three distinct periods, as summarized in Figure 2 for an example merger of two gas-rich spirals each of mass \u223c1012 M\u2299that eventually becomes a powerlaw elliptical galaxy of velocity dispersion of 200 km/s. 
We stress that the trigger event is not limited to major mergers. This three-stage scenario is not new and its successes with respect many observations have been discussed previously (e.g., Granato et al. 2004, 2006; Cirasuolo et al. 2005; Lapi et al. 2006; Lamastra et al. 2010). The new theoretical element here is the primary growth of SMBH in the post starburst phase, which is re\ufb02ected in the color and other properies of AGN hosts and we will show is in remarkable accord with latest observations, in contrast to the conventional scenario where SMBH growth primarily occurs during the starburst phase. \f\u2013 19 \u2013 The time boundaries between di\ufb00erence consecutive phases (three ovals) are approximate (uncertain to a factor of at least a few). Given the complexity and variety of starburst trigger events, one should expect signi\ufb01cant variations from case to case. The expected consequences or predictions of this model are in many ways di\ufb00erent from and often opposite to those of models that invoke AGN feedback to shut down both starburst and AGN activities (e.g, Silk & Rees 1998; Hopkins et al. 2006). A new and in some way perhaps the most fundamental \ufb01nding of this work is that the SMBH does not grow during the starburst phase as much as previously thought, required in AGN-feedback based models, despite the obvious condition that there is a lot of gas being \u201cjammed\u201d into the central region; this is di\ufb00erent from almost all previous work (e.g, Silk & Rees 1998; Hopkins et al. 2006; DeBuhr et al. 2010) that either need to advocate very strong SMBH feedback or appear to overgrow the SMBH. The idea of feeding the SMBH with recycled stellar material in the post-starburst phase is not new (e.g., Norman & Scoville 1988; Ciotti & Ostriker 2007) and we inherit most of the already known elements from prior work, including gas return rate and the likelihood of continued star formation. Our analysis shows the likelihood that the SMBH may be fed too much in the post-starburst period in the absence of feedback from the SMBH, in dramatic contrast with the starburst phase when SMBH feedback is insu\ufb03cient. While energy feedback from the SMBH certainly plays a role, we show that the more robust momentum feedback from SMBH radiation pressure can play a critical role in regulating SMBH growth, not necessarily only by blowing powerful winds, but rather, in combination, by also pushing away thus retarding accretion of unwanted (by SMBH) gas to be instead consumed by star formation. While our analysis may have captured some of the essential physics in terms of accretion and star formation demarcation, to more realistically model the complex accretion and star formation dynamics, much higher resolution 3-d radiation hydrodynamic simulations will be required and will be of tremendous value. The \u201csize\u201d of the starburst depends on the \u201csize\u201d of the triggering event, with at least some fraction of ULIRGs and SMGs due to major mergers of massive gas-rich gas. However, irrespective of the size of the starburst event, the time scales involved, being largely due to physics of stellar interior and accretion time scale, remain the same. (1) \u201cStarburst Period\u201d: this phase is triggered by some event. The SMBH grows modestly during this period to possibly attain a mass that is up to order ten percent of its \ufb01nal mass. 
This phase lasts about 107 \u2212108yrs for typical starbursts and the host galaxies during this phase are in the blue cloud in the luminosity-color diagram. The feedback energy/momentum from the starburst, i.e., supernovae, drives the last patch of gas away and shuts down star formation, if needed. In other words, the starburst is self-regulated, not by the central AGN during this period. (2) \u201cSMBH Prime Period\u201d: several hundred million years after the end of the starburst, aging low-to-intermediate mass stars, now in their post-main-sequence phases, start to return a \f\u2013 20 \u2013 substantial fraction of their stellar mass to the ISM. The SMBH accretion is fueled by this recycled gas lasting for order of gigayear. The growth of SMBH is self-regulated, readily provided by the radiation pressure from the AGN. The host galaxies during this period start out light-blue or in the \u201cgreen valley\u201d and migrate to the \u201cred sequence\u201d. Because the rate of gas return from stars diminishes with time and SMBH mass grows, the Eddington ratio of the SMBH decreases with time. The SMBH growth is synchronous with star formation from recycled gas during this period. The accompanying star formation rate may also be substantial but typically does not constitute a starburst during this period. The entire duration of this phase depends sensitively on the lower cuto\ufb00mass of the initial mass function (IMF) \u2013 a sensitive and powerful prediction of this model. (3) \u201cQuiescent Bulge\u201d: several gigayears after the end of the starburst the bulge is now truly red and dead gas return rate is now negligible so both accretion to the central SMBH and residual star formation have ceased. It is possible that a disk is grown later around the bulge. The feeding of the central SMBH in the bulge of spiral galaxy during this period is no longer by overhead material from aging stars, rather by occasional objects that happen to be on some plunging orbits to be disrupted by the SMBH and form a short-lived accretion disc. Candidate objects may include molecular clouds, some tidally disruptable stars or gas streams. Signi\ufb01cant disturbances or torques, such as minor mergers and galactic bars, could provide the necessary drivers for some more consistent accretion events. How is a red and dead bulge with a hot atmosphere able to remain star-formation-free? This is a major topic on its own right and beyond the scope of the current paper, but will be addressed in a future paper. 6.2. Some \u201cObvious\u201d Implications of the Model There are some unambiguous discriminating signatures of this model that already can be directly \u201cread o\ufb00\u201d Figure 2. We highlight several here. (1) Starburst and AGN growth are not coeval in this model. AGN does not regulate the starburst, consistent with observations (e.g., Schawinski et al. 2009; Kaviraj 2009). AGN activities is expected to outlive the starburst, in agreement with observations (e.g., Georgakakis et al. 2008). These predictions are opposite to those of models that invoke AGN feedback as the primary regulating agent. (2) The apparent requirement of a rapid migration of early-type galaxies from the blue cloud to the red sequence, in order to produce a bimodal distribution in color (e.g., Blanton et al. 2003), is primarily due to the prompt shutdown of SF by stars (i.e., supernovae) at the end of the starburst phase; there is no need to invoke other ingredients, consistent with observations (e.g., Kaviraj et al. 2010). 
Observationally, there is no evidence that the \f\u2013 21 \u2013 presence of an AGN is related to quenching of star formation or the color transformation of galaxies (e.g., Aird et al. 2012). This prediction is di\ufb00erent from that of models that invoke AGN feedback to quench star formation. (3) AGN activities in ongoing starburst galaxies, i.e., buried AGN activities, are not expected to be dominant in this model, in agreement with observations (e.g., Genzel et al. 1998; Ivison et al. 2000; Ptak et al. 2003; Ivison et al. 2004; Alexander et al. 2005a,b; Schweitzer et al. 2006; Kawakatu et al. 2006; Alexander et al. 2008; Veilleux et al. 2009). Note that the above statement is not inconsistent with AGN/QSOs being associated with galaxies in the process of merging, which may enhance accretion activities in the involved (yet to merge) galaxies (e.g, Bahcall et al. 1997; Hennawi et al. 2010; Smith et al. 2010). (4) The most luminous quasars that accrete with high Eddington ratios occur order of 100Myr after the end of the starburst. They may contain substantially more merger signatures, which appears to be indicated by observations (e.g., Bennert et al. 2008). If one were to identify a population in-between ULIRGs and more regular QSO hosts in terms of spectral properties, they should show some more signs of tidal interactions that are yet to fully settle since the starburst, also consistent with observations (e.g., Canalizo & Stockton 2001). (5) Low Eddington ratio AGNs that are expected to last order of Gyr are not expected to show a close linkage to major disturbances that trigger the starburst (e.g., mergers), since possible signatures of the trigger merger event have largely been erased over time, consistent with observations (e.g., Grogin et al. 2005; Cisternas et al. 2011). Thus, one does not expect to see merger signatures to be associated with moderate-luminosity AGNs, which is in contrast with AGN feedback based models where most of the moderate luminosity AGNs are expected to coincide with starburst. (6) While the green-valley morphologically early-type galaxies that host AGN is the evolutionary link between starburst galaxies (in the blue cloud) and the red elliptical galaxies (on the red sequence), it is useful to distinguish between them and the other class of green galaxies that simply continuously form a modest amount of stars (such as our own Galaxy). The former are chronologically immediate successors to starburst galaxies and should be in early-type galaxies, strongly supported by observations (e.g., Salim et al. 2007; Silverman et al. 2008; Hickox et al. 2009; Schawinski et al. 2010), whereas the latter are not a chronologically intermediate class between the blue cloud and the red sequence. The total green galaxy population will be the sum of these two di\ufb00erent morphological types, with some obvious implications, such as green galaxies having mixed morphological types with limited merger signatures, consistent with observations (e.g., Mendez et al. 2011). This prediction is in contrast with AGN feedback based models where most AGN hosts are expected to coincide with starburst and a small fraction, mostly the most luminous AGNs (occuring near the end of the starburst phase), is expected to have matured early-type morphologies. 
(7) While the early-type AGN host galaxies may have morphologies similar to those of the inactive elliptical galaxies into which they will eventually evolve, the former should have much bluer colors than the latter, consistent with observations (e.g., Sánchez et al. 2004). The basic morphological properties of the host galaxies of the most luminous quasars, corresponding to the most massive SMBHs in the prime growth phase, should resemble those of giant elliptical galaxies, consistent with observations (e.g., Dunlop et al. 2003). (8) Because of the expected rate of gas return (∝ t^−1.3 on gigayear scales), to which both SMBH accretion and star formation are proportional, and because more powerful AGN accretion occurs closer in time to the preceding starburst, it is expected that more powerful AGNs are hosted by early-type galaxies with younger mean stellar ages, consistent with observations (e.g., Kauffmann et al. 2003; Jahnke et al. 2004). (9) The accompanying star formation rate of elliptical galaxies may be quite substantial, of order ∼(5−10)(M∗/10^11 M⊙)(t/1 Gyr)^−1.3 M⊙ yr^−1. Thus, while most AGN host galaxies have left the blue cloud, a significant fraction of them, especially those hosting luminous AGNs, should still have substantial SFR, consistent with observations (e.g., Silverman et al. 2009; Shi et al. 2009). It is expected that the incidence of star formation signatures (e.g., dust) in the nuclear region should correlate positively with AGN activity in elliptical galaxies, because the strengths of both are proportional to the gas return rate, consistent with observations (e.g., Simões Lopes et al. 2007). These predictions are opposite to those of AGN feedback based models, where star formation is expected to be completely quenched after AGN feedback clears the gas out.

6.3. Origin of Two AGN Accretion Regimes

Kauffmann & Heckman (2009) presented an insightful observational result of two distinct regimes of black hole growth in nearby galaxies, along with its apparent implications. They find that star-forming galaxies display a lognormal distribution of Eddington ratios; their interpretation is that in this regime accretion onto the SMBH is not limited by the supply of gas but by feedback processes that are intrinsic to the SMBH itself. Our model provides the following alternative interpretation for this phenomenon: the lognormal distribution merely reflects two random processes at work: (1) the amount of gas that lands on the stable accretion disc to feed the SMBH during the starburst phase depends on many "random" variables of the triggering event (in the case of a merger, the merging orbit inclination, velocity, spin alignment, etc.), and (2) observations catch a random moment during the accretion of this gas. The central limit theorem should then give rise to a lognormal distribution. Another class of possible triggering events for SMBH accretion in star-forming galaxies (e.g., dormant SMBHs in the bulges of disk galaxies) is stochastic feeding due to random events, which should also follow a lognormal distribution. Separately, they find that galaxies with old stellar populations are characterized by a power-law distribution function of Eddington ratios, and that the AGN accretion rate is about 0.3−1% of the gas return rate from stellar recycling. In our model the expected ratio of the SMBH accretion rate to the gas return rate is

$$\frac{M_{\rm BH}}{f_{\rm rec}M_{\rm BG}} = 1.3\times10^{-2}\,A\,\epsilon_{0.1}^{-1}\,\sigma_{200}.$$
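A one-line check of this ratio, for a few values of σ200 and with A = 1 and ε0.1 = 1 as assumed illustrative choices, is sketched below.

```python
# Expected ratio of SMBH accretion rate to stellar gas-return rate,
# M_BH / (f_rec * M_BG) = 1.3e-2 * A * sigma200 / eps01.
def accretion_to_return_ratio(A=1.0, eps01=1.0, sigma200=1.0):
    return 1.3e-2 * A * sigma200 / eps01

for sigma200 in (0.5, 1.0, 1.5):           # sigma_g = 100, 200, 300 km/s
    ratio = accretion_to_return_ratio(sigma200=sigma200)
    print(f"sigma200 = {sigma200}: ratio ~ {100 * ratio:.1f}%")
```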
This expected relation between SMBH accretion rate and gas return rate is remarkably close to their observed value. As Kau\ufb00mann & Heckman (2009) already pointed out, the powerlaw distribution is consistent with the recycling gas return rate \u221dt\u22121.3 (Mathews 1989). This is a strong support for the proposed model here. 6.4. Initial Mass Function and AGN Accretion History Because the least massive stars live the longest, the cuto\ufb00mass of the initial stellar mass function (IMF) plays an important role in shaping the evolution on longer time scales of \u22651Gyr. For example, a 0.92 M\u2299star (solar metallicity) has a lifetime of 10Gyr, whereas a 1.4 M\u2299only lives \u223c2Gyr. Thus, the duration of the \u201cSMBH Prime Period\u201d depends sensitively on the lower mass cuto\ufb00of the IMF. Figure 3 shows several cases of the evolution of the SMBH growth tracks. It shows that the evolution and duration of SMBH growth in the post-starburst phase depend sensitively on the low mass cuto\ufb00of the IMF. We see that for a cuto\ufb00mass of 0.92 M\u2299the SMBH spends about 100Myr accreting at Eddington limit when its mass is up to about 10% of its \ufb01nal mass and a signi\ufb01cant period (\u22651Gyr) at less than 1% of the Eddington rate, and most of the time at about 0.1% of the Eddington rate when its mass approaches its \ufb01nal mass. On the other hand, with a mass cuto\ufb00of 1.4 M\u2299 the entire SMBH accretion shortens to 2Gyr and does not extend below 10\u22122 Eddington rate. Since not all elliptical galaxies at present time are observed to accrete at 0.1% of the Eddington rate, this already suggests that a higher than 0.92 M\u2299cuto\ufb00mass in the IMF may be required. Presently there is circumstantial evidence for massive star formation in galactic centers, including our own Galaxy (e.g., Lu et al. 2009) and M31 (e.g., Bender et al. 2005). Given the very sensitive dependence of stellar lifetime on stellar mass, careful considerations along this line may prove to be very powerful in placing constraints on the low-mass cuto\ufb00in the IMF as well as testing this model. Detailed comparisons between theoretical prediction with observational data in terms of the AGN luminosity-mass plane (e.g., Steinhardt & Elvis 2010), the Eddington ratio range (e.g., Woo & Urry 2002), AGN ages at di\ufb00erent redshifts (e.g., Martini 2004) or at di\ufb00erent luminosities (e.g., Adelberger et al. 2005) should also prove very powerful in constraining the IMF. We note that our assumption used to derive the light curves in Figure 3 is extremely simplistic and therefore we do not expect that they provide satisfactory matches to observations. It is called for that additional ingredients be included to account for, e.g., variations in stellar distribution, possible variations of IMF as a function of local star formation conditions, dependence of initial seed SMBH mass on galaxy model, etc, in order to have a more encompassing analysis. We shall carry out a more detailed \f\u2013 24 \u2013 analysis with additional parameters in a future study, especially when measurements of both SMBH masses and accretion rates become signi\ufb01cantly more precise for a large sample of active galaxies. 6.5. Super-Solar Metallicity of Accreting Gas One clear implication is that the accretion gas, being shedded from aging stars, should be very metal rich with supersolar metallicity, in agreement with observations (e.g. 
Hamann & Ferland 1993), especially to explain super-solar N/He ratio (e.g., Hamann & Ferland 1999). This is because nitrogen is believed to be secondary nature, where its abundanace scales quadratically with metallicity. The recycled gas that is feeding the SMBH in our model \ufb01ts the bill most naturally. In addition, the metallicity of accretion gas is not expected to depend on redshift, being intrinsic to stellar evolution, consistent with all accreting gas being very metal rich at all redshifts, including the highest redshift SDSS quasars (e.g., Fan et al. 2006). 6.6. Relative Cosmic Evolution Between Starburst Galaxies and AGN Given the modest amount of time delay (several 100Myrs) between the starburst phase and the SMBH prime growth phase, it is unsurprising that one should expect to see nearly synchronous evolution between the starburst and SMBH growth on longer, cosmic time scales, consistent with observations (e.g., Boyle et al. 1988; Nandra et al. 2005). In the context of the observed cosmic downsizing phenomenon, the downsizing of galaxies (e.g., Cowie et al. 1996; Treu et al. 2005) should thus be closely followed by downsizing of AGNs (e.g., Barger et al. 2005; Hasinger et al. 2005). There is, however, a very important di\ufb00erence between the two classes in post peak activities, predicted in this model. For starburst the shutdown time scale is expected to be about \u223c100Myrs, whereas for moderateluminosity AGNs (i.e., Eddington ratio \u223c10\u22123) the decay time scale is of order of \u223c1Gyrs. With a deep AGN survey that is capable of subdividing early-type galaxies in terms of their masses, one should be able to di\ufb00erentiate between the downturn time of starburst galaxies and that of AGNs hosted by elliptical galaxies at a \ufb01xed mass. This prediction would be a strong di\ufb00erentiator between this model and AGN-based feedback models. \f\u2013 25 \u2013 6.7. AGN Broad Emission and Absorption Lines Some of the overhead material raining down onto SMBH accretion disc from recycled gas from aging low-to-intermediate mass stars provides the material observed as broad emission lines (BEL) and broad absorption lines (BAL). When some of this gas, probably in the form of some discrete clouds, reaches the inner region of the the SMBH (at r \u2264102rs, where rs is Schwarzschild radius), the clouds will be accelerated by radiation pressure, likely through some absorption lines, to velocities up to 0.1c. These clouds will be the observed BEL and BAL. The fact that only 15-20% of type I AGN to have BAL may be indicative of the discrete nature of the clouds, not unexpected from discrete stellar remnants or from cooling instabilities. An advantage of this overhead material is that it naturally provides gas clouds that are presumably to be some \u226550o o\ufb00the equatorial plane, in order not to be obscured by the molecular torus (there are of course BEL and BAL gas clouds at smaller angles but they are not seen directly). In this model we do not need any additional pressure force to lift the gas o\ufb00the accretion disc some of the raining down gas clouds from aging stars will be launched outwards before they reach the disc, physics of which is well known (e.g., Murray et al. 1995). 6.8. Evolution of SMBH Mass Relative to Bulge Mass Massive elliptical galaxies appear to have increased their masses by 30 \u2212100% in the last 7Gyr (e.g., Brown et al. 2008). 
The growth of the elliptical mass is not expected to be always accompanied by corresponding growth in the mass of the central SMBH. For example, merger of a spiral galaxy without a signi\ufb01cant SMBH and an elliptical galaxy would make the \ufb01nal SMBH appear less massive. Given the dependence of MBH/ MBG \u221d\u03c3 \u221d(1 + z)1/2 predicted in this model, we predict that the MBH/ MBG relation should evolve with redshift stronger than (1 + z)1/2 for quiescent elliptical galaxies. 6.9. On Relation between SMBHs and Pseudo-bulges It is useful to add a note on the di\ufb00erence between classic bulges and pseudo-bulges (Kormendy & Kennicutt 2004) with respect to the central SMBHs in this model. The relation derived, Equations (17, 18), that matches the observed MBH \u2212MBG relation is dependent on the abundant supply of recycled gas in the inner region. Given the su\ufb03cient gas supply from recycled gas, the feedback from the SMBH then can regulate its own growth. This essential ingredient of su\ufb03cient gas supply is consistent with the observed inner slope \f\u2013 26 \u2013 of classic bulges (e.g., Faber et al. 1997; Kormendy et al. 2009), as we have shown. The situation would be very di\ufb00erent, if star formation is not as centrally concentrated as in classic bulges, for example, in rings (Kormendy & Kennicutt 2004, and references therein) of high angular momentum with a hollow core. In this case, the amount of recycled gas raining down from the innermost region may depend on other unknown factors. For instance, if secular processes act promptly, compared to the time scales of stellar gas recycle (\u223c0.1 \u22121 Gyr), to be able to substantially \ufb01ll the central region with stars initially formed in outer regions, the SMBH may follow the track we described. If, on ther other hand, secular processes evolve on longer time scales, the recycled stellar gas would predominantly land in outer regions that do not e\ufb03ciently accrete to the SMBH, which would in turn not grow substantially. It would seem likely that there may be two trends for pseudo-bulges: (1) there will be large variations in MBH \u2212MBG relation and (2) SMBH masses may lie below that of the MBH \u2212MBG relation derived from inactive classic elliptical galaxies/bulges, both consistent with independent considerations in the context of hierarchical structure formation model (e.g., Shankar et al. 2012). Observations, while very challenging, may have already provided some hints of both (Greene et al. 2008). Moreover, we do not expect any discernible correlation between the SMBH and galaxy disk or dark matter halo, simply because the stars in disks do not a\ufb00ect SMBH growth and the overall dark matter halo, while indirectly a\ufb00ect the escape velocity that enters Equation (18), does not control the amount of gas that feeds the SMBH. This prediction is consistent with observations (e.g., Kormendy & Bender 2011). In addition, some stellar population in the outskirts (either on a disk or just at large radii of an elliptical galaxy) of AGN hosts may be unrelated to the preceding starburst and could be substantially di\ufb00erent from bulge stars (e.g., Nolan et al. 2001). 
[Fig. 2 schematic omitted: it charts the evolutionary phases (Blue Cloud starburst, Green Valley SMBH prime growth, Red Sequence quiescent elliptical) against time, galaxy color, SFR, SMBH mass, and the governing SMBH and star-formation physics in each phase.] Fig. 2.— Shows the entire evolutionary process for an example merger of two gas-rich spirals of mass ~10^12 M⊙ each that eventually produces a powerlaw elliptical galaxy of velocity dispersion of 200 km/s. This scenario is not limited to merger events but encompasses any significant event triggering a starburst. Note that the time boundaries between consecutive phases are approximate and uncertain to within a factor of a few. The numbers in brown indicate the BH masses and the numbers in red indicate SFR. These numbers are very approximate and given mainly for illustration purposes. Clearly, given the complexity, one should expect large variations from case to case. [Fig. 3 panels omitted: log MBH (M⊙) versus log l_E (top row) and versus log L_bol (erg/s) (bottom row), for M_init = 10% and 1% of M_final and IMF low-mass cutoffs m_cut = 0.92 and 1.4 M⊙.] Fig. 3.— Top left panel: evolutionary growth tracks in the SMBH mass-Eddington ratio plane of an example SMBH of final mass 10^9 M⊙ with two cases of seed black hole mass of 10^7 and 10^8 M⊙, respectively. A low mass cutoff for the IMF of 0.92 M⊙ that has a turnoff lifetime of 10 Gyr is assumed. We assume that the SMBH accretion rate is proportional to the recycled gas return rate of the form ∝ t^-1.3 (Ciotti et al. 1991), capped at the Eddington rate with a radiative efficiency of ϵ = 0.1, starting 200 Myr after the end of the starburst. Also indicated along each track are the times in Gyr elapsed since the start of the accretion. Top right panel: the case for a low mass cutoff for the IMF of 1.4 M⊙ that has a turnoff lifetime of 2 Gyr. Bottom panels: tracks for the cases in the top panels but in the SMBH mass-luminosity plane. 7." }, { "url": "http://arxiv.org/abs/1010.5014v1", "title": "The Nature of Damped Lyman Alpha Systems and Their Hosts in the Standard Cold Dark Matter Universe", "abstract": "Using adaptive mesh-refinement cosmological hydrodynamic simulations with a\nphysically motivated supernova feedback prescription we show that the standard\ncold dark matter model can account for extant observed properties of damped\nLyman alpha systems (DLAs). We then examine the properties of DLA host\ngalaxies. We find: (1) While DLA hosts roughly trace the overall population of\ngalaxies at all redshifts, they are always gas rich.
(2) The history of DLA\nevolution reflects primarily the evolution of the underlying cosmic density,\ngalaxy size and galaxy interactions. With higher density and more interactions\nat high redshift DLAs are larger in both absolute terms and in relative terms\nwith respect to virial radii of halos. (3) The variety of DLAs at high redshift\nis richer with a large contribution coming from galactic filaments, created\nthrough close galaxy interactions. The portion of gaseous disks of galaxies\nwhere most stars reside makes relatively small contribution to DLA incidence at\nz=3-4. (4) The vast majority of DLAs arise in halos of mass M_h=10^10-10^12\nMsun at z=1.6-4. At z=3-4, 20-30% of DLA hosts are Lyman Break Galaxies (LBGs).\n(5) Galactic winds play an indispensable role in shaping the kinematic\nproperties of DLAs. Specifically, the high velocity width DLAs are a mixture of\nthose arising in high mass, high velocity dispersion halos and those arising in\nsmaller mass systems where cold gas clouds are entrained to high velocities by\ngalactic winds. (6) In agreement with observations, we see a weak but\nnoticeable evolution in DLA metallicity. The metallicity distribution centers\nat [Z/H]=-1.5 to -1 at z=3-4, with the peak moving to [Z/H]=-0.75 at z=1.6 and\n[Z/H]=-0.5 by z=0. (7) The star formation rate of DLA hosts is concentrated in\nthe range 0.3-30Msun/yr at z=3-4, gradually shifting lower to peak at ~0.5-1\nMsun/yr by z=0.", + "authors": "Renyue Cen", + "published": "2010-10-24", + "updated": "2010-10-24", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.GA" + ], + "main_content": "Introduction Damped Ly\u03b1 systems (DLAs) are fundamentally important, because they contain most of the neutral gas in the universe at all times since cosmological reionization (e.g., StorrieLombardi & Wolfe 2000; P\u00b4 eroux et al. 2003; Prochaska & Wolfe 2009). Molecular clouds, within which star formation takes place, likely condense out of cold dense neutral atomic gas contained in DLAs, evidenced by the fact that the neutral hydrogen (surface) density in DLAs and molecular hydrogen (surface) density in molecular clouds form a continuous sequence (e.g., Kennicutt 1998; Zwaan & Prochaska 2006). Therefore, DLAs likely hold key to understanding the fuel for and ultimately galaxy formation. A substantial amount of theoretical work has been devoted to studying the nature of DLAs (e.g., Gardner et al. 1997a; Gardner et al. 1997; Haehnelt et al. 1998; Gardner et al. 2001; Maller et al. 2001; Cen et al. 2003; Nagamine et al. 2004b,a, 2007; Razoumov et al. 2006, 2008; Pontzen et al. 2008; Tescari et al. 2009; Hong et al. 2010), since the pioneering investigation of Katz et al. (1996) in the context of the cold dark matter (CDM) cosmogony. A very interesting contrast is drawn between the observationally based inference of simple large disk galaxies possibly giving rise to DLAs (Wolfe et al. 1986; Prochaska & Wolfe 1997) and the more naturally expected hierarchical buildup of structures in the CDM cosmogony where galactic subunits may produce some of the observed kinematics of DLAs (Haehnelt et al. 1998). Clearly, the implications on the evolution of galaxies in the two scenarios are very di\ufb00erent. 
We have carried out a set of Eulerian adaptive mesh re\ufb01nement (AMR) simulations with a resolution of 0.65 kpc proper and a sample size of several thousand galaxies with mass \u22651010 M\u2299to statistically address the physical nature of DLAs in the current standard cosmological constant-dominated CDM model (LCDM) (Komatsu et al. 2010). Mechanical feedback from star formation driven by supernova explosions and stellar winds is modeled by a one-parameter prescription that is physically and energetically sound. Part of the motivation was to complement and cross-check studies to date that are largely based on smooth-particle-hydrodynamics (SPH) simulations. With the simulation set and detailed analysis performed here this study represents a signi\ufb01cant extension of previous works to simultaneously subject the LCDM model to a wider and more complete range of comparisons with observations. We examine in detail the following properties of DLAs in a self-consistent fashion within the same model: DLA column density distribution evolution, line density evolution, metallicity distribution evolution, size distribution evolution, velocity width distribution evolution, kinematic structural parameters evolution, neutral mass content evolution and others. A gallery of DLAs is presented to obtain a visual understanding of the physical richness of DLA systems, especially the e\ufb00ects of galactic winds and large-scale gaseous structures. In comparison to the recent work of Hong et al. (2010) we track the metallicity distribution and evolution explicitly and show that the metallicity distribution of DLAs is, in good agreement with observations, very wide, which itself calls for a self-consistent \f\u2013 3 \u2013 treatment of metals transport. In agreement with the conclusions of Hong et al. (2010), although not in the detailed process, we show that galactic winds are directly responsible for a large fraction of wide DLAs at high redshift, by entraining cold clouds to large velocities and causing large kinematic velocity widths. We \ufb01nd that the simulated Si II \u03bb1808 line velocity width, kinematic shape measures and DLA metallicity distributions that are all in excellent agreement with observations. Taking all together, we conclude that the standard LCDM model gives a satisfactory account of all properties of DLAs. Finally, we examine the properties of DLA hosts, including their mass, star formation rate, HI content, gas to stellar mass ratio and colors, and show that DLAs arise in a variety of galaxies and roughly trace the entire population of galaxies at any redshift. This may reconcile many apparently con\ufb02icting observational evidence of identifying DLAs with di\ufb00erent galaxy populations. Speci\ufb01cally, at z \u223c3 we show that 20 \u221230% of DLAs are Lyman Break Galaxies (LBGs) (Steidel et al. 1996), while the majority arise in smaller galaxies. The outline of this paper is as follows. In \u00a72 we detail our simulations, method of making galaxy catalogs, method of making DLA catalogs and procedure of de\ufb01ning Si II line pro\ufb01le shape measures. Results are presented in \u00a73. In \u00a73.1 we present a gallery of twelve DLAs. We give Si II line velocity width distribution functions in \u00a73.2, demonstrating excellent agreement between simulations and observations, particularly at high velocity width end. 
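As a quick consistency check on the mass resolution quoted above (a check of our own, not part of the original text), the dark matter particle mass in the refined region follows from the mean interparticle separation of 117 h^-1 kpc comoving and the cosmological parameters listed in Section 2.1; the short Python sketch below assumes the particle mass samples the dark matter component only (Omega_M - Omega_b).

```python
# Minimal check of the quoted dark matter particle mass (illustrative sketch; not the paper's code).
OMEGA_M, OMEGA_B = 0.28, 0.046      # matter and baryon density parameters from Section 2.1
RHO_CRIT = 2.775e11                 # critical density in h^2 Msun Mpc^-3 for H0 = 100h km/s/Mpc
DX = 0.117                          # mean interparticle separation, h^-1 Mpc comoving

m_dm = (OMEGA_M - OMEGA_B) * RHO_CRIT * DX**3   # h^-1 Msun per particle
print(f"m_dm = {m_dm:.2e} h^-1 Msun")           # ~1.0e8, consistent with the quoted 1.07e8 h^-1 Msun
```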
\u00a73.3 is devoted to the three kinematic measures of the Si II absorption line and we show the model produces results that are consistent with observations. Column density distribution and evolution, line density and neutral gas density evolution are described in \u00a73.4, where, while simulations are consistent with observations, we emphasize large cosmic variance from region to region with di\ufb00erent large-scale overdensities. The focus is shifted to metallicity distribution and evolution in \u00a73.5 and simulations are found to be in excellent agreement with observations where comparisons can be made. The next subsection \u00a73.6 performs a detailed analysis of the size of DLAs and \ufb01nds that the available observed QSO pairs with DLAs are in accord with the expectation of our model. Having found agreement between simulations and observations in all aspects pertinent to DLAs, we turn our attention to the properties of DLA hosts in \u00a73.7, where we show that DLAs, while slightly favoring galaxies that are more gas rich, less massive and bluer in color, and have higher HI mass and higher gas to stellar mass ratio, roughly trace the entire population of galaxies at all redshifts. Conclusions are given in \u00a74. \f\u2013 4 \u2013 2. Simulations 2.1. Hydrocode and Simulation Parameters We perform cosmological simulations with the adaptive mesh re\ufb01nement (AMR) Eulerian hydro code, Enzo (Bryan 1999; Bryan & Norman 1999; O\u2019Shea et al. 2004; Joung et al. 2009). First we ran a low resolution simulation with a periodic box of 120 h\u22121Mpc on a side. We identi\ufb01ed two regions separately, one centered on a cluster of mass of \u223c2 \u00d7 1014 M\u2299and the other centered on a void region at z = 0. We then resimulate each of the two regions separately with high resolution, but embedded in the outer 120h\u22121Mpc box to properly take into account large-scale tidal \ufb01eld and appropriate boundary conditions at the surface of the re\ufb01ned region. We name the simulation centered on the cluster \u201cC\u201d run and the one centered on the void \u201cV\u201d run. The re\ufb01ned region for \u201cC\u201d run has a size of 21 \u00d7 24 \u00d7 20h\u22123Mpc3 and that for \u201cV\u201d run is 31 \u00d7 31 \u00d7 35h\u22123Mpc3. At their respective volumes, they represent 1.8\u03c3 and \u22121.0\u03c3 \ufb02uctuations. The initial condition in the re\ufb01ned region has a mean interparticleseparation of 117h\u22121kpc comoving, dark matter particle mass of 1.07 \u00d7 108h\u22121 M\u2299. The re\ufb01ned region is surrounded by two layers (each of \u223c1h\u22121Mpc) of bu\ufb00er zones with particle masses successively larger by a factor of 8 for each layer, which then connects with the outer root grid that has a dark matter particle mass 83 times that in the re\ufb01ned region. Because we still can not run a very large volume simulation with adequate resolution and physics, we choose these two runs to represent two opposite environments that possibly bracket the average. At redshift z > 1.6, as we will show, the average properties of most quantities concerning DLAs in \u201cC\u201d and \u201cV\u201d runs are not very di\ufb00erent, although the abundances of DLAs in the two runs are already very di\ufb00erent. It is only at lower redshift where we see signi\ufb01cant divergence of some quantities of DLAs between the two runs, presumably due to di\ufb00erent dynamic evolutions in the two runs. 
We choose the mesh re\ufb01nement criterion such that the resolution is always better than 460h\u22121pc physical, corresponding to a maximum mesh re\ufb01nement level of 11 at z = 0. We also ran an additional simulation for \u201cC\u201d run with a factor of two lower resolution to assess the convergence of the results which we name \u201cC/2\u201d run and, as we will show in the Appendix, the convergence is excellent for all quantities examined here. The simulations include a metagalactic UV background (Haardt & Madau 1996), and a model for shielding of UV radiation by neutral hydrogen (Cen et al. 2005). They also include metallicity-dependent radiative cooling (Cen et al. 1995). Star particles are created in cells that satisfy a set of criteria for star formation proposed by Cen & Ostriker (1992). Each star particle is tagged with its initial mass, creation time, and metallicity; star particles typically have masses of \u223c106 M\u2299. Supernova feedback from star formation is modeled following Cen et al. (2005). Feedback energy and ejected metal-enriched mass are distributed into 27 local gas cells centered \f\u2013 5 \u2013 at the star particle in question, weighted by the speci\ufb01c volume of each cell, which is to mimic the physical process of supernova blastwave propagation that tends to channel energy, momentum and mass into the least dense regions (with the least resistance and cooling). We allow the whole feedback processes to be hydrodynamically coupled to surroundings and subject to relevant physical processes, such as cooling and heating, as in nature. As we will show later, the extremely inhomogeneous metal enrichment process demands that both metals and energy (and momentum) are correctly modeled so that they are transported into right directions in a physically sound (albeit still approximate at the current resolution) way. The primary advantages of this supernova energy based feedback mechanism are three-fold. First, nature does drive winds in this way and energy input is realistic. Second, it has only one free parameter eSN, namely, the fraction of the rest mass energy of stars formed that is deposited as thermal energy on the cell scale at the location of supernovae. Third, the processes are treated physically, obeying their respective conservation laws (where they apply), allowing transport of metals, mass, energy and momentum to be treated self-consistently and taking into account relevant heating/cooling processes at all times. We use eSN = 1 \u00d7 10\u22125 in these simulations. The total amount of explosion kinetic energy from Type II supernovae with a Chabrier IMF translates to eGSW = 6.6\u00d710\u22126. Observations of local starburst galaxies indicate that nearly all of the star formation produced kinetic energy (due to Type II supernovae) is used to power GSW (e.g., Heckman 2001). Given the uncertainties on the evolution of IMF with redshift (i.e., possibly more top heavy at higher redshift) and the fact that newly discovered prompt Type I supernovae contribute a comparable amount of energy compared to Type II supernovae, it seems that our adopted value for eSN is consistent with observations and within physical plausibility. We use the following cosmological parameters that are consistent with the WMAP7normalized (Komatsu et al. 2010) LCDM model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. 
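To connect the feedback parameters quoted above to familiar supernova energetics, the following back-of-the-envelope check (ours, not from the paper) converts eSN and eGSW into energy per solar mass of stars formed and, under the additional assumption of roughly one Type II supernova per 100 M⊙ of stars formed for a Chabrier-like IMF, into energy per supernova:

```python
# Back-of-the-envelope check of the feedback energy parameters (illustrative only).
M_SUN = 1.989e33          # g
C_LIGHT = 2.998e10        # cm/s
E_SN = 1.0e-5             # eSN adopted in the simulations (fraction of stellar rest-mass energy)
E_GSW = 6.6e-6            # Type II SN kinetic-energy fraction for a Chabrier IMF, as quoted

erg_per_msun_deposited = E_SN * M_SUN * C_LIGHT**2    # ~1.8e49 erg per Msun of stars formed
erg_per_msun_snkinetic = E_GSW * M_SUN * C_LIGHT**2   # ~1.2e49 erg per Msun of stars formed

MSUN_PER_SN = 100.0       # assumed stellar mass formed per Type II SN (our assumption)
print(f"deposited energy per SN:  {erg_per_msun_deposited * MSUN_PER_SN:.1e} erg")  # ~1.8e51
print(f"SN kinetic energy per SN: {erg_per_msun_snkinetic * MSUN_PER_SN:.1e} erg")  # ~1.2e51
```

Both numbers land near the canonical ~10^51 erg per core-collapse supernova, which is the sense in which the adopted eSN is described above as energetically and physically plausible.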
Convergence tests of the results are presented separately in Appendix A in order not to disrupt the flow of the presentation in the results section. The tests show that our results are quite converged and should be robust at the accuracies concerned here, suggesting that our resolution has reached an adequate level for the present study. The reader may go to the Appendix at any time to gauge the convergence of the relevant computed quantities. Given that most of the contributions to DLA incidence come from galaxies of mass ~10^11 M⊙ that are well above our resolution limit, the results of our convergence tests are self-consistent. 2.2. Simulated Galaxy Catalogs We identify galaxies in our high resolution simulations using the HOP algorithm (Eisenstein & Hu 1999), operated on the stellar particles, which is tested to be robust and insensitive to specific choices of the concerned parameters within reasonable ranges. Satellites within a galaxy are clearly identified separately. The luminosity of each stellar particle in each of the five Sloan Digital Sky Survey (SDSS) bands is computed using the GISSEL stellar synthesis code (Bruzual & Charlot 2003), by supplying the formation time, metallicity and stellar mass. Collecting the luminosity and other quantities of member stellar particles, gas cells and dark matter particles yields the following physical parameters for each galaxy: position, velocity, total mass, stellar mass, gas mass, mean formation time, mean stellar metallicity, mean gas metallicity, star formation rate, luminosities in the five SDSS bands (and various colors) and others. 2.3. Simulated Damped Lyman Alpha System Samples While our simulations also solve the relevant gas chemistry chains for molecular hydrogen formation (Abel et al. 1997), molecular formation on dust grains (Joung et al. 2009) and metal cooling extended down to 10 K (Dalgarno & McCray 1972), at the resolution of the simulations molecular clouds are not properly modeled. To correct for that, we use the Hidaka & Sofue (2002) observation that at nc = 5 HI cm^-3 the H2 fraction is about 50%, and then implement the following prescription to remove neutral gas in extrapolated high density regions and put it in the H2 phase. In detail, we assume that the density profile is isothermal below our resolution, which translates into an H2 mass fraction of min(1, 0.5(nc/nres)^{-1/2}). Thus, we post-process the neutral hydrogen density in the simulation by the following transformation: nHI(after) = nHI(before) [1 - min(1, 0.5(nc/nHI(before))^{-1/2})], where nHI(before) is the HI density directly from the simulation, and nHI(after) is that after this processing step. A very precise choice of the parameter in the above equation is unimportant; changing 0.5 to 1.0 makes only marginally noticeable differences in the results. The primary effect of doing this is to remove very high HI column DLAs and cause the HI column density distribution function to steepen at log NHI ≥ 22.5, in agreement with observations. In addition, because of that, the total amount of neutral gas in DLAs also becomes convergent and more stable. After the above post-processing step, we shoot rays through the entire refined region of each simulation along all three orthogonal directions using a cell size of 0.915 h^-1 kpc comoving. In practice, this is done piece-wise, one small volume of the simulation box at a time, due to limited computer memory. The spectral bin size is 3 km/s.
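The H2 correction described above is a pointwise transformation of the HI density field applied before the rays are cast; a minimal sketch (our own, with nc = 5 cm^-3 as in the text and the default coefficient of 0.5) could look like this:

```python
import numpy as np

def remove_h2(n_hi, n_c=5.0, coeff=0.5):
    """Move an estimated H2 fraction out of the atomic phase:
    n_HI,after = n_HI,before * (1 - f_H2), with
    f_H2 = min(1, coeff * (n_c / n_HI,before)**(-1/2)).  Densities in cm^-3."""
    n_hi = np.asarray(n_hi, dtype=float)
    f_h2 = np.minimum(1.0, coeff * (n_c / n_hi) ** -0.5)
    return n_hi * (1.0 - f_h2)

# At n_HI = 5 cm^-3 half the gas is assigned to H2; cells above ~20 cm^-3 become fully molecular.
print(remove_h2([1.0, 5.0, 50.0]))   # -> [~0.78, 2.5, 0.0]
```

This also makes explicit why the exact coefficient matters little: raising it from 0.5 to 1.0 only lowers the density above which cells are treated as fully molecular from ~20 cm^-3 to 5 cm^-3.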
All physical effects are taken into account, including temperature broadening and peculiar velocities. Both the intrinsic Lorentzian line profile and Doppler broadening are taken into account for both the Lyα and Si II λ1808 lines, although, in practice, for DLAs, Doppler broadening is important for the Si II λ1808 line and the Lorentzian profile for the Lyα line. All relevant atomic data are taken from Morton (2003). A DLA is defined, as usual, as a system with HI column density larger than 10^20.3 cm^-2. We assume that the fractional abundance of Si II is equal to the fractional abundance of HI. Since, as we will see later, the HI regions of DLAs are "peaky" with well-defined line-of-sight boundaries, and since DLAs are very optically opaque to ionizing photons, any refined treatment of radiative self-shielding, etc., is unlikely to have any significant effect. Note that we have already included a crude self-shielding method during the simulation, which should work well for optically opaque regions. As an aside, one numerical point to note is that, because of the very large dynamic range of the line cross sections as a function of frequency shift from the line center and the delta-function-like cross section shapes in the line core regions, the convolution operations involved in the detailed calculations of optical depths require at least 64-bit precision for floating point numbers. For each DLA, we compute the HI column weighted metallicity, register its position relative to the center of the primary galaxy (i.e., the impact parameter), and for DLAs that are physically connected by at least one cell side in projection we merge them and in the end compute the projected area A of each connected region to define its size rDLA = (A/π)^{1/2}. For each galaxy we also register the maximum velocity width v90,max among its associated DLAs. We are able to identify more than one million DLAs through ray tracing at each redshift examined in each of the runs, so the statistical errors are very small for each specific run at any redshift. But that does not speak to cosmic variance and, as we shall show later, cosmic variance is indeed quite large concerning quantities that directly or indirectly pertain to the number density of DLAs. Other quantities, such as size, metallicity, kinematic properties, etc., however, appear to depend weakly on environment and their variances are small. A DLA "belongs" to the largest galaxy in the region within whose virial radius the DLA lies. For example, a DLA that is physically more closely located to a satellite galaxy that in turn is within the virial radius of a larger galaxy is said to belong to that larger galaxy. 2.4. Kinematic Measures for the Si II Line We do not add instrumental noise to the simulated spectra, but we adopt the same observational procedure to compute the kinematic measures for the Si II absorption lines. For all relevant measures for the Si II line, we follow identically the procedures and definitions in Prochaska & Wolfe (1997). We generate synthetic spectra for both the Lyα and Si II lines with 3 km/s pixels and then smooth them with a 9-pixel boxcar averaging procedure. We define the velocity width of a Si II absorption line associated with a DLA to be the velocity interval containing 90% of the total optical depth, v90.
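On the 3 km/s grid described above, this definition amounts to trimming 5% of the integrated optical depth from each side of the Si II λ1808 profile; the sketch below is our own minimal implementation, not the code used for the paper.

```python
import numpy as np

def velocity_width_v90(v, tau):
    """Velocity interval containing the central 90% of the total optical depth.
    v   : velocity grid in km/s (e.g. 3 km/s pixels)
    tau : optical depth of the (boxcar-smoothed) Si II 1808 profile on that grid."""
    cum = np.cumsum(tau) / np.sum(tau)     # normalized cumulative optical depth
    lo = np.searchsorted(cum, 0.05)        # pixel where 5% of the optical depth is reached
    hi = np.searchsorted(cum, 0.95)        # pixel where 95% is reached
    return v[hi] - v[lo]

# Toy example: a Gaussian optical-depth profile with dispersion 100 km/s gives v90 ~ 329 km/s.
v = np.arange(-1500.0, 1500.0, 3.0)
tau = np.exp(-0.5 * (v / 100.0) ** 2)
print(velocity_width_v90(v, tau))
```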
For the three kinematic shape measures of the Si II line we use all intensity troughs (optical depth peaks) without the 0.1 ≤ I(vpk)/Ī ≤ 0.6 constraint, where Ī is the continuum flux, as re-emphasized by Prochaska & Wolfe (2010). The kinematic shape measures, fmm, fedg and f2pk, are defined exactly the same way as in Prochaska & Wolfe (1997). 3. Results 3.1. A Garden Variety of DLAs [Figure 1 panels omitted: maps of pressure, temperature, atomic hydrogen density, baryon overdensity, metallicity and stellar surface density in x-y (kpc), plus line-of-sight profiles of nHI and nH,tot, [Z/H], vp (km/s), FLya and FSi II; DLA properties indicated on the figure: log N(HI)=21.7, log Mh=12.6, v90=129 km/s, fmm=0.02, fedge=0.12, f2pk=-0.81.] Fig. 1.— Top left: temperature (K); middle left: atomic hydrogen density (cm^-3); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm^-3); the above maps have a thickness of 1.3 kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density (L⊙/kpc^2); these two maps are projected over the virial diameter of the galaxy. Included in the pressure map is the peculiar velocity field, with 5 kpc corresponding to 500 km/s. The five panels on the right column, from top to bottom, are: atomic hydrogen density (cm^-3; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Lyα flux and finally Si II λ1808 flux. The top three panels are plotted against physical distance, whereas the bottom two are plotted versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. We first present a gallery of twelve DLAs at z = 3.1 (Figures 1-12) to show the richness of their physical properties. For each DLA six maps and five quantitative panels are displayed. In each of the six maps the line of sight (LOS) intercepting the DLA is shown as a white horizontal line and the exact location of the primary component of the DLA is at the intersection with another, white vertical line. In cases with multiple components along the LOS, the primary component coincides with the highest neutral density.
Four of the maps top left (temperature in Kelvin), middle left (atomic hydrogen density in cm\u22123), bottom \f\u2013 9 \u2013 y (kpc) pressure 40 50 60 70 80 90 100 110 20 30 40 50 60 70 80 3 3.1 3.2 3.3 3.4 3.5 3.6 3.7 y (kpc) atomic hydrogen density 40 50 60 70 80 90 100 110 20 30 40 50 60 70 80 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 x (kpc) y (kpc) metallicity 40 50 60 70 80 90 100 110 20 30 40 50 60 70 80 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 40 50 60 70 80 90 100 110 20 30 40 50 60 70 80 3 3.5 4 4.5 5 5.5 6 6.5 baryon overdensity 40 50 60 70 80 90 100 110 20 30 40 50 60 70 80 0 1 2 3 4 x (kpc) stellar surface density 40 50 60 70 80 90 100 110 20 30 40 50 60 70 80 2 4 6 8 10 0 10 20 30 40 50 60 70 80 90 100110120 \u22125 \u22124 \u22123 \u22122 nHI & nH,tot 0 10 20 30 40 50 60 70 80 90 100110120 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 80 90 100110120 \u2212200 0 200 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=20.5 log Mh=12.3 v90=219km/s fmm=0.18 fedge=0.15 f2pk=0.15 FLya \u2212200 \u2212100 0 100 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 2.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. left (metallicity in solar units) and top middle (pressure in units of Kelvin cm\u22123) have a physical thickness of 1.3kpc. Also indicated in top middle (pressure) is the peculiar velocity \ufb01eld with a scaling of 5kpc corresponding to 500km/s. The remaining two maps middle middle (baryonic overdensity) and bottom middle (stellar surface density in M\u2299/kpc2) are projected over the entire galaxy of depth of order of the virial diameter of the primary galaxy. While these two projected maps give an overall indication of relative projected location of the DLA respect to the galaxy, the exact depth of the DLA inside the paper is, however, not shown. When we quote distance from the galaxy, we mean the projected distance on the paper plane. 
The \ufb01ve panels on the right column show various physical quantities along the line of \f\u2013 10 \u2013 y (kpc) pressure 20 30 40 50 60 70 30 40 50 60 70 80 3 3.1 3.2 3.3 3.4 3.5 3.6 y (kpc) atomic hydrogen density 20 30 40 50 60 70 30 40 50 60 70 80 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 20 30 40 50 60 70 30 40 50 60 70 80 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 20 30 40 50 60 70 30 40 50 60 70 80 3 3.5 4 4.5 5 5.5 6 6.5 baryon overdensity 20 30 40 50 60 70 30 40 50 60 70 80 0 1 2 3 4 5 x (kpc) stellar surface density 20 30 40 50 60 70 30 40 50 60 70 80 2 4 6 8 10 0 10 20 30 40 50 60 70 80 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 70 80 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 80 \u2212400 \u2212200 0 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=20.8 log Mh=11.9 v90=303km/s fmm=0.19 fedge=0.09 f2pk=\u22120.86 FLya \u2212400 \u2212300 \u2212200 \u2212100 0 100 0.96 0.97 0.98 0.99 1 v(LOS) (km/s) FSi II Fig. 3.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. sight (i.e, along the white horizontal line shown in the maps on the left two columns). From top to bottom they are: atomic hydrogen density (in cm\u22123; red solid curve with a narrower shape) along with total hydrogen density (dotted green curve with a more extended shape), gas metallicity (in solar units), line-of-sight proper peculiar velocity (in km/s), Ly\u03b1 \ufb02ux and \ufb02ux for Si II \u03bb1808 line. The top three panels are plotted against physical distance, whereas the bottom two panels are plotted versus the LOS velocity. Also indicated in the Ly\u03b1 \ufb02ux panel (second from bottom) are several quantitative measures of the DLA, including the neutral hydrogen column density (log N(HI)), the halo mass of the primary galaxy in the system (log Mh), the velocity width of the associated Si II line (v90) and three kinetic measures of the Si II line, fmm, fedg, f2pk. We now describe in turn each of the twelve DLA examples. 
\f\u2013 11 \u2013 y (kpc) pressure 0 10 20 30 40 50 20 30 40 50 60 70 3 3.2 3.4 3.6 3.8 4 y (kpc) atomic hydrogen density 0 10 20 30 40 50 20 30 40 50 60 70 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 0 10 20 30 40 50 20 30 40 50 60 70 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 0 10 20 30 40 50 20 30 40 50 60 70 3 3.5 4 4.5 5 5.5 6 6.5 baryon overdensity 0 10 20 30 40 50 20 30 40 50 60 70 1 2 3 4 5 x (kpc) stellar surface density 0 10 20 30 40 50 20 30 40 50 60 70 2 4 6 8 10 12 0 10 20 30 40 50 60 70 \u22125 \u22124 \u22123 \u22122 \u22121 nHI & nH,tot 0 10 20 30 40 50 60 70 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 \u2212200 0 200 400 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=20.3 log Mh=11.9 v90=318km/s fmm=0.79 fedge=0.92 f2pk=0.92 FLya \u2212100 0 100 200 300 400 v(LOS) (km/s) FSi II Fig. 4.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. Figure 1 shows a DLA produced by the LOS intersecting the tip of a long chimney at a distance of \u223c30kpc, The velocity structure suggests that it is still moving away (upwards) from the galaxy at a velocity of \u223c500km/s, likely caused by galactic winds. The metallicity at the interception is [Z/H] \u223c\u22121.5 but there are large gradients and variations of metallicity (we found that some other very nearby DLA systems intersecting di\ufb00erent parts of the chimney have metallicity [Z/H] < \u22123, not shown), suggesting very inhomogeneous enrichment process by galactic winds. While the primary galaxy has a mass of 4 \u00d7 1012 M\u2299, i.e., a 1-d velocity dispersion of 500km/s, the kinetic width of this line is only 129km/s with NHI = 21.7. Although the Ly\u03b1 \ufb02ux appears as a single component, as will be the case in all subsequent examples, the Si II absorption has several separate features, re\ufb02ecting the two-peak structure of the absorbing column and complex velocity structure within. We note \f\u2013 12 \u2013 that the nearby satellite galaxies may have triggered the starburst and the galactic winds. The responsible gas for this DLA is probably cooling and con\ufb01ned by external pressure likely due to thermal instability, as seen in the pressure panel. 
y (kpc) pressure 40 50 60 70 80 90 60 70 80 90 100 110 3 3.2 3.4 3.6 3.8 4 4.2 4.4 y (kpc) atomic hydrogen density 40 50 60 70 80 90 60 70 80 90 100 110 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 40 50 60 70 80 90 60 70 80 90 100 110 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 40 50 60 70 80 90 60 70 80 90 100 110 3 3.5 4 4.5 5 5.5 6 6.5 7 baryon overdensity 40 50 60 70 80 90 60 70 80 90 100 110 0 0.5 1 1.5 2 2.5 3 3.5 4 x (kpc) stellar surface density 40 50 60 70 80 90 60 70 80 90 100 110 2 4 6 8 10 0 10 20 30 40 50 60 70 80 90 100110120 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 70 80 90 100110120 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 80 90 100110120 \u2212400 \u2212200 0 200 400 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=21.4 log Mh=12.3 v90=420km/s fmm=0.39 fedge=0.47 f2pk=\u22120.74 FLya \u2212200 \u2212100 0 100 200 300 400 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 5.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. In Figure 2 a DLA of width 219km/s is created jointly by two major components along the sightline, one at x \u223c70kpc of metallicity of [Z/H] \u223c[\u22121.0, \u22120.5] and size of \u223c20kpc at an impact parameter of \u223c30kpc and the other at x \u223c110kpc of metallicity of [Z/H] \u223c0.0 and size of \u223c30kpc. What is striking is many long gaseous structures in this galaxy. As we will see frequently, there are often long gaseous structures connected with galaxies that seem always coincidental with visible galaxy interactions of multiple galaxies or galaxies and satellites in close proximity. We shall call these features \u201cgalactic \ufb01laments\u201d hereafter. It seems likely that some of these galactic \ufb01laments are cold streams (Kere\u02c7 s et al. 2005; Dekel \f\u2013 13 \u2013 y (kpc) 10 20 30 40 10 20 30 40 3 3.5 4 4.5 5 5.5 6 6.5 y (kpc) 10 20 30 40 10 20 30 40 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 1 x (kpc) y (kpc) 10 20 30 40 10 20 30 40 \u22122 \u22121.5 \u22121 \u22120.5 0 10 20 30 40 10 20 30 40 3 4 5 6 7 10 20 30 40 10 20 30 40 1 2 3 4 5 x (kpc) 10 20 30 40 10 20 30 40 2 4 6 8 10 0 10 20 30 40 50 60 70 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 70 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 \u2212200 0 200 400 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=21.5 log Mh=11.3 v90=510km/s fmm=0.59 fedge=0.95 f2pk=0.59 FLya \u2212200 0 200 400 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 
6.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. & Birnboim 2006). However, galactic \ufb01laments found in our simulations appear to be very rich in variety and disparate in metallicity (spanning 3 decades or more in metallicity). In other words, they are not necessarily primordial cold streams. In the case of this DLA, the \ufb01laments are likely made of gas pre-enriched, having cooled (as the pressure panel shows) and now rotating about the galaxy (roughly counter-clockwise). In contrast, note that the DLA in Figure 1 was still moving away from the galaxy. Like in Figure 1, the rich galactic \ufb01laments appear to be associated with signi\ufb01cant satellite structures in close proximity. The Si II absorption has several separate features, re\ufb02ecting the two separate physical components as well as substructures within each component. Figure 3 shows a DLA that is associated with a low metallicity ([Z/H] \u223c[\u22122.0, \u22121.5]) \f\u2013 14 \u2013 y (kpc) pressure 40 50 60 70 50 60 70 80 3 3.5 4 4.5 5 5.5 6 y (kpc) atomic hydrogen density 40 50 60 70 50 60 70 80 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 0 x (kpc) y (kpc) metallicity 40 50 60 70 50 60 70 80 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 40 50 60 70 50 60 70 80 3 4 5 6 7 baryon overdensity 40 50 60 70 50 60 70 80 1 2 3 4 5 x (kpc) stellar surface density 40 50 60 70 50 60 70 80 2 4 6 8 10 12 0 10 20 30 40 50 60 70 80 90 100110120130 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 70 80 90 100110120130 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 80 90 100110120130 \u2212600 \u2212400 \u2212200 0 200 400 600 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=21.7 log Mh=12.5 v90=522km/s fmm=0.60 fedge=0.59 f2pk=\u22121.0 FLya \u2212100 0 100 200 300 400 500 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 7.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. 
The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. \ufb01lament that is feeding a small satellite, which in turn appears to be interacting and possibly feeding the primary galaxy at a projected distance of \u223c20kpc. This is yet another example of interacting galaxies producing rich gas-feeding \ufb01laments, as already seen in Figures 1 and 2. The relatively large width of 303km/s is produced by steep velocity gradient in the region from x \u223c45 to 50kpc. One could see that galactic winds are blowing to the upper left corner by the primary galaxy, whose starburst is likely triggered by the interaction. Figure 4 shows a DLA that is made up by several \ufb01laments at distances of 30 \u221240kpc from the galaxy. The metallicity of all the components is near solar, indicating that these are probably pre-enriched gas cooling due to thermal instability. The velocity structures show that they are falling back towards the galaxy, in a fashion perhaps similar to galactic \f\u2013 15 \u2013 y (kpc) pressure 0 10 20 30 40 50 10 20 30 40 50 3 3.2 3.4 3.6 3.8 y (kpc) atomic hydrogen density 0 10 20 30 40 50 10 20 30 40 50 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 0 10 20 30 40 50 10 20 30 40 50 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 0 10 20 30 40 50 10 20 30 40 50 3 3.5 4 4.5 5 5.5 6 6.5 baryon overdensity 0 10 20 30 40 50 10 20 30 40 50 0.5 1 1.5 2 2.5 3 3.5 4 x (kpc) stellar surface density 0 10 20 30 40 50 10 20 30 40 50 2 4 6 8 10 0 10 20 30 40 50 60 70 \u22125 \u22124 \u22123 \u22122 \u22121 nHI & nH,tot 0 10 20 30 40 50 60 70 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 200 400 600 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=20.4 log Mh=10.5 v90=306km/s fmm=0.78 fedge=0.87 f2pk=0.87 FLya 200 300 400 500 600 0.6 0.7 0.8 0.9 1 v(LOS) (km/s) FSi II Fig. 8.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. fountains (Shapiro & Field 1976), on somewhat extended scales. Once again, it appears that galaxy-galaxy interactions may be responsible for the rich gas \ufb01laments, as seen in Figures 1, 2 and 3. 
There is evidence that winds are blowing upwards from the galaxy. Figure 5 shows another example of a DLA arising from a galaxy with a very rich \ufb01lament system due to galaxy interactions. The primary galaxy is the same as the one shown in Figure 2 and we are now looking at its south side. These \ufb01laments that are responsible for the neutral column of the DLA appear to have been enriched to a level of [Z/H] \u223c\u22121.0 and have cooled to low temperature. The large width of 420km/s is due to the multiple components spanning a spatial range of \u223c40kpc each of physical depth of several kpc and individual velocity width \u2264100km/s. Interestingly, for this DLA system, while most of \f\u2013 16 \u2013 y (kpc) pressure 0 10 20 30 40 50 60 10 20 30 40 50 60 3 3.5 4 4.5 5 5.5 y (kpc) atomic hydrogen density 0 10 20 30 40 50 60 10 20 30 40 50 60 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 0 10 20 30 40 50 60 10 20 30 40 50 60 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 0 10 20 30 40 50 60 10 20 30 40 50 60 3 4 5 6 7 baryon overdensity 0 10 20 30 40 50 60 10 20 30 40 50 60 1 2 3 4 5 x (kpc) stellar surface density 0 10 20 30 40 50 60 10 20 30 40 50 60 2 4 6 8 10 0 10 20 30 40 50 60 \u22125 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 \u22121 0 [Z/H] 0 10 20 30 40 50 60 \u22121000 \u2212800 \u2212600 \u2212400 \u2212200 0 200 400 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=21.4 log Mh=10.4 v90=516km/s fmm=0.19 fedge=0.49 f2pk=\u22120.51 FLya \u2212200 0 200 400 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 9.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. the gas \ufb01laments are now falling back toward the galaxy (not necessarily radially), galactic winds are still blowing towards the upper right corner. Comparison of Figure 5 and Figure 2 indicates that the metallicity in the upper right quadrant of the galaxy is somewhat more metal enriched ([Z/H] \u223c0) than other regions ([Z/H] \u223c\u22121 or lower), consistent with the directions of ongoing galactic winds. This is strongly suggestive that metallicity enrichment process not only is episodic, multi-generational, anisotropic, but also in general possesses no parity. Figure 6 shows a DLA intercepting two \ufb01laments at a small inclined angle, giving rise to a broad physical extension of \u223c25kpc. 
All the visible \ufb01laments appear to run roughly topleft to bottom-right, whereas the metal enriched regions seem to spread out like a butter\ufb02y in \f\u2013 17 \u2013 y (kpc) pressure 0 10 20 30 40 50 60 0 10 20 30 40 50 60 3 3.2 3.4 3.6 3.8 y (kpc) atomic hydrogen density 0 10 20 30 40 50 60 0 10 20 30 40 50 60 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 0 10 20 30 40 50 60 0 10 20 30 40 50 60 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 0 10 20 30 40 50 60 0 10 20 30 40 50 60 3 3.5 4 4.5 5 5.5 6 6.5 baryon overdensity 0 10 20 30 40 50 60 0 10 20 30 40 50 60 0.5 1 1.5 2 2.5 3 3.5 4 x (kpc) stellar surface density 0 10 20 30 40 50 60 0 10 20 30 40 50 60 2 3 4 5 6 7 8 9 10 0 10 20 30 40 50 60 \u22125 \u22124 \u22123 \u22122 \u22121 nHI & nH,tot 0 10 20 30 40 50 60 \u22121 0 [Z/H] 0 10 20 30 40 50 60 \u2212200 0 200 400 600 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=20.4 log Mh=10.5 v90=501km/s fmm=0.79 fedge=0.79 f2pk=0.79 FLya \u2212200 \u2212100 0 100 200 300 400 500 0.97 0.98 0.99 1 v(LOS) (km/s) FSi II Fig. 10.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. the direction roughly perpendicular to the \ufb01laments. This is the classic picture that galactic winds tend to blow in directions that are perpendicular to the \ufb01laments that are feeding the galaxy. The inner regions of the \ufb01laments appear to be less enriched ([Z/H] \u223c\u22122) than the outer regions of the \ufb01laments ([Z/H] \u223c[\u22120.5, 0]), strongly indicative of galactic winds tending to circumvent the denser \ufb01laments. The large width of 510km/s appears to be caused by oppositely moving (i.e., converging) \ufb02ows at x \u223c15 \u221240kpc, probably caused by the bipolar winds interacting with the complex \ufb01lament structures. This galaxy has a mass of 2 \u00d7 1011 M\u2299and we notice that most of its surrounding regions is relatively cold, whereas in Figures (1-5) we consistently see a hot atmosphere permeating the circumgalactic regions. The galaxies in Figures (1-5) all have mass \u22651012 M\u2299, consistent with the mass demarcation of cold and hot accretion modes (Kere\u02c7 s et al. 2005; Dekel & Birnboim 2006). 
\f\u2013 18 \u2013 y (kpc) pressure 0 10 20 30 40 50 60 10 20 30 40 50 60 3 3.5 4 4.5 5 y (kpc) atomic hydrogen density 0 10 20 30 40 50 60 10 20 30 40 50 60 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 0 10 20 30 40 50 60 10 20 30 40 50 60 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 0 10 20 30 40 50 60 10 20 30 40 50 60 3 4 5 6 7 baryon overdensity 0 10 20 30 40 50 60 10 20 30 40 50 60 0.5 1 1.5 2 2.5 3 3.5 4 x (kpc) stellar surface density 0 10 20 30 40 50 60 10 20 30 40 50 60 2 4 6 8 10 0 10 20 30 40 50 60 \u22125 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 \u2212200 0 200 400 600 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=21.6 log Mh=10.7 v90=501km/s fmm=0.0 fedge=0.2 f2pk=0.2 FLya \u2212200 0 200 400 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 11.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. Nevertheless, the existence of cold galactic \ufb01laments seen in Figures (1-5) is consistent with the suggested cold mode of accretion of massive galaxies at high redshift (Dekel et al. 2009). Figure 7 shows a \u201cnormal\u201d DLA where a relatively quiet galactic disk is pierced edge-on. Its large width of 522km/s is simply due to the large halo that the galaxy is residing in of mass 3 \u00d7 1012 M\u2299(at z = 3.1). The surrounding environment seems relatively \u201cpristine\u201d with no widespread metal enrichment at a level of [Z/H] \u2265\u22121. However, the temperature panel indicates that there is a hot halo permeating the entire region and embedding and pressure-con\ufb01ning (see the pressure panel) the cold neutral clouds. It seems likely that this hot gaseous halo is produced by gravitational shocks rather than galactic wind shocks. There are several \ufb01laments attached to the galaxy. 
This is a good example of cold streams feeding \f\u2013 19 \u2013 y (kpc) pressure 0 10 20 30 40 50 60 70 0 10 20 30 40 50 3 4 5 6 7 y (kpc) atomic hydrogen density 0 10 20 30 40 50 60 70 0 10 20 30 40 50 \u22127 \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 x (kpc) y (kpc) metallicity 0 10 20 30 40 50 60 70 0 10 20 30 40 50 \u22122 \u22121.5 \u22121 \u22120.5 0 temperature 0 10 20 30 40 50 60 70 0 10 20 30 40 50 3 4 5 6 7 baryon overdensity 0 10 20 30 40 50 60 70 0 10 20 30 40 50 0 1 2 3 4 5 x (kpc) stellar surface density 0 10 20 30 40 50 60 70 0 10 20 30 40 50 2 4 6 8 10 12 0 10 20 30 40 50 60 70 \u22125 \u22124 \u22123 \u22122 \u22121 0 nHI & nH,tot 0 10 20 30 40 50 60 70 \u22123 \u22122 \u22121 0 [Z/H] 0 10 20 30 40 50 60 70 \u2212200 0 200 400 x (kpc) vp (km/s) \u22124000 \u22122000 0 2000 4000 0 0.2 0.4 0.6 0.8 1 log N(HI)=21.3 log Mh=10.7 v90=504km/s fmm=0.38 fedge=0.96 f2pk=\u22121.0 FLya \u2212200 0 200 400 0 0.2 0.4 0.6 0.8 1 v(LOS) (km/s) FSi II Fig. 12.\u2014 Top left: temperature (K); middle left: atomic hydrogen density (cm\u22123); bottom left: metallicity (solar units); top middle: pressure (Kelvin cm\u22123); the above maps have a thickness of 1.3kpc. Middle middle: baryonic overdensity; bottom middle: SDSS U band luminosity surface density ( L\u2299/kpc2); these two maps are projected over the virial diameter of the galaxy. Included in pressure map is peculiar velocity \ufb01eld with 5kpc corresponding to 500km/s. The \ufb01ve panels on the right column, from top to bottom, are: atomic hydrogen density (cm\u22123; red solid curve) with total hydrogen density (dotted green curve), gas metallicity (solar units), LOS proper peculiar velocity, Ly\u03b1 \ufb02ux and Si II \u03bb1808 \ufb02ux. The top three panels are plotted against physical distance, whereas the bottom two versus LOS velocity. Indicated in the second from bottom panel are properties of the DLA: log N(HI), log Mh, v90, fmm, fedg, f2pk. a massive galaxy by penetrating a hot atmosphere. Figure 8 shows a DLA with a large velocity width arising from a relatively small galaxy of total mass 3 \u00d7 1010 M\u2299. The galactic winds are blowing in the north-east direction that entrain cold neutral clouds with it. The LOS of the DLA intercepts a high velocity component at x = 35 \u221255kpc. The combination of this high velocity component with the low velocity component at x \u223c0 produces the relatively large width of 306km/s. Note that an isotropic Maxwellian velocity distribution of dispersion equal to that of the halo velocity dispersion would only yield a width of v90 = 2.33vvir = 176km/s. Clearly, galactic winds are directly responsible for the large width of this DLA, by entraining cold gas clouds to a high velocity. Figure 9 shows another DLA produced by a small galaxy of mass 2 \u00d7 1010 M\u2299with a \f\u2013 20 \u2013 large velocity width. The galaxy system have multiple, interaction galaxies at close distances. The galactic winds are blowing primarily, in a bipolar fashion, in north-east and south-west direction, roughly perpendicular to the galactic disk, that entrain the cold neutral cloud at x \u223c40kpc to a broad velocity of vx = 0 \u2212400km/s relative to the galaxy itself. In combination with another complex structure at x = 45 \u221270kpc the galactic winds produce a very large width of 516km/s. Note that for this galaxy 2.33vvir = 160km/s. Figure 10 shows yet another DLA arising from a small galaxy but having a large velocity width of 501km/s. 
This is an interesting case where the galactic winds, driven by the primary galaxy, are blowing towards and passing through the satellite galaxy at (x, y) \u223c (40, 20) kpc in the north-east direction. The SDSS u band luminosity map suggests that the satellite galaxy itself is experiencing a starburst and is likely blowing and enhancing the north-east/north winds. A very large positive velocity gradient in the positive x-direction (downstream) of \u223c700 km/s over an LOS physical interval of \u2206x \u223c 20 kpc is produced, resulting in a very large width. Note that 2.33 vvir = 162 km/s for this galaxy. The entrained neutral gas cloud and its downstream appear to have escaped the metal enrichment by ongoing winds and remain at [Z/H] \u223c \u22121, a \u201cshadowing\u201d effect due to dense clouds and filaments. Figure 11 shows another wide DLA from a small galaxy. For this DLA the majority of the column is due to the intersection with the disk of the galaxy, which would have produced a velocity width of \u223c200 km/s on its own. The galactic winds blowing in the south-east direction entrain some cold clouds at x = 35\u221245 kpc to a velocity of up to 500 km/s. Together, a large width of 501 km/s is produced. Given the small mass of the galaxy, the surrounding regions are not embedded in a hot gravitationally shock heated atmosphere. There are some solid angles with low gas column that have been heated by galactic winds, probably triggered by the binary galaxy interaction, as can be seen by comparing the temperature map and the density map. Note that 2.33 vvir = 204 km/s for this galaxy. Finally, Figure 12 shows the last example of a wide DLA from a small galaxy. Two galactic filaments make up this DLA, one at x \u223c 10 kpc and the other at x \u223c 35 kpc. The galactic system is a primary-satellite binary that is interacting, which has likely caused both to experience starbursts. The primary galaxy at (x, y) \u223c (23, 36) kpc is blowing bipolar galactic winds mainly in the north-south direction, whereas the satellite at (x, y) \u223c (35, 35) kpc is blowing bipolar galactic winds in the east-west direction. Together they produce a very complex, multi-stream velocity structure. The total velocity width of this DLA is 500 km/s, although each of the two components individually has a velocity width of \u2264 200 km/s. Note that 2.33 vvir = 200 km/s for the primary galaxy. In summary, we see that DLAs arise in a wide variety of cold gas clouds, from galactic disks to cold streams to cooling gas from galactic winds to cold clouds entrained by hot galactic winds, at a wide range of distances from galaxies, with a wide range of metallicity and in galaxies of all masses from 10^10 \u2212 10^12.5 M\u2299 at z \u223c 3. Inspection of the gallery has already hinted that many large velocity width DLAs may be produced directly or indirectly by galactic winds. That is, directly by entraining cold gas clouds and compressing cold gas clouds with high pressure, and indirectly by enhancing cooling and thermal instability with added metals and shock compression. In addition, the composite nature of many large width DLA systems should also help remove the perceived failure of the standard LCDM model with respect to producing large width systems (e.g., Prochaska & Wolfe 1997). Quantitative results later prove this is indeed the case.
3.2. Kinematic Velocity Width Distribution Functions
Fig. 13.\u2014 Velocity width distribution functions, defined to be the number of DLAs per unit velocity width per unit absorption length, at z = 1.6, 3.1, 4.0. Two sets of simulation results are shown, one for the \u201cC\u201d run (solid symbols) and one for the \u201cV\u201d run (open symbols). The corresponding observational data (Prochaska et al. 2005), spanning the redshift range z = 1.7\u22124.5, are shown as open squares.
We now present more quantitative statistical results on the velocity width distribution functions of DLAs at several redshifts. Since the velocity structure in the Ly\u03b1 flux of a DLA is \u201cdamped\u201d and does not provide the kinematic information of the underlying physical cloud, we follow Prochaska & Wolfe (1997) and define the velocity width, v90, to be the velocity interval containing 90% of the optical depth of the Si II \u03bb1808 absorption line associated with the DLA. Figure 13 shows the velocity width distribution at three redshifts (z = 1.6, 3.1, 4.0), covering most of the observed redshift range. We see a factor of \u223c10 variation from the \u201cC\u201d to the \u201cV\u201d run, indicating the need for a larger statistical set of simulations covering, more densely, different environments before a more precise comparison can be made with observations. Insofar as the observed velocity width distribution function lies in between the two bracketing runs, \u201cC\u201d and \u201cV\u201d, and the shapes of the functions are in excellent agreement with observations, including the high velocity tail (v90 \u2265 300 km/s), this should be considered a success for the LCDM model: there is no lack of large width DLAs with v90 \u2265 300 km/s in the LCDM simulation. This conclusion is consistent with that of Hong et al. (2010), who studied this issue with a different code and a different feedback implementation. There is a significant difference between our results and theirs in that we find galactic winds are directly responsible for many of the large width DLAs, by entraining neutral dense clouds to large velocities. In addition, they conclude that a large halo mass (\u2265 10^11 M\u2299) is a necessary condition for producing large velocity widths, while we find that a non-negligible fraction of large velocity width DLAs arise in halos less massive than 10^11 M\u2299.
Fig. 14.\u2014 The maximum v90 of all DLAs associated with each galaxy against the halo mass of the galaxy Mhalo, for z = 1.6 and z = 3.1 and for the \u201cC\u201d and \u201cV\u201d runs. The black line v90 = 2.33 vvir is what v90 would be if the velocity distribution were an isotropic Maxwellian distribution with its dispersion equal to vvir and the Si II gas density were constant across the DLA.
To help understand the large velocity width DLAs, we plot in Figure 14 the maximum v90 of all DLAs, v90,max, associated with each galaxy against the halo mass of the galaxy, Mhalo.
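To make the v90 measure concrete, the following is a minimal illustrative sketch in Python (not the actual analysis pipeline of this paper) of how the velocity interval containing the central 90% of the integrated Si II \u03bb1808 optical depth could be extracted from a simulated sightline; the array names and the 5%/95% integration convention are assumptions.

import numpy as np

def v90(velocity_kms, tau):
    # Velocity interval containing the central 90% of the integrated
    # optical depth (Prochaska & Wolfe 1997 style measure).
    # velocity_kms : increasing 1D array of LOS velocities (km/s)
    # tau          : Si II 1808 optical depth on the same grid
    cum = np.cumsum(tau)
    cum = cum / cum[-1]                        # normalized cumulative optical depth
    v_lo = np.interp(0.05, cum, velocity_kms)  # velocity enclosing 5% of tau
    v_hi = np.interp(0.95, cum, velocity_kms)  # velocity enclosing 95% of tau
    return v_hi - v_lo

# Quick check: a single Gaussian component of 1D dispersion 100 km/s
# should give v90 close to 3.29 * 100 km/s.
v = np.linspace(-1000.0, 1000.0, 4001)
tau = np.exp(-0.5 * (v / 100.0) ** 2)
print(v90(v, tau))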
The black line v90 = 2.33 vvir is what v90 would be if the velocity distribution were an isotropic Maxwellian distribution with its dispersion equal to vvir and the Si II gas density were constant across the DLA. We see that the Maxwellian velocity distribution (the black line) approximately provides a lower bound to v90, although there is, unsurprisingly, some fraction of systems that lie below it (see Figure 1 for an example). What is very interesting is that at z = 3.1 there is a large number of galaxies whose v90,max are substantially larger than what vvir could produce, i.e., \u201csuper-gravitational motion\u201d in the terminology of Hong et al. (2010). This super-gravitational motion is produced by galactic winds, as we have seen clearly in Figures 8, 9, 10, 11 and 12 in \u00a73.1. We also note that at z = 1.6, for both the \u201cC\u201d and \u201cV\u201d runs (and especially the \u201cC\u201d run), the correlation between v90 and vvir becomes substantially better with much reduced scatter, and the excess of DLAs with large v90/vvir is largely removed. This is circumstantial but strong evidence that galactic winds are responsible for most of the large v90/vvir DLAs, because of higher star formation activity, hence stronger galactic winds, at z = 3.1 than at z = 1.6. Figure 16 below will further strengthen this point. It appears that the redshift evolution at a fixed environment is relatively mild in the redshift range z = 4.0 to z = 1.6. We speculate that the weak evolution of the velocity width distribution from z = 4.0 to z = 1.6 may be coincidental and attributable to two countering processes: growth of halo mass, hence virial velocity, with time and diminution of super-gravitational motion produced by galactic winds with time (due to reduced star formation activity at z \u2264 2). This prediction of a weak evolution of the velocity width distribution with redshift is verifiable with a future larger DLA sample and is a powerful test of the non-gravitational origin of a large fraction of the large width systems. Figure 14 does not, however, fairly characterize the relative contribution of halos of different masses to the velocity width distribution function, because it does not specify the number of DLAs at a given halo mass. In Figure 15 we show the halo mass probability distribution function for DLAs above three velocity width cuts, v90 \u2265 150, 300, 600 km/s, respectively. We see a clear trend that larger halos make a larger contribution to larger width DLAs, as one would have expected. For example, about one half of all DLAs with v90 \u2265 600 km/s arise in halos of mass greater than 10^12 M\u2299 at z = 3.1, whereas that division line drops to 2 \u00d7 10^11 M\u2299 for v90 \u2265 150 km/s. It should be noted that the ratio of the virial velocity of a halo of mass 2 \u00d7 10^11 M\u2299 to that of 10^12 M\u2299 is 0.58, significantly greater than 0.25 = 150/600, indicating an overweighting of DLA cross section by large galaxies. For moderate to large velocity widths of v90 \u2265 150 km/s, halos of mass 1 \u00d7 10^11 M\u2299 dominate the contribution to DLA incidence, largely in agreement with Hong et al. (2010). Slightly at odds with Hong et al. (2010), however, we find a significant fraction of these relatively wide systems arising in galaxies of mass less than 1 \u00d7 10^11 M\u2299: (24%, 18%, 12%) of DLAs with velocity width larger than (150, 300, 600) km/s are due to galaxies with mass less than 1 \u00d7 10^11 M\u2299.
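As a brief aside on the 2.33 vvir reference line used in Figures 14 and 16, the coefficient can be recovered in a couple of lines under the assumption (ours, not spelled out in the text) that the line-of-sight velocities are Gaussian with dispersion vvir/\u221a2, as for an isothermal halo; the central 90% of a Gaussian spans 2 \u00d7 1.645 standard deviations.

from math import sqrt
from scipy.stats import norm

half = norm.ppf(0.95)            # ~1.645: half-width of the central 90% in sigma units
coeff = 2.0 * half / sqrt(2.0)   # assumes sigma_LOS = vvir / sqrt(2)
print(round(coeff, 2))           # 2.33, the coefficient quoted in the captions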
Fig. 15.\u2014 The DLA-incidence-weighted halo mass probability distribution function for DLAs above three velocity width cuts, v90 \u2265 150, 300, 600 km/s, at z = 3.1 for the \u201cC\u201d run. Note that a DLA associated with a satellite galaxy or any gas cloud within the virial radius is given the halo mass of the primary galaxy.
We note that our definition of associating DLAs with galaxies biases their association toward larger galaxies; Figure 3 gives an example, where the DLA is defined to arise from the larger galaxy of mass 8 \u00d7 10^11 M\u2299, even though it is more closely related to a much smaller satellite galaxy that is orbiting around the larger galaxy. Our results are perhaps unsurprising in the sense that one would have expected that galactic winds, when they are blowing, should be stronger, or at least not weaker, in dwarf starburst galaxies than in larger galaxies, thanks to the shallow gravitational potential wells of the former, when cold gas is still abundant at high redshift. Both Figure 15 and the gallery pictures in \u00a73.1 confirm this point. Galactic winds, however, could be weaker in dwarf galaxies if star formation is disproportionately less vigorous. This may be the case at lower redshift, as shown in Figure 16. What is interesting, and further evidence, is that at z = 0 the dwarf galaxies in the \u201cV\u201d run appear to have more super-gravitational motion than in the \u201cC\u201d run, simply because the former are gas richer and have higher star formation rates than the latter. Thus, it seems that galactic winds are a bivariate function of galaxy mass and star formation rate, in a fashion that is consistent with observations (e.g., Martin 2005).
Fig. 16.\u2014 The maximum v90 of all DLAs associated with each galaxy against the halo mass of the galaxy Mhalo at z = 0 for the \u201cC\u201d and \u201cV\u201d runs. The black line v90 = 2.33 vvir is what v90 would be if the velocity distribution were an isotropic Maxwellian distribution with its dispersion equal to vvir and the Si II gas density were constant across the DLA.
3.3. Si II Line Profile Shape Measures
Having found an overall good agreement with observations with respect to the velocity width distribution, we now turn to shape measures of the Si II \u03bb1808 absorption line profile. Before comparing to observational data from Prochaska & Wolfe (1997), we shall first try to understand the relationship among the optical depth of a Si II line, HI column density, metallicity and velocity width. Assuming that the optical depth profile of the Si II line is a simple top-hat (assuming a different profile, such as a Gaussian, makes no material difference for our purpose), it can be shown that \u03c4_Si II = 0.01 (N_HI / 2 \u00d7 10^20 cm^\u22122) (Z / Z\u2299) (v90 / 100 km/s)^\u22121, (1) where Z is the metallicity of the DLA in solar units. The left panel of Figure 17 shows v90 as a function of log NHI for Z = 0.1 Z\u2299. As expected, an increase in velocity width requires a corresponding increase in column density to produce the same optical depth.
More important is that, quantitatively, in order to achieve an optical depth of 0.1 with a width of v90 \u223c 100 km/s, a DLA column of \u223c 2 \u00d7 10^22 cm^\u22122 is required if the DLA is composed of one single component with [Z/H] = \u22121. Since the abundance of DLAs with NHI \u2265 10^22.5 cm^\u22122 declines rapidly (see Figure 21) but the abundance of Si II lines peaks near v90 \u223c 100 km/s (Figure 13), this suggests that a significant fraction of Si II lines must have multiple components. To quantitatively illustrate this, we define a new simple two-component measure as follows. If there are at least two peaks in the optical depth profile that are separated by more than 0.5 v90 and the ratio of the peak heights is greater than 1/15, we define the DLA to be a two-component DLA. The ratio, 1/15, comes about such that the lower peak is guaranteed to be included in the accounting of the v90 interval, although changing it to, say, 1/10 makes no dramatic difference in the results. Note that DLAs with more than two components are included as two-component systems.
Fig. 17.\u2014 Left panel: v90 as a function of log NHI assuming Z = 0.1 Z\u2299, for \u03c4_Si II 1808 = 0.1, 0.5, 2.0. Right panel: the percentage of DLAs that have multiple components, as a function of v90.
The right panel of Figure 17 shows the percentage of two-component DLAs as a function of v90. In good agreement with the simple expectation, we see that at v90 = 100 km/s about 50% of DLAs have more than one component, and that number increases to \u223c 90% at v90 = 300 km/s. This result is also consistent with the anecdotal evidence shown in the gallery examples in \u00a73.1, where most of the large width DLAs contain more than one physical component. We now turn to the three kinematic shape measures defined in Prochaska & Wolfe (1997), fmm, fedg, f2pk, representing, respectively, measures of the symmetry, leading-edgeness and two-peakness of the profile of Si II \u03bb1808 absorption lines associated with DLAs (see the bottom right panels of the gallery pictures in \u00a73.1). Figures (18, 19, 20) show comparisons of simulation results with observations at three redshifts, z = 1.6, z = 3.1 and z = 4.0. We see that the overall agreement between simulations and observations is excellent, with K-S tests (indicated in the figures) for both runs (\u201cC\u201d and \u201cV\u201d) at the three compared redshifts (z = 1.6, 3.1, 4.0) all being at acceptable levels. Our results are in good agreement with one of the models with feedback in Hong et al. (2010), except for the case of f2pk: our simulations find acceptable K-S test values of 26-29%, 4-34% and 23-34% at z = 1.6, 3.1 and 4.0, respectively, whereas they find that none of their models have probability higher than 5% at z = 3.1.
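Returning to the two-component measure defined above, a minimal sketch of how such a classifier and the top-hat relation of Equation (1) could be implemented is given below; the simple local-maximum peak finder is an illustrative assumption, while the 0.5 v90 separation and 1/15 height-ratio thresholds follow the text.

import numpy as np

def tau_si_ii_tophat(n_hi_cm2, z_over_zsun, v90_kms):
    # Equation (1): top-hat Si II 1808 optical depth of a single-component DLA.
    return 0.01 * (n_hi_cm2 / 2.0e20) * z_over_zsun * (100.0 / v90_kms)

def is_two_component(velocity_kms, tau, v90_kms, sep_frac=0.5, ratio_min=1.0 / 15.0):
    # Two-component DLA: at least two local maxima of tau separated by more
    # than sep_frac * v90, with a peak-height ratio greater than ratio_min.
    idx = np.arange(1, len(tau) - 1)
    peak_mask = (tau[idx] >= tau[idx - 1]) & (tau[idx] > tau[idx + 1])
    peaks = idx[peak_mask]
    for i in range(len(peaks)):
        for j in range(i + 1, len(peaks)):
            sep = abs(velocity_kms[peaks[j]] - velocity_kms[peaks[i]])
            lo, hi = sorted((tau[peaks[i]], tau[peaks[j]]))
            if sep > sep_frac * v90_kms and lo / hi > ratio_min:
                return True
    return False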
We speculate that differences in the detailed treatments of the metal transport process, as well as in the feedback prescription, between our simulations and theirs may have partly contributed to this difference; with detailed metal transport we find very inhomogeneous metallicity distributions across space and among DLAs in our simulations (see Figure 23 below), whereas they assume a constant metallicity of [Z/H] = \u22121 for all DLAs. It is also noted that the metallicity distributions of our simulations in the redshift range z = 1.6\u22124.0 are in excellent agreement with observations (Figure 23).
Fig. 18.\u2014 Left set of four panels: fmm distributions for the \u201cC\u201d run at redshift z = 1.6 (top left), z = 3.1 (top right) and z = 4.0 (bottom left). For z = 1.6 we compared to observed DLAs in the redshift range z = 1.2\u22122.2; for z = 3.1 we compared to observed DLAs at z = 2.9\u22123.3; for z = 4.0 we compared to observed DLAs at z = 3.8\u22124.2. The observed sample is an updated version of Prochaska & Wolfe (1997), shown as the black histogram. Also shown in each panel is the K-S test probability that the two distributions (computed and observed) are drawn from the same underlying distribution. In the bottom right panel, we compare the computed z = 1.6 and z = 3.1 distributions along with the K-S test probability to show a significant evolution of this shape distribution function with redshift. Right set of four panels: fmm distributions for the \u201cV\u201d run.
In the bottom-right panels of each four-panel set in Figures (18, 19, 20) we show a comparison between the z = 1.6 and z = 3.1 distributions for each of the shape statistics and find that there is significant evolution in all three shape measures. The current small observational sample does not allow for such a test. Our results demonstrate that the standard LCDM model, with a proper modeling of astrophysical processes, including galaxy formation and feedback in the forms of mechanical feedback and metal enrichment, can successfully produce Si II line shapes that are in good agreement with observations.
Fig. 19.\u2014 Same as Figure 18, but for the fedg distributions (\u201cC\u201d run in the left set of four panels, \u201cV\u201d run in the right set).
3.4. Column Density Distribution, Line Density and \u2126g(DLA) Evolution
Let us now address the fundamentally important observable: the column density distribution of DLAs and its evolution. Figure 21 shows the column density distribution at several redshifts from z = 0 to z = 4. Where comparisons can be reliably made with observations, at z = 2.5, z = 3.1 and z = 4, we see that the overdense run \u201cC\u201d and the underdense run \u201cV\u201d appropriately bracket the observational data in amplitude. Similar to the situation for the velocity distribution function (Figure 13), the strong environmental dependence of the column density distribution renders it impractical to make rigorous comparisons between the simulations and observations. Given that the amplitude of the observed column density distribution lies between that of the \u201cC\u201d run and that of the \u201cV\u201d run, and the shapes of both simulated functions are in reasonable agreement with observations, we tentatively conclude that the standard LCDM model can reasonably reproduce the observed column density distribution. Note that the shape at the highest column end depends on the treatment of high density regions, for which we have used an empirical relation. Ultimately, when pc resolution is reached, we can make more definitive tests.
Fig. 20.\u2014 Same as Figure 18, but for the f2pk distributions (\u201cC\u201d run in the left set of four panels, \u201cV\u201d run in the right set).
What is also interesting is that the variations between different environments are larger than the redshift evolution of the column density distribution in each run. It is further noted that, as seen in the lower-left panel of Figure 21, the evolution of the column density distribution in the \u201cC\u201d and \u201cV\u201d runs is different. In the \u201cC\u201d run, we see weak evolution from z = 4 to z = 1.6 and then a relatively large drop in amplitude at z = 0. In the \u201cV\u201d run, on the other hand, we see practically little evolution from z = 3.1 to z = 0. This likely reflects the dynamical stage of a simulated sample, where the \u201cC\u201d run is more dynamically advanced than the \u201cV\u201d run at the same redshift, consistent with the behavior seen in Figure 25 below. The left panel of Figure 22 shows the redshift evolution of the DLA line density, defined to be the number of DLAs per unit absorption length. The right panel of Figure 22 shows the redshift evolution of the neutral gas density in DLAs. Inherited from the situation shown in Figure 21, there is a large variation of both plotted quantities between the two (\u201cC\u201d and \u201cV\u201d) runs. What is reassuring is that the observed data lie sensibly between the results from these two bracketing environments.
Fig. 21.\u2014 Column density distributions, defined to be the number of DLAs per unit column density per unit absorption length, at z = 2.5 (lower right), z = 3.1 (upper left) and z = 4.0 (upper right), separately, and together for z = 0, 1.6, 3.1, 4.0 (lower left). In each panel, two sets of simulation results are shown, one for the \u201cC\u201d run (solid dots) and one for the \u201cV\u201d run (open circles). The corresponding observational data for each of the individual redshifts are an updated version with SDSS DR7 from Prochaska et al. (2005), shown as open squares.
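A short sketch of the bookkeeping behind a column density distribution of the kind shown in Figure 21 follows; the bin edges and the total absorption path length are placeholder inputs, not the values used in the simulations.

import numpy as np

def f_hi(log_nhi_sample, total_dX, edges=None):
    # f_HI(N, X): number of DLAs per unit column density per unit
    # absorption length, estimated from a list of log10 N_HI values.
    if edges is None:
        edges = np.arange(20.3, 22.7, 0.2)         # log10 N_HI bin edges
    counts, edges = np.histogram(log_nhi_sample, bins=edges)
    dN = 10.0 ** edges[1:] - 10.0 ** edges[:-1]    # linear bin widths (cm^-2)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / (dN * total_dX)

# Placeholder usage with a mock sample of 5000 columns over dX = 1e4.
rng = np.random.default_rng(0)
centers, f = f_hi(20.3 + rng.exponential(0.35, size=5000), total_dX=1.0e4)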
If one assumes that the cosmic mean of each of the two plotted quantities should lie between the \u201cC\u201d and \u201cV\u201d runs, reading the range spanned by the two runs suggests that the LCDM model is likely to agree with observations to within a factor of \u223c2 with respect to both quantities, although what the overall temporal shape will look like is difficult to guess. To firmly quantify these important observables and to more precisely assess the agreement/disagreement between the predictions of the LCDM model and observations, a larger set of simulations sampling, more densely, different environments in a statistically correct fashion will be necessary, as is a more accurate treatment of the transition from atomic to molecular hydrogen in very high density regions (which likely affects the shape at the high column density end). We reserve this for future work.
Fig. 22.\u2014 Left panel: the redshift evolution of the DLA line density for the \u201cC\u201d run (solid dots) and the \u201cV\u201d run (solid squares). Right panel: the redshift evolution of the neutral gas density in DLAs for the same two runs. In both panels, the observational data at z > 2 are an updated version with SDSS DR7 from Prochaska et al. (2005), shown as open squares, and the observational data at z < 2 are from Rao et al. (2006), shown as open circles.
3.5. Metallicity Distribution and Evolution
The current set of simulations is vastly superior to those used in our earlier work addressing the observed relatively weak but non-negligible evolution of DLA metallicity (Cen et al. 2003), and here we return to this critical issue. Figure 23 shows the DLA metallicity distributions at four redshifts, z = 0, 1.6, 3.1, 4.0. For the three redshifts, z = 1.6, 3.1, 4.0, where comparisons can be made, we find that the agreement between simulations and observations is excellent, as K-S tests show. This is a non-trivial success, given that our feedback prescription has essentially one free parameter, namely the supernova energy, which drives the galactic winds that transport energy, metals and mass throughout the interstellar (ISM), circumgalactic (CGM) and intergalactic (IGM) space. Furthermore, the absolute amount of metals is totally fixed by requiring that 25% of the stellar mass, with metallicity equal to 10 Z\u2299, be returned to the ISM, CGM and IGM. The agreement indicates that our choices of both the supernova ejecta mass and its metallicity and the explosion energy, which are inspired by theories of stellar interiors and direct observations, may provide a reasonable approximation of truth.
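The K-S probabilities quoted here and in the shape-measure comparisons are standard two-sample tests; a minimal sketch with placeholder arrays standing in for the simulated and observed samples is given below.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
sim_zh = rng.normal(-1.4, 0.6, size=400)   # placeholder simulated DLA metallicities [Z/H]
obs_zh = rng.normal(-1.5, 0.6, size=120)   # placeholder observed sample

# p is the probability that the two samples are drawn from the same
# underlying distribution, the quantity indicated in each panel.
stat, p = ks_2samp(sim_zh, obs_zh)
print(stat, p)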
We see that the peak of the DLA metallicity distribution evolves from [Z/H] = \u22121.5 at z = 3\u22124, to [Z/H] = \u22120.75 at z = 1.6, and to [Z/H] = \u22120.5 at z = 0. Thus, both simulations and nature indicate that there is a weak but real evolution in DLA metallicity.
Fig. 23.\u2014 The DLA metallicity distributions at four redshifts, z = 0, 1.6, 3.1, 4.0, for both the \u201cC\u201d (red histograms) and \u201cV\u201d (green histograms) runs. The observational data are from Prochaska et al. (2005), shown as black histograms. Because there is non-negligible evolution, the comparisons between simulations at a given redshift are only made with observed DLAs within a narrow redshift window, as shown. Probabilities that the simulated and observed samples are drawn from the same underlying distribution are indicated in each panel, separately for the \u201cC\u201d and \u201cV\u201d runs.
What is also important to note is that, in agreement with observations, simulations indicate that the distribution of metallicity is very wide, spanning three or more decades at z = 1.6\u22124. This wide range reflects the rich variety of neutral gas that composes the DLA population, from relatively pristine gas clouds falling onto or feeding galaxies, to metal-enriched cold clouds that are falling back to (galactic fountain) or still moving away from (due to entrainment by galactic winds) galaxies, to cold neutral gas clouds in galactic disks. There is a metallicity floor at [Z/H] \u223c \u22123 at z = 1.6\u22124, and that floor moves up to [Z/H] \u223c \u22121.5 by z = 0, consistent with observations (Prochaska et al. 2003). The distribution at z = 0 is significantly narrower, partly reflecting the overall enrichment of the IGM and partly due to the much reduced variety of DLAs, with galactic disks becoming a more dominant contributor to DLAs (see discussion below). Ellison et al. (2010) find that proximate DLAs (PDLAs), those within a velocity distance from the QSO of \u2206v < 3000 km/s, seem to have metallicity higher than the more widely studied, intervening DLAs. It seems conceivable that the total sample of PDLAs plus conventional (intervening) DLAs may somewhat shift the metallicity distribution to the right, perhaps bringing it into still better agreement with our simulations.
Fig. 24.\u2014 Left panel: the cumulative velocity width probability function for two subsets of the DLA sample, divided by DLA metallicity at [Z/H] = \u22121, at z = 3.1 from the \u201cC\u201d run; the results for the \u201cV\u201d run, not shown, are nearly identical.
The observational data are an updated version of Prochaska & Wolfe (1997), divided into two subsets such that the ratio of the number of DLAs in the two subsets is equal to that of the simulated sample, to enable a fair comparison. The observed data points are slightly shifted to the right by a small amount for clearer reading. Right panel: z = 0 from the \u201cC\u201d run.
Observations have found a strong positive correlation between galaxy mass and metallicity (e.g., Erb et al. 2006). We divide the simulated DLA sample at z = 3.1 into two subsets, one with metallicity less than [Z/H] = \u22121 and the other more than [Z/H] = \u22121. We then compute the velocity width functions separately for each subset, which are shown as solid dots (lower metallicity) and solid squares (higher metallicity) in the left panel of Figure 24. What we see is that there is a small excess of large velocity width DLAs for the higher metallicity subset compared to the lower metallicity one. This is of course in the sense that is consistent with the observed metallicity-mass relation. However, the current observational data sample is consistent with simulations, and the difference between the two simulated subsets and between the two observed subsets is statistically insignificant. A larger sample (by a factor of 4) may allow for a statistically significant test. Do we expect a larger difference in the disk model (Wolfe et al. 1986; Prochaska & Wolfe 1997)? We do not have a straight answer to this question without very involved modeling. However, we suggest that the picture we have presented, where DLAs arise from a variety of galactic systems, in a variety of locations of widely varying metallicity (see the gallery in \u00a73.1), would be consistent with the small difference found, because the velocity widths of large width DLAs do not strongly correlate with galaxy mass (see Figure 14). In other words, the observed correlation between metallicity and galaxy mass is largely washed out by DLAs that do not arise in disks and whose metallicities do not strongly correlate with galaxy mass. If one combines the information provided by Figure 15 and Figure 24, one may reach a similar conclusion.
Fig. 25.\u2014 The distributions of the distance of DLAs from the center of their galaxy (i.e., impact parameter) for the \u201cC\u201d (red histograms) and \u201cV\u201d (green histograms) runs at redshift z = 0 (top right), z = 1.6 (top left), z = 3.1 (bottom left) and z = 4.0 (bottom right).
The implication may be that DLAs do not arise predominantly in gaseous disks of spiral galaxies at high redshift, in agreement with Maller et al. (2001) and Hong et al. (2010). We shall elaborate further on this significant point. In Figure 25 we show the distribution of physical distance of DLAs from the galactic center (i.e., impact parameter) at four redshifts, z = 0, 1.6, 3.1, 4.0. Since we have shown in Figure 15 that the DLA incidence contribution peaks at \u223c10^11.5 M\u2299, let us make a simple estimate of their size at z \u223c 3. As a reference, let us take the radius of the Milky Way (MW) stellar disk to be 15 kpc.
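The self-similar estimate that follows can be verified with a few lines; this is only a sketch of the scaling r \u221d M^(1/3) (1+z)^(\u22121) (a fixed fraction of the virial radius), and the reference Milky Way halo mass of 10^12 M\u2299 is our assumption rather than a value given in the text.

def disk_radius_kpc(m_halo_msun, z, r_ref_kpc=15.0, m_ref_msun=1.0e12):
    # Self-similar scaling of a fixed fraction of the virial radius:
    # r proportional to M^(1/3) * (1+z)^(-1).
    return r_ref_kpc * (m_halo_msun / m_ref_msun) ** (1.0 / 3.0) / (1.0 + z)

print(disk_radius_kpc(1.0e12, 3.0))     # ~3.8 kpc for a Milky Way analog at z = 3
print(disk_radius_kpc(10 ** 11.5, 3.0)) # ~2.6 kpc, close to the quoted ~2.5 kpc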
Taking the MW to z = 3 self-similarly would give a radius of 3.8 kpc, and for a 10^11.5 M\u2299 galaxy the stellar disk radius would be 2.5 kpc at z = 3, corresponding to 0.40 on the x-axis shown. Observed large galaxies (of mass likely in the range \u223c10^11\u221210^12 M\u2299) at z \u223c 3 appear to have sizes of \u223c1\u221210 kpc (Lowenthal et al. 1997; Ferguson et al. 2004; Trujillo et al. 2006; Toft et al. 2007; Zirm et al. 2007; Buitrago et al. 2008), roughly consistent with the simple scaling. The distance distribution peaks at dDLA \u223c 20\u221230 kpc at z = 3\u22124, which is much larger than the few kpc of the observed (or expected based on z = 0 galaxies) stellar disk size at z \u223c 3. It is noted that the virial radius of a Milky Way size galaxy is about \u223c50 kpc at z = 3, so these gaseous structures occur at about half the virial radius at z = 3. Thus, we conclude that at z = 3\u22124 most of the DLAs do not arise from large galactic stellar disks. They appear to come from regions that are \u223c5\u22128 times larger than the stellar disks. The ubiquitous extended structures, the galactic filaments, appear to be at the right distances of dDLA \u223c 20\u221230 kpc, as seen in the gallery examples in \u00a73.1. While the extremely close association of galactic filaments with galaxy interactions suggests that the host galaxies are likely experiencing starbursts, as seen in the gallery examples, the clouds that give rise to DLAs do not appear to have ongoing in situ star formation. Clearly, the two results shown above, that most DLAs do not arise in disks and that most DLAs have low metallicities, are self-consistent. In other words, aside from those DLAs that arise from galactic disks and are metal rich, the vast majority of more metal-poor DLAs do not appear to be forming stars. It may be that, if and when the gas in the galactic filaments forms stars, either the clouds are destroyed by star formation feedback and remove themselves from the DLA category, or they have already been incorporated into the disks of galaxies. We suggest that our model gives a natural explanation for the apparent puzzle of the lack of obvious star formation in gas-rich DLAs (Wolfe & Chen 2006). On the other hand, the inferred cooling rates of DLAs may be provided, in part, by radiative heating from the host galaxy (see Figure 31 below) and possibly in part by compression heating, as we frequently see higher external pressure in \u00a73.1. Figure 26 shows the ratio of the gas metallicity of DLAs, for subsets of DLAs in different column density ranges, to the mean metallicity of ongoing star-forming gas. It is clear that only the high end of the high column density range (NHI \u2265 10^22 cm^\u22122) DLAs are forming stars; most of the DLAs have little star formation. Returning to Figure 25, at z = 1.6 there is a very interesting divergence between the two distributions for the \u201cC\u201d and \u201cV\u201d runs, where the distribution for the \u201cC\u201d run peaks at dDLA \u223c 40 kpc and that for the \u201cV\u201d run at dDLA \u223c 10 kpc. This is consistent with the expectation that the overdense region in the \u201cC\u201d run and the underdense region in the \u201cV\u201d run start to \u201cfeel\u201d the difference in their respective local large-scale density environments and evolve differently dynamically.
That is, in the \u201cC\u201d run gravitational shock heating due to large-scale structure formation begins to significantly affect the cold gas in galaxies, whereas in the \u201cV\u201d run the galaxies have not changed significantly since z = 3\u22124, except that they are now somewhat smaller due to the lower gas density at lower redshift. By z = 0 the two distributions once again become nearly identical; this is rather intriguing and may reflect the following physical picture: while galaxies in the \u201cV\u201d run have by now dynamically \u201ccaught up\u201d with the field galaxies in the \u201cC\u201d run, giving rise to the similar Gaussian-like distribution centered at dDLA = 10 kpc, the original gas-rich galaxies in the \u201cC\u201d run have fallen into the cluster, lost gas and \u201cdisappeared\u201d from the DLA population.
Fig. 26.\u2014 The distribution of the ratio of gas metallicity for DLAs in different column density ranges to the mean metallicity of ongoing star-forming gas, in the \u201cC\u201d run (left set of four panels) at z = 0, 1.6, 3.1, 4.0 and in the \u201cV\u201d run (right set of four panels). We expect that gas with an x-axis value close to or greater than 0 may be forming stars.
While there is almost no DLA that is further away than 50 kpc at z = 3\u22124, there is a second bump at dDLA = 100\u2212300 kpc in the distribution for the \u201cC\u201d run at z = 0. This bump is likely due to gas-rich satellite galaxies orbiting larger galaxies or small groups of mass 10^12\u221210^13 M\u2299. Beyond dDLA = 300 kpc, there is no DLA in the \u201cC\u201d run, which is due to gas starvation of galaxies in still larger groups or clusters at z = 0. With direct inspection of the simulation data we find that there are virtually no gas-rich galaxies within the virial radius of the primary cluster in the \u201cC\u201d run. What is also interesting is that the peak distance of dDLA \u223c 10 kpc at z = 0 is totally consistent with the notion that gaseous disks of field galaxies, like the one in our own Galaxy, significantly contribute to DLAs. The right panel of Figure 24 shows the velocity distributions of two subsets of DLAs, divided at a metallicity of [Z/H] = \u22121 at z = 0. Here we see a very clear difference between the two distributions: the higher metallicity subset has larger velocity widths, i.e., there is a strong positive correlation between metallicity and velocity width at z = 0.
This supports the picture that a large fraction of DLAs arise in gaseous disks of large field galaxies. Most of the DLAs at z = 0 have a higher metallicity of [Z/H] \u2265 \u22121.0, with the overall distribution peaking at [Z/H] = \u22120.5, also providing support for this picture. Therefore, by z = 0 the situation appears to have reversed: galactic disks of large galaxies make a major contribution to DLAs at z = 0. The fact that the peak distance has dropped from 30\u221240 kpc at z = 3\u22124 to 10 kpc at z = 0 is physically due in part to a large decrease (a factor of \u223c100) in the mean gas density of the universe from z = 3\u22124 to z = 0.
3.6. Size Distribution
Fig. 27.\u2014 Left set of four panels: the DLA size distribution at redshift z = 0, 1.6, 3.1, 4.0 for the \u201cC\u201d run. Each individual DLA size rDLA (see text for definition) is shown as red histograms, whereas the total DLA size of a galaxy rtot (see text for definition) is shown as green histograms. Right set of four panels: the DLA size distribution at redshift z = 0, 1.6, 3.1, 4.0 for the \u201cV\u201d run. The observationally inferred DLA size, shown as an open square in both z = 1.6 panels, is from Cooke et al. (2010), and that shown as an open circle in both z = 3.1 panels is from Rauch et al. (2008), with the shown dispersion estimated by this author.
Binary quasars, physical or lensed, provide a unique tool to probe the size of DLAs. Here we present our predictions of the size distributions of DLAs in the LCDM model. As we have described in \u00a72.3, any cells (of size 0.915 h^\u22121 kpc comoving) that are connected by one side in projection are merged into a \u201csingle isolated\u201d DLA. The area of each \u201cisolated\u201d DLA, A, is then used to define the size (radius) of the DLA by rDLA = (A/\u03c0)^1/2. The total areas of all isolated DLAs associated with a galaxy along three orthogonal directions (x, y, z), Ax, Ay and Az, are combined to obtain Atot = (Ax^2 + Ay^2 + Az^2)^1/2, and the total DLA size (radius) of the galaxy is defined to be rtot = (Atot/\u03c0)^1/2. Note that, if DLAs arise from a thin disk, the Atot computed this way will be the exact size of the disk face-on, regardless of its orientation. On the other hand, if each DLA cloud is a sphere, this method overestimates the size (area) by a factor of \u221a3. Figure 27 shows the size (radius) distribution at redshift z = 0, 1.6, 3.1, 4.0 for individual NHI (cm\u22122) % (= 30Msun or (B) 2-4% of\nstellar mass being Population III massive metal-free stars at z~6.
While there\nis no compelling physical reason or observational evidence to support (A), (B)\ncould be fulfilled plausibly by continued existence of some pockets of\nuncontaminated, metal-free gas for star formation. (2) The volume-weighted\nneutral fraction of the IGM of \u27e8fHI\u27e9V ~ 10^-4 at z=5.8 inferred from the SDSS\nobservations of QSO absorption spectra provides enough information to ascertain\nthat reionization is basically complete with at most ~0.1-1% of IGM that is\nun-ionized at z=5.8. (3) Barring some extreme evolution of the IMF, the neutral\nfraction of the IGM is expected to rise quickly toward high redshift from the\npoint of HII bubble percolation, with the mean neutral fraction of the IGM\nexpected to reach 6-12% at z=6.5, 13-27% at z=7.7 and 22-38% at z=8.8.", + "authors": "Renyue Cen", + "published": "2010-07-05", + "updated": "2010-07-06", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.HE" + ], + "main_content": "1. Introduction
How the universe becomes transparent at z \u223c5.8 is debated (Fan et al. 2006; Becker et al. 2007). Whether reionization is complete by z = 5\u22126 has been questioned (Mesinger 2009). What kind of stars reionizes the universe at z \u223c6 remains less than certain. We examine in greater detail this endgame to assess how the reionization process may have proceeded approaching z \u223c5.8, how complete reionization is at z \u223c5.8 and what role Population III (Pop III) stars may have played in the final reionization phase at z \u223c6, in the context of stellar reionization in the standard cold dark matter model (Komatsu et al. 2010). We are also motivated by the exciting possibility of being able to statistically measure the neutral fraction of the IGM at redshifts above six in the coming years, as a variety of techniques are applied to larger samples that will become available. Those include methods based on (1) QSO Stromgren sphere measures (e.g., Wyithe & Loeb 2004; Mesinger et al. 2004), (2) measurements of damping wings of high redshift gamma-ray bursts (GRB) (e.g., Totani et al. 2006), and (3) statistical analyses of high redshift Lyman alpha emitters from a variety of surveys (e.g., Malhotra & Rhoads 2004; Ouchi et al. 2007, 2008; Nilsson et al. 2007; Cuby et al. 2007; Stark et al. 2007; Willis et al. 2008; McMahon et al. 2008; Hibon et al. 2009). In addition, polarization measurements of the cosmic microwave background (CMB) fluctuations by the Planck satellite and others may provide some useful constraints (e.g., Kaplinghat et al. 2003). Finally, the James Webb Space Telescope (JWST) will likely be able to detect the bulk of dwarf galaxies of halo mass \u223c10^9 M\u2299 that are believed to be primarily responsible for cosmological reionization at z \u223c6 (e.g., Stiavelli et al. 2004), especially if a significant fraction of stars in them are active Pop III stars.
2. Evolution of the Intergalactic Medium Toward z \u223c6
We use a semi-numerical method (Cen 2003) to explore the parameter space and compute the coupled thermal and reionization history with star formation of the universe. The reader is referred to \u00a74 of Cen (2003) for details.
For a simple understanding, the essential physics pertaining to reionization may be encapsulated into a single parameter, \u03b7, defined as \u03b7(z) \u2261 c\u2217 fesc Rh(z) \u03f5UV(z) mp c^2 / [\u03b1(T) C(z) n0 (1 + z)^3 h\u03bd0], (1) where z is redshift, c\u2217 the star formation efficiency (i.e., the ratio of the total amount of stars formed to the product of the halo mass and the cosmic baryon to total mass ratio), fesc the ionizing photon escape fraction, Rh(z) the total baryonic mass accretion rate of halos above the filter mass (i.e., those that are able to accrete gas) over the total baryonic mass in the universe, \u03f5UV(z) the ionizing photon production efficiency, defined to be the total emitted energy above the hydrogen Lyman limit over the total rest mass energy of forming stars, mp the proton mass, c the speed of light, \u03b1(T) the case-B recombination coefficient, C(z) the clumping factor of the recombining IGM, n0 the mean hydrogen number density at z = 0 and h\u03bd0 the hydrogen ionization potential. The numerator on the right hand side of Equation 1 is the rate of ionizing photons per baryon pumped into the IGM from stars, whereas the denominator is the destruction rate of Lyman limit photons per baryon due to case-B recombination. If \u03b7 < 1, the universe is opaque. When \u03b7 > 1 is sustained, the universe becomes fully reionized and a UV radiation background is built up with time, with its amplitude determined by the balance between UV emissivity, recombination and universal expansion. If \u03b7 goes above unity at an earlier epoch and subsequently drops below unity, a double reionization would occur (Cen 2003). The present calculations are done with the following updates of input physics.
\u2022 We adopt the standard WMAP7-normalized (Komatsu et al. 2010) parameters for the cosmological constant dominated, flat cold dark matter model: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.81, H0 = 100 h km s^\u22121 Mpc^\u22121 = 70 km s^\u22121 Mpc^\u22121 and n = 0.96.
\u2022 We replace the standard Press-Schechter formalism of the spherical collapse model with the more accurate ellipsoidal collapse model (Sheth & Tormen 2002) to compute the halo formation rate Rh.
\u2022 The latest ultra-high resolution (0.1 pc) radiation hydrodynamic simulations indicate that c\u2217fesc \u223c 0.02\u22120.03 for atomic cooling halos and drops by about two orders of magnitude for minihalos, with fesc \u223c 40\u221280% (Wise & Cen 2009). Note that c\u2217fesc and \u03f5UV(z) are degenerate. Therefore, we adopt, conservatively, c\u2217fesc = 0.03 for the calculations presented here, which enables a firm conclusion with respect to a required high value for \u03f5UV(z), as will be clear later.
\u2022 We allow for an evolving IMF with redshift, parameterized by an evolving ionizing photon production efficiency, \u03f5UV(z) = \u03f5UV,6 ((1+z)/7)^\u03b3, where \u03f5UV,6 is \u03f5UV(z) at z = 6.
\u2022 The clumping factor, C(z), of the recombining IGM at z \u223c 6 may be lower than previous estimates. We adopt the suggested range C6 = 3\u22126 for the clumping factor at z = 6 based on recent calculations (Pawlik et al. 2009).
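A minimal numerical sketch of Equation (1) is given below; the constants are standard, but the values of Rh and \u03f5UV,6 are placeholders (in the actual calculation \u03f5UV,6 and Ch are adjusted until reionization completes at the desired zri), so the printed numbers carry no physical significance.

MP_C2   = 1.503e-3   # proton rest-mass energy (erg)
H_NU0   = 2.18e-11   # 13.6 eV in erg
ALPHA_B = 2.6e-13    # case-B recombination coefficient near 1e4 K (cm^3 s^-1)
N0      = 1.9e-7     # mean hydrogen number density at z = 0 (cm^-3), approximate

def eta(z, c_star_fesc=0.03, eps_uv6=1.0e-4, gamma=0.84, Rh=1.0e-18, C=4.0):
    # Equation (1): ionizing-photon production rate per baryon divided by the
    # case-B recombination rate per baryon.  Rh (fractional baryonic accretion
    # rate onto star-forming halos, in s^-1) and eps_uv6 are placeholders.
    eps_uv = eps_uv6 * ((1.0 + z) / 7.0) ** gamma
    source = c_star_fesc * Rh * eps_uv * MP_C2
    sink = ALPHA_B * C * N0 * (1.0 + z) ** 3 * H_NU0
    return source / sink

for z in (5.8, 6.5, 7.7, 8.8):
    print(z, eta(z))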
In our semi-numerical method, the evolution of the clumping factor of the IGM is determined by one parameter, Ch, that takes into account the contribution of collapsed gas to the overall clumping factor: C(z) = \u03c6h(z) Ch + [1 \u2212 \u03c6h(z)], where \u03c6h(z) is the fraction of mass in halos above the filter mass (Gnedin 2000), which is followed self-consistently; we adjust Ch along with the other free parameter, \u03f5UV(z), until we simultaneously obtain a desired clumping factor at z = 6, C6, and a universe that completes reionization at exactly z = 5.8. We also examine a case where reionization ends at z = 6.8. Perhaps the most uncertain of the input physics on the list is \u03f5UV(z), which we now elaborate on. For a fiducial, non-evolving IMF, \u03f5UV(z) = \u03f5UV,6. For an evolving IMF, we take our cue from recent developments in the field of star formation at high redshift, in particular on the CMB-regulated star formation process (e.g., Larson 2005; Tumlinson 2007; Smith et al. 2009; Bailin et al. 2010; Schneider & Omukai 2010). Following Tumlinson (2007), the CMB-regulated Bonnor-Ebert mass of a collapsing cloud evolves as MBE = 3.2 [(1 + z)/7]^1.7 M\u2299. Specifying the lower mass cutoff (Mc) of a Salpeter IMF at z = 6 and assuming that it evolves as Mc [(1 + z)/7]^1.7, and using the Padova 0.02 Z\u2299 track (Leitherer et al. 1999) to obtain \u03f5UV,6 and \u03f5UV(z = 9), we compute \u03b3 as a function of Mc, shown in Figure 1. Depending on the exact value of Mc, \u03b3 ranges from 0.6 to 1.25 for Mc = 1\u221220 M\u2299.
Fig. 1.\u2014 The mean slope of the expected evolution \u03f5UV(z) = \u03f5UV,6 ((1+z)/7)^\u03b3 from z = 9 to z = 6 for an evolving IMF with a Salpeter slope and a varying lower mass cutoff at z = 6, shown on the x-axis.
While it is uncertain, we identify MBE = 3.2 M\u2299 at z = 6 with Mc, giving rise to \u03b3 = 0.84. In our subsequent analyses, we treat \u03b3 = 0 and \u03b3 = 0.84 as two limiting cases for the evolution of the IMF. With c\u2217fesc = 0.03, \u03b3, C6 and the completion redshift of reionization zri being fixed, we can find a unique pair of values for Ch and \u03f5UV,6. Figure 2 shows the evolutionary histories of the fraction of the un-ionized IGM, x, for six models. A feature common to all six models is that x rapidly rises toward higher redshift from zri. Analysis of the SDSS observations of QSO absorption spectra suggests a transition to a (volume-weighted) neutral fraction \u27e8fHI\u27e9V \u2265 10^\u22123 at z \u223c 6.2 from \u27e8fHI\u27e9V \u223c 10^\u22124 at z = 5.8 (Fan et al. 2006). As we will show below, the observed \u27e8fHI\u27e9V \u223c 10^\u22124 at z = 5.8 indicates that reionization is largely complete by z = 5.8. Thus, our models suggest that x is expected to reach 6\u221212% at z = 6.5, 13\u221227% at z = 7.7 and 22\u221238% at z = 8.8. It is useful to have some simple physical understanding of these results. Star formation and reionization are somewhat self-regulated, in that a higher star formation rate ionizes and heats up a larger fraction of the IGM, which would tend to suppress gas accretion for further star formation, whereas cooling processes induce more star formation (Cen 2003).
Since the response time scale for this self-regulation is on the order of the halo dynamical time, which is roughly 10% of the Hubble time, this argument suggests that any protracted period during reionization may only take place at a neutral fraction level of $x \geq 10\%$, so as to allow the star formation rate to respond dynamically to reionization-induced heating within a halo dynamical time. Once $x$ has dropped significantly below 10%, the final stage of reionization should be prompt, greatly aided by the rapid increase of $\eta(z)$ toward the end of reionization (Equation 1).
Fig. 2.— The evolution of the un-ionized fraction of the IGM, $x$, in six different reionization models with specified $\epsilon_{\rm UV}(z)$, $C_6$ and $z_{\rm ri}$.
It can be shown that $R_h \propto \exp(-\delta_c^2/2\sigma_M^2)\,(1+z)$, leading to $\eta \propto \exp(-\delta_c^2/2\sigma_M^2)\,(1+z)^{\gamma-2}\,C^{-1}(z)$, where $\sigma_M$ is the density variance on the mass scale of $M \sim 10^9\,M_\odot$ that can accrete photoheated gas and form stars (Gnedin 2000). By the end of reionization, about 1% of the total mass turns out to have collapsed in these halos; in other words, the (star-forming) halo collapse rate is on the exponential rise when the universe becomes fully ionized. Since the evolution of $C$ is much weaker than exponential (Pawlik et al. 2009), $\eta(z)$ likely surpasses unity at $z_{\rm ri}$ in an "exponential" fashion from below. As a result, it takes significantly less than $x$ times the Hubble time to reionize the last small fraction $x$ of neutral IGM. These considerations are consistent with the rapid final reionization phase seen in Figure 2. The SDSS observations strongly suggest $z_{\rm ri} = 5.8$ (Fan et al. 2006), after which the ionization state of the IGM is primarily determined, on the ionizing photon sink side, by LLS. We show here, from a somewhat different angle but in agreement with the conclusion of Fan et al. (2006), that reionization is largely complete by $z = 5.8$ (cf. Mesinger 2009). The comoving mean free path (mfp) of Lyman limit photons, $\lambda$, may be written as $\lambda^{-1} = \lambda_{\rm LL}^{-1} + \lambda_{\rm Ly\alpha}^{-1} + \lambda_{\rm neu}^{-1} + \lambda_{\rm other}^{-1}$, (2) where $\lambda_{\rm LL}$, $\lambda_{\rm Ly\alpha}$ and $\lambda_{\rm neu}$ are the comoving mfp due to LLS, the Ly$\alpha$ forest and the un-ionized neutral IGM, respectively; $\lambda_{\rm other}$ is due to possible other sinks. Physically, LLS are in less ionized, overdense regions within the reionized portion of the universe that are individually opaque to Lyman limit photons; the Ly$\alpha$ forest is dominated by low density regions within the reionized portion of the IGM that individually are only partially opaque to Lyman limit photons; the un-ionized neutral IGM is the portion of the IGM that has not been engulfed by the reionization front. We conservatively assume $\lambda_{\rm other} = \infty$. Current large-scale cosmological reionization simulations do not provide sufficiently accurate results to constrain $\lambda_{\rm LL}$ due to lack of adequate resolution. An extrapolation (Gnedin & Fan 2006) of observations at lower redshift $z = 0.4-4.7$ (Storrie-Lombardi et al. 1994) gives $\lambda_{\rm LL} \sim 22-48$ comoving Mpc/h at $z \sim 5.8$. We use $\lambda_{\rm LL} = 35$ comoving Mpc/h in our calculations.
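Equation (2) is a simple harmonic combination of path lengths, reproduced below as a hedged convenience function used in the sketch that follows the next set of equations; $\lambda_{\rm other}$ is dropped, consistent with the assumption above that it is infinite.

```python
def total_mfp(lam_LL, lam_Lya, lam_neu):
    """Equation (2): combine comoving mean free paths (all in the same units, e.g. Mpc/h).

    lambda_other is assumed infinite and therefore omitted.
    """
    return 1.0 / (1.0 / lam_LL + 1.0 / lam_Lya + 1.0 / lam_neu)

# e.g. with the adopted lambda_LL = 35 Mpc/h and illustrative values for the other two:
print(total_mfp(35.0, 200.0, 5.0))   # the total is dominated by the smallest component
```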
The following three equations are used to compute the neutral fraction of a region, $x_\delta$, at overdensity $\delta \equiv \rho_b/\langle\rho_b\rangle$, when the region is substantially ionized (i.e., $x_\delta \ll 1$): $x_\delta\, J_\nu\, \langle\sigma_H\rangle = \delta\,\alpha(T)\, n_0 (1+z)^3$, (3) where $J_\nu$ is the ionizing photon radiation intensity in units of ${\rm cm^{-2}\,s^{-1}}$ and $\langle\sigma_H\rangle = 2.6\times 10^{-18}\,{\rm cm^2}$ is the spectrum-averaged photoionization cross section for a low-$Z$ IMF ionizing spectrum at high-$z$; $\Psi = C\,\alpha(T)\, n_0 (1+z)^3$, (4) where $\Psi$ is the mean ionizing photon emissivity per baryon; and $J_\nu = \lambda\,\Psi\, n_0 (1+z)^2$. (5) Equations 3, 4 and 5, respectively, reflect the local ionization balance (between photoionization and recombination), the global ionization balance (between mean emissivity and recombination), and the relationship between mean emissivity, ionizing photon intensity and mfp. Combining (3), (4) and (5) we obtain $x_\delta = \frac{\delta}{\lambda\, C\,\langle\sigma_H\rangle\, n_0 (1+z)^3}$. (6) Thus, knowing $C$ and $\lambda$ allows one to compute $x_\delta$, which, when combined with the probability distribution function of $\delta$, ${\rm PDF}(\delta)$, can be used to compute the volume-weighted neutral fraction, $\langle f_{\rm HI}\rangle_V$: $\langle f_{\rm HI}\rangle_V = \int_0^\infty {\rm PDF}(\delta)\, x_\delta\, d\delta$. (7) We use the density distribution, ${\rm PDF}(\delta)$, from one of the radiation-hydrodynamic simulations (Trac et al. 2008) where the universe completes reionization at $z \sim 6$, to compute $\langle f_{\rm HI}\rangle_V$ at $z = 5.8$. A resolution of comoving 65 kpc/h in the simulation is adequate for resolving the Jeans scale of photo-ionized gas. The mfp due to the Ly$\alpha$ forest can be computed as $\lambda_{\rm Ly\alpha}^{-1} = \langle f_{\rm HI}\rangle_V\, \langle\sigma_H\rangle\, n_0 (1+z)^2/(1+1/e)$. (8) We use the same simulation to also compute $\lambda_{\rm neu}$, simply by computing the average distance that a random ray can travel before it hits an un-ionized cell.
Fig. 3.— Top panel shows the total comoving mean free path (mfp) $\lambda$ for two cases with $C=3$ (solid squares) and $C=6$ (stars), as well as $\lambda_{\rm LL}$ (open diamonds), $\lambda_{\rm Ly\alpha}$ (open triangles) and $\lambda_{\rm neu}$ (open circles) for the case with $C=3$, as a function of $x$ at $z=5.8$. Bottom panel shows the volume-weighted neutral fraction of the IGM, $\langle f_{\rm HI}\rangle_V$, as a function of $x$ for the two cases with $C=3$ (solid squares) and $C=6$ (stars). Also shown as the shaded region is the total range of $\langle f_{\rm HI}\rangle_V$ at $z=5.8$ based on the SDSS QSO sample (Fan et al. 2006).
We identify regions that have not been reionized and photon-heated with cells in the simulation box that have a neutral fraction greater than 0.99 and a temperature lower than $10^3$ K; results are insensitive to reasonable variations of the parameters: changing 0.99 to 0.50 or $10^3$ K to 100 K makes no visible difference in the results. Since, when scaled to $x$, the morphology of reionization does not vary strongly (e.g., Furlanetto et al. 2004), we use $\lambda_{\rm neu}(z)$, computed as a function of redshift from the simulation, as $\lambda_{\rm neu}(x)$ as a function of $x$ at $z = 5.8$. The detailed procedure to simultaneously compute $\lambda$ and $\langle f_{\rm HI}\rangle_V$ is as follows. At a given value of $x$ at $z = 5.8$, we know $\lambda_{\rm neu}(x)$ from simulations.
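To make Equations (2) and (6)-(8) concrete, here is a minimal numerical sketch of the converged calculation whose procedure is spelled out next: given $\lambda_{\rm neu}$ from a simulation, an adopted $\lambda_{\rm LL}$, a clumping factor $C$, and a density PDF, it iterates $\lambda_{\rm Ly\alpha}$ and $\langle f_{\rm HI}\rangle_V$ to convergence. The lognormal ${\rm PDF}(\delta)$ is only a stand-in for the simulated density distribution, and the comoving-to-proper conversion applied inside Equation (6) is our assumption about the intended units.

```python
import numpy as np

SIGMA_H = 2.6e-18      # spectrum-averaged photoionization cross section [cm^2]
N0      = 1.9e-7       # mean hydrogen number density at z = 0 [cm^-3]
MPC_CM  = 3.086e24     # cm per Mpc
H       = 0.7

def proper_cm(lam_comoving_mpc_h, z):
    # Assumption: Eq. (6) uses the proper mean free path.
    return lam_comoving_mpc_h * MPC_CM / H / (1.0 + z)

def volume_weighted_fHI(lam_comoving, C, z, delta, pdf):
    """Eqs. (6)-(7): x_delta = delta / (lam * C * sigma * n0 * (1+z)^3), averaged over PDF(delta)."""
    x_delta = delta / (proper_cm(lam_comoving, z) * C * SIGMA_H * N0 * (1.0 + z) ** 3)
    return np.trapz(pdf * x_delta, delta)

def lam_lya_comoving(fHI, z):
    """Eq. (8): Lya-forest mean free path, returned in comoving Mpc/h."""
    lam_cm = (1.0 + 1.0 / np.e) / (fHI * SIGMA_H * N0 * (1.0 + z) ** 2)  # comoving cm
    return lam_cm * H / MPC_CM

def converge(lam_neu, lam_LL, C, z, delta, pdf, tol=1e-6, itmax=200):
    """Iterate Eqs. (2) and (6)-(8) until lambda_Lya and <f_HI>_V stop changing."""
    lam_lya = 1.0e4                       # initial guess: nearly transparent Lya forest
    for _ in range(itmax):
        lam_tot = 1.0 / (1.0 / lam_LL + 1.0 / lam_lya + 1.0 / lam_neu)   # Eq. (2)
        fHI = volume_weighted_fHI(lam_tot, C, z, delta, pdf)
        lam_lya_new = lam_lya_comoving(fHI, z)
        if abs(lam_lya_new - lam_lya) < tol * lam_lya:
            break
        lam_lya = lam_lya_new
    return fHI, lam_tot

# Stand-in lognormal PDF(delta) in place of the simulated density distribution:
delta = np.linspace(1e-3, 50.0, 4000)
pdf = np.exp(-0.5 * (np.log(delta) + 0.5) ** 2) / (delta * np.sqrt(2.0 * np.pi))
print(converge(lam_neu=50.0, lam_LL=35.0, C=3.0, z=5.8, delta=delta, pdf=pdf))
```

The fixed-point loop converges quickly because a larger $\langle f_{\rm HI}\rangle_V$ shortens $\lambda_{\rm Ly\alpha}$, which in turn lowers $\lambda$ and raises $\langle f_{\rm HI}\rangle_V$ only mildly on the next pass.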
Combining \u03bbneu(x) with an initial guess for \u03bbLya(x) and the adopted \u03bbLL gives \u03bb (Equation 1). With \u03bb and an assumed C we compute \u27e8fHI\u27e9V (Equations 6,7), which in turn yields a new value for \u03bbLya(x) (Equation 8). This procedure is iterated until we have a converged pair of \u03bbLya and \u27e8fHI\u27e9V . The results are shown in Figure 3, where the top panel shows the total comoving mfp for two cases with C = 3 and C = 6 as well as various components for the case with C = 3, and the bottom panel shows \u27e8fHI\u27e9V for two cases with C = 3 and C = 6 at z = 5.8 as well as the observationally inferred range at z = 5.8 (Fan et al. 2006). As it turns out, we see that the total \u03bb is primarily determined by \f\u2013 8 \u2013 \u03bbneu at x \u22650.01 and \u03bbLL at x \u22640.005, and Ly\u03b1 forest has secondary importance at all x. A comparison between the computed results and observations indicates that the un-ionized fraction x does not exceed 0.1 \u22121% at z = 5.8 and reionization is complete or largely complete by z = 5.8. In combination with our previous \ufb01nding of rapid reionization near zri, it suggests that zri = 5.8 or very near it. 0.1 1 5 15 30 50 100 1 2 3 4 5 6 7 8 9 10 \u22126 \u22125.5 \u22125 \u22124.5 \u22124 \u22123.5 \u22123 C6 log \u03b5 UV,6 Mc (M\u2299) \u03b5 UV\u221d (1+z)0,z ri=5.8 \u03b5 UV\u221d (1+z)0.84,z ri=5.8 \u03b5 UV\u221d (1+z)0,z ri=6.8 Salpeter (0.02Zsun) Pop III Fig. 4.\u2014 show the required ionizing photon production e\ufb03ciency at z = 6, \u03f5UV,6, as a function of IGM clumping factor at z = 6, C6, for several models. Also shown as stars are the expected \u03f5UV for Salpeter IMF (with 0.02 Z\u2299metallicity) (Leitherer et al. 1999) with the lower mass cuto\ufb00Mc indicated by the top x-axis. The diamonds are \u03f5UV for Pop III metal-free stars, again, with the lower mass cuto\ufb00Mc indicated by the top x-axis (Schaerer 2002). Finally, in Figure 4 we show the required ionizing photon production e\ufb03ciency at z = 6, \u03f5UV,6, as a function of C6, for several models. Several expected trends are noted. First, a higher C6 requires a higher \u03f56. Second, an earlier reionization requires a higher \u03f56. Third, a rising \u03f5UV with redshift lessens the required \u03f56 fractionally. What is most striking is that stars with the standard metal-enriched IMF and Mc = 1 M\u2299fall short of providing the required ionizing photons, by a factor of 10 \u221220 at z \u223c6. Having Mc \u223c5 M\u2299would help reduce the de\ufb01cit to a factor of 3 \u22126. Only with Mc \u223c30 M\u2299and C6 = 3, one is barely able to meet the requirement to reionize the universe at z \u223c6 by Population II stars. But such an extreme scenario with Mc \u226530 M\u2299may be disfavored by the existence of old stars in \f\u2013 9 \u2013 observed high-z galaxies (e.g., Mobasher et al. 2005) that need be less massive than \u223c10 M\u2299 to be long-lived. However, we note that Pop III metal-free stars (diamonds in Figure 4), thought to be more massive than 30 M\u2299(e.g., Abel et al. 2002; Bromm et al. 2002; McKee & Tan 2008), could provide ample ionizing photons. Unfortunately, normally, Pop III stars would not be expected to form at z \u223c6, had some earlier supernovae uniformly enriched the intergalactic medium. With gaseous low-temperature coolants, it is believed that the critical metallicity for transition from Pop III to Pop II IMF is Zcrit \u223c10\u22123.5 Z\u2299(e.g., Bromm et al. 
2001; Bromm & Loeb 2003). If dust is formed in the Pop III.1 supernova ejecta, Schneider et al. (2006) argue that dust cooling may signi\ufb01cantly lower the critical transition metallicity to as low as Zcrit \u223c10\u22126 (e.f., Cherchne\ufb00& Dwek 2010). If a fraction 10\u22124 of baryons forms into Pop III stars and their supernovae uniformly enrich the IGM, the expected metallicity of the IGM will likely exceed Z \u223c10\u22123.5 Z\u2299(e.g., Fang & Cen 2004). Since a fraction of \u226510\u22124 of baryons needs to form into Pop III stars to reionize the universe, therefore, in the case of uniform IGM enrichment, the contribution of Pop III stars to ionizing photon budget at z \u223c6 is expected to have become negligible. In addition, a very small amount of metals (Z \u226410\u22123 Z\u2299) would change the internal dynamics of massive stars (core temperature, size, e\ufb00ective surface temperature, etc) and render them much less e\ufb03cient UV producers and notably di\ufb00erent from Pop III massive stars (e.g., Hirschi et al. 2008), as already hinted in Figure 4 between stars (Z = 0.02 Z\u2299) and diamonds (Pop III) at Mc \u223c30 M\u2299. We suggest that, if the metal enrichment process of gas, including IGM and gas in collapsed minihalos and other galaxies, is highly inhomogeneous, then it is possible that a small fraction of star-forming gas may have remained primordial to allow for Pop III star formation at z \u223c6. A signi\ufb01cant amount of gas in the central regions of non-starforming galaxies (e.g., Wyithe & Cen 2007; Cen & Riquelme 2008) as well as a fraction of IGM that has not been swept by galactic winds emanating from star-forming galaxies could remain uncontaminated. Cosmological simulations at lower redshift (z = 0\u22126) suggest that metal enrichment process of the IGM is indeed extremely inhomogeneous, leaving signi\ufb01cant pockets of metal-free gas even at z = 0 (e.g., Cen & Ostriker 1999; Aguirre et al. 2001; Oppenheimer & Dav\u00b4 e 2006; Cen & Chisari 2010). The common assumption is that earlier generations of stars not resolved in these simulations would have put in a metallicity-\ufb02oor in all regions. But this needs not be the case. Observationally, while the majority of local star formation has metallicity close to solar, relatively low-metallicity (1/30 of solar) star formation does occur occasionally (e.g., Izotov & Thuan 1999) and some of the observed local supernovae may be pair-instability supernovae (e.g., Smith et al. 2007; Gal-Yam et al. 2009) that may be due to metal-free progenitors (c.f., Smith et al. 2007; Langer et al. 2007; Woosley et al. 2007). At redshift z = 2 \u22123 the low density Ly\u03b1 forest, regions of density around and less than the global mean, appears to have not been enriched to a detectable level (\u226410\u22123.5 Z\u2299) (e.g., Lu et al. 1998). Therefore, it seems plausible that an increasing fraction \f\u2013 10 \u2013 of star-forming gas toward high redshift may be pristine, due to a combination of ine\ufb03cient and non-uniform mixing and a decreasingly amount of metals having been injected. 
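As a back-of-the-envelope check of the uniform-enrichment argument above: if a fraction $f$ of all baryons passes through Pop III stars with a net metal yield $y$ (metal mass returned per unit stellar mass), the mean IGM metallicity is roughly $f\,y$. The yield and solar abundance below are illustrative assumptions, not values taken from this paper.

```python
import math

# Back-of-the-envelope: mean IGM metallicity from uniform Pop III enrichment.
f_popIII = 1.0e-4   # fraction of baryons formed into Pop III stars (value quoted in the text)
yield_Z  = 0.1      # metal mass returned per unit stellar mass (assumed, order of magnitude)
Z_SUN    = 0.02     # solar metallicity by mass (assumed)

Z_igm = f_popIII * yield_Z   # mean metal mass fraction mixed into the IGM
print(f"Z_IGM ~ {Z_igm:.1e}, i.e. 10^{math.log10(Z_igm / Z_SUN):+.1f} Z_sun")
```

With these numbers the uniformly mixed metallicity lands near $10^{-3.3}\,Z_\odot$, above the critical transition metallicity of $\sim 10^{-3.5}\,Z_\odot$ quoted above, which is the sense of the statement in the text.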
From Figure 4 we see that what is minimally required in order to have enough ionizing photons at z \u223c6 is that about 2-4% of stars forming at z \u223c6 are Pop III stars and the remainder normal Pop II metal-enriched stars with a Salpeter-like IMF or other forms; the lower mass cuto\ufb00for the latter is unconstrained but Mc \u223c3 M\u2299or so is perhaps physically motivated and fully in line with other evidence that hints on an evolving IMF from redshift zero (e.g., van Dokkum 2008; Dav\u00b4 e 2008). With 2-4% being Pop III stars, Pop III stars\u2019 contribution to ionizing photons and FUV are dominant over the remaining normal stars, which would give rise to an \u201capparent\u201d, very low metallicity, top-heavy IMF for these high redshift galaxies. Interestingly, galaxies with such required properties a dust-free, very low metallicity, top-heavy IMF with a very high ionizing photon escape fraction of 40 \u221280% may have already been detected in the Hubble Ultra Deep Field (UDF) at z \u223c7 \u22128 (e.g., Bouwens et al. 2010). 3." + }, + { + "url": "http://arxiv.org/abs/1005.1451v2", + "title": "Star Formation Feedback and Metal Enrichment History Of The Intergalactic Medium", + "abstract": "Using hydrodynamic simulations we compute the metal enrichment history of the\nintergalactic medium (IGM). We show that galactic superwind (GSW) feedback can\ntransport metals to the IGM and that the properties of simulated metal\nabsorbers match observations. The distance of influence of GSW is typically\nlimited to >0.5Mpc and within regions of overdensity >10. Most CIV and OVI\nabsorbers are located within shocked regions of elevated temperature\n(T>2x10^4K), overdensity (>10), and metallicity ([-2.5,-0.5]). OVI absorbers\nhave typically higher metallicity, lower density and higher temperature than\nCIV absorbers. For OVI absorbers collisional ionization dominates over the\nentire redshift range z=0-6, whereas for CIV absorbers the transition occurs at\nmoderate redshift z~3 from collisionally dominated to photoionization\ndominated. We find that the observed column density distributions for CIV and\nOVI in the range log N cm^2=12-15 are reasonably reproduced by the simulations.\nThe evolution of mass densities contained in CIV and OVI lines, Omega_CIV and\nOmega_OVI, is also in good agreement with observations, which shows a near\nconstancy at low redshifts and an exponential drop beyond redshift z=3-4. For\nboth CIV and OVI, most absorbers are transient and the amount of metals probed\nby CIV and OVI lines of column log N cm^2=12-15 is only ~2% of total metal\ndensity at any epoch. While gravitational shocks from large-scale structure\nformation dominate the energy budget (80-90%) for turning about 50% of IGM to\nthe warm-hot intergalactic medium (WHIM) by z=0, GSW feedback shocks are\nenergetically dominant over gravitational shocks at z > 1-2. Most of the\nso-called \"missing metals\" at z=2-3 are hidden in a warm-hot (T=10^{4.5-7}K)\ngaseous phase, heated up by GSW feedback shocks. 
Their mass distribution is\nbroadly peaked at $\\delta=1-10$ in the IGM, outside virialized halos.", + "authors": "Renyue Cen, Nora Elisa Chisari", + "published": "2010-05-10", + "updated": "2011-03-19", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO" + ], + "main_content": "Introduction One of the pillars of the Big Bang theory is its successful prediction of a primordial baryonic matter composition, made up of nearly one hundred percent hydrogen and helium with a trace amount of a few other light elements (e.g., Schramm & Turner 1998; Burles et al. 2001). The metals, nucleosynthesized in stars later, are found almost everywhere in the observable IGM, ranging from the metal-rich intracluster medium (e.g., Mushotzky & Loewenstein 1997) to moderately enriched damped Lyman systems (e.g., Pettini et al. 1997; Prochaska et al. 2003) to low metallicity Lyman alpha clouds (e.g., Schaye et al. 2003). When and where were the metals made and why are they distributed as observed? We address this fundamental question in the context of the standard cold dark matter cosmological model (Komatsu et al. 2009) using latest simulations. Our previous simulations (Cen & Ostriker 1999a; Cen et al. 2005) provided some of the earlier attempts to address this question with measured successes. In this investigation we use substantially better simulations to provide signi\ufb01cantly more constrained treatment of the feedback processes from star formation (SF) that drive energy and metals from supernovae into the IGM through galactic winds (e.g., Cen & Ostriker 1999a; Aguirre et al. 2001; Theuns et al. 2002b; Adelberger et al. 2003; Springel & Hernquist 2003). Metal-line absorption systems in QSO spectra are the primary probes of the metal enrichment of the IGM as well as in the vicinities of galaxies (e.g., Bahcall & Spitzer 1969). The most widely used metal lines include Mg II \u03bb\u03bb2796, 2803 doublet (e.g., Steidel & Sargent 1992), C IV \u03bb\u03bb1548, 1550 doublet (e.g., Young et al. 1982), and O VI \u03bb\u03bb1032, 1038 doublet (e.g., Simcoe et al. 2002). We here focus on the C IV and O VI absorption lines and the global evolution of metals in the IGM. We will limit our current investigation to the observationally accessible redshift range of z = 0\u22126, which in part is theoretically motivated simply because the theoretical uncertainties involving still earlier star formation are much larger. At z = 0 the O VI line (together with C VII and O VIII lines) provide vital information on the missing baryons (e.g., Mathur et al. 2003; Tripp et al. 2008; Danforth & Shull 2008; Nicastro et al. 2009), predicted to exist in a Warm-Hot Intergalactic Medium (WHIM) (Cen & Ostriker 1999a; Dav\u00b4 e et al. 2001). For a well understood sample of QSO absorption lines, one could derive the cosmological \f\u2013 3 \u2013 density contained in them (e.g., Cooksey et al. 2009). Early investigations indicate that \u2126CIV remains approximately constant in the redshift interval z \u223c1.5 \u22124 (Songaila 2001, 2005; Boksenberg et al. 2003). There have been recent e\ufb00orts to extend the measurements of \u2126CIV to z < 1.5 (Cooksey et al. 2009) and to z > 5 (Simcoe 2006; Ryan-Weber et al. 2006, 2009; D\u2019Odorico et al. 2009; Becker et al. 2009). Observations in these redshift ranges have been di\ufb03cult to carry out because C IV transition moves to the UV at low redshift and to the IR band at high redshift. D\u2019Odorico et al. 
(2009) \ufb01nd evidence of a rise in the C IV mass density for z < 2.5. Simcoe (2006) and Ryan-Weber et al. (2006) found evidence of C IV density at z \u223c6 being consistent with estimations at z \u223c2 \u22124.5. More recently, however, Becker et al. (2009) set upper limits for \u2126CIV at z \u223c5.3 and Ryan-Weber et al. (2009) observe a decline in intergalactic C IV approaching z = 6, which we will show are in good agreement with our simulations. The ionization potential of O VI and the relatively high oxygen abundance are very favorable for production of O VI absorbers in the IGM (e.g., Norris et al. 1983; Cha\ufb00ee et al. 1986). The rest wavelength of OVI (1032, 1037\u02da A) places it within the Ly-\u03b1 forest, which makes the identi\ufb01cations of these lines more complicated, although being a doublet helps signi\ufb01cantly. At z \u22652, however, O VI absorption can probe the metal content of the IGM in ways complementary to what is provided by C IV lines. For example, the O VI lines can probe IGM that is hotter than that probed by the C IV lines and can reach lower densities thank to higher abundance. There are now several observational studies at redshifts z = 2 \u22123 that describe the properties of O VI absorbers and attempt to estimate the O VI mass density, \u2126OVI (Carswell et al. 2002; Bergeron et al. 2002; Simcoe et al. 2004; Simcoe 2006; Frank et al. 2008; Danforth & Shull 2008; Tripp et al. 2008; Thom & Chen 2008b). At z \u223c2 \u22123 there is a missing metals problem: only 10-20% of the metals produced by all stars formed earlier have been identi\ufb01ed in stars of Lyman break galaxies (LBG), in damped Lyman alpha systems (DLAs) and Ly\u03b1 forest. The vast majority of the produced metals appear to be missing (e.g., Pettini 1999). The missing metals could be in hot gaseous halos of star-forming galaxies (Pettini 1999; Ferrara et al. 2005). We will show that most of the missing metals are in a warm-hot (T = 104.5\u22127K) but di\ufb00use IGM at z = 2 \u22123 of overdensities of \u223c10 that are outside of halos. The outline of this paper is as follows. In \u00a72 we detail our simulations and the procedure of normalizing the uncertain feedback processes from star formation. Results on the metal enrichment of the IGM are presented in \u00a73. In \u00a73.1 we give a full description of the properties of the C IV and O VI lines at z = 0 \u22126, followed \u00a73.2 discussing C IV and O VI absorbers as metals reservoirs. We devote \u00a73.3 to a general discussion of global distribution of metals, addressing several speci\ufb01c topics, including the metallicity of the moderate overdense regions at moderate redshift, the missing metals at z \u223c3. Conclusions are given in \u00a74. \f\u2013 4 \u2013 2. Simulations 2.1. The Hydrocode Numerical methods of the cosmological hydrodynamic code and input physical ingredients have been described in detail in an earlier paper (Cen et al. 2005). The simulation integrates \ufb01ve sets of equations simultaneously: the Euler equations for gas dynamics in comoving coordinates, time dependent rate equations for hydrogen and helium species, the Newtonian equations of motion for dynamics of collisionless (dark matter) particles, the Poisson equation for the gravitational potential \ufb01eld and the equation governing the evolution of the intergalactic ionizing radiation \ufb01eld, all in cosmological comoving coordinates. 
The gasdynamic equations are solved using a new, improved hydrodynamics code, \u201cCOSMO\u201d (Li et al. 2008) on a uniform mesh. The rate equations are treated using sub-cycles within a hydrodynamic time step due to the much shorter ionization time-scales (i.e., the rate equations are very \u201csti\ufb00\u201d). Dark matter particles are advanced in time using the standard particlemesh (PM) with a leapfrog integrator. The Poisson equation is solved using the Fast Fourier Transform (FFT) method on the uniform mesh. The initial conditions adopted are those for Gaussian processes with the phases of the di\ufb00erent waves being random and uncorrelated. The initial condition is generated by the COSMICS software package kindly provided by E. Bertschinger (2001). Cooling and heating processes due to all the principal line and continuum atomic processes for a plasma of primordial composition with additional metals ejected from star formation. Compton cooling due to the microwave background radiation \ufb01eld and Compton cooling/heating due to the X-ray and high energy background are computed. The cooling/heating due to metals is computed using a code based on the Raymond-Smith code assuming ionization equilibrium that takes into account the presence of a time-dependent UV/X-ray radiation background, which we have included in our simulations since Cen et al. (1995) and has now been performed by other investigators (e.g., Shen et al. 2010). We follow star formation using a well de\ufb01ned, Schmidt-Kennicutt-law-like prescription used by us in our previous work and similar to that of other investigators (e.g., Katz et al. 1996; Steinmetz 1996; Gnedin & Ostriker 1997). A stellar particle of mass m\u2217= c\u2217mgas\u2206t/t\u2217 is created (the same amount is removed from the gas mass in the cell), if the gas in a cell at any time meets the following three conditions simultaneously: (i) contracting \ufb02ow, (ii) cooling time less than dynamic time, and (iii) Jeans unstable, where \u2206t is the time step, t\u2217= max(tdyn, 107yrs), tdyn = p 3\u03c0/(32G\u03c1tot) is the dynamical time of the cell, mgas is the baryonic gas mass in the cell and c\u2217= 0.03 is star formation e\ufb03ciency (e.g., Krumholz & Tan 2007). Each stellar particle is given a number of other attributes at birth, including formation time ti, initial gas metallicity and the free-fall time in the birth cell tdyn. The typical mass of a stellar particle in the simulation is about 106M\u2299; in other words, these \f\u2013 5 \u2013 stellar particles are like coeval globular clusters. All variations of this commonly adopted star-formation algorithm essentially achieve the same goal: in any region where gas density exceeds the stellar density, gas is transformed to stars on a timescale longer than the local dynamical time and shorter than the Hubble time. Since these two time scales are widely separated, the e\ufb00ects, on the longer time scale, of changing the dimensionless numbers (here c\u2217) are minimal. Since nature does not provide us with examples of systems which violate this condition (systems which persist over many dynamical and cooling time scales in having more gas than stars), this commonly adopted algorithm should be adequate even though our understanding of star formation remains crude. Stellar particles are treated dynamically as collisionless particles subsequent to their birth. 
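A hedged sketch of the star-particle creation rule just described follows. Only the formulae given in the text ($m_* = c_*\,m_{\rm gas}\,\Delta t/t_*$, $t_* = \max(t_{\rm dyn}, 10^7\,{\rm yr})$, $t_{\rm dyn} = \sqrt{3\pi/(32 G \rho_{\rm tot})}$, $c_* = 0.03$) are taken from the paper; the cell fields and the convergence/cooling checks are hypothetical placeholders for quantities the simulation tracks.

```python
import math
from collections import namedtuple

G      = 6.674e-8    # gravitational constant [cgs]
YR     = 3.156e7     # seconds per year
C_STAR = 0.03        # star formation efficiency adopted in the text

def dynamical_time(rho_tot):
    """t_dyn = sqrt(3*pi / (32*G*rho_tot)), with rho_tot in g/cm^3."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho_tot))

def stellar_particle_mass(cell, dt):
    """Stellar mass to spawn in this cell during dt (0 if the three conditions fail).

    `cell` is assumed to expose rho_tot, m_gas, div_v, t_cool and jeans_unstable;
    these field names are illustrative, not the simulation's actual data layout.
    """
    t_dyn  = dynamical_time(cell.rho_tot)
    t_star = max(t_dyn, 1.0e7 * YR)
    contracting  = cell.div_v < 0.0           # (i) contracting flow
    fast_cooling = cell.t_cool < t_dyn        # (ii) cooling time < dynamical time
    if contracting and fast_cooling and cell.jeans_unstable:   # (iii) Jeans unstable
        m_star = C_STAR * cell.m_gas * dt / t_star
        return min(m_star, cell.m_gas)        # never remove more gas than the cell holds
    return 0.0

# Illustrative call on a toy cell:
Cell = namedtuple("Cell", "rho_tot m_gas div_v t_cool jeans_unstable")
c = Cell(rho_tot=1e-24, m_gas=2.6e5 * 2.0e33, div_v=-1.0, t_cool=1e14, jeans_unstable=True)
print(stellar_particle_mass(c, dt=1e6 * YR) / 2.0e33, "Msun formed")
```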
Feedback from star formation, the e\ufb00ects of the cumulative SN explosions known as Galactic Superwinds (GSW) and metal-enriched gas, will be described in more detail in the next subsection. While the code can self-consistently compute the ionizing UV-X-ray background using sources and sinks in the simulation, here we use the Haardt & Madau (1996) spectra for all runs such that we do not introduce additional variations due to otherwise varying UV backgrounds in the di\ufb00erent runs. However, a local optical depth approximation is adopted to crudely mimic the local shielding e\ufb00ects: each cubic cell is \ufb02agged with six hydrogen \u201coptical depths\u201d on the six faces, each equal to the product of neutral hydrogen density, hydrogen ionization cross section and scale height, and the appropriate mean from the six values is then calculated; analogous ones are computed for neutral helium and singly-ionized helium. In computing the local ionization and cooling/heating balance for each cell, self-shielding is taken into account to attenuate the external HM ionizing radiation \ufb01eld. Both these two shielding e\ufb00ects are essential in order to obtain self-consistent radiation background evolution and neutral hydrogen evolution. Table 1. Simulations Run Box (Mpc/h) Res (kpc/h) DM ( M\u2299) eGSW N 50 24 1.1 \u00d7 107 0 L 50 24 1.1 \u00d7 107 3 \u00d7 10\u22126 M 50 24 1.1 \u00d7 107 7 \u00d7 10\u22126 H 50 24 1.1 \u00d7 107 1 \u00d7 10\u22125 MR 50 48 8.8 \u00d7 107 7 \u00d7 10\u22126 \f\u2013 6 \u2013 2.2. Cosmological and Physical Parameters of the Simulations We have run a set of four new simulations of a WMAP5-normalized (Komatsu et al. 2009) cold dark matter model with a cosmological constant: \u2126M = 0.28, \u2126b = 0.046, \u2126\u039b = 0.72, \u03c38 = 0.82, H0 = 100hkms\u22121Mpc\u22121 = 70kms\u22121Mpc\u22121 and n = 0.96. The adopted box size is 50Mpc/h comoving and with 20483 cells of size 24kpc/h comoving; the dark matter particle mass and mean baryonic mass in a cell are equal to 1.1 \u00d7 107 M\u2299and 2.6 \u00d7 105 M\u2299, respectively. Some of the key parameters for the four simulations are summarized in Table 1. The only di\ufb00erence among the four main runs is the strength of the GSW feedback: (N) no GSW, (L) low GSW feedback, (M) moderate GSW feedback and (H) high GSW feedback. In the next subsection we will determine which feedback strength produces the star formation rate history that matches observations. We run an additional lower resolution simulation with 1024 cells a side, each of size 48kpc/h (run \u201cMR\u201d) to test convergence of results. When computing results using run \u201cMR\u201d, we multiply the metallicity of each cell in run \u201cMR\u201d by a constant factor such that its mean metallicity at any epoch match that of run \u201cM\u201d. We obtain an additional set of results by changing the amplitude of the UV background, run \u201cM2\u201d, where it is reduced to one half of that in \u201cM\u201d. 2.3. Mechanical Feedback from Star Formation It is well known that without impeding processes to counter the cooling and subsequent condensation of baryons, the stellar mass in the universe would be overproduced \u2013 the \u201covercooling\u201d problem (e.g., White & Frenk 1991; Cole 1991; Blanchard et al. 1992). Feedback from star formation is believed to play the essential role to prevent gas from overcooling. The key question is: Where does the feedback from SF throttle gas cooling and condensation? 
We consider three independent lines of evidence to address this question. First, while metals from supernovae ejecta can be accelerated to velocities exceeding the escape velocity, the whole interstellar gas is very di\ufb03cult to be blown away, even in starburst galaxies, based on simulations (e.g., Mac Low & Ferrara 1999), although their adopted feedback strength may be on the low side. Second, observed normal galaxies in the local universe tend to be relatively gas poor (e.g., Zhang et al. 2009). Their progeniors or their building blocks were presumably gas rich in the past when most of the star formation occurred. This implies that, once gas has collapsed, it would turn into stars on a time scale that is shorter than the Hubble time. Finally, if gas were able to collapse inside halos without hinderance, the observed soft X-ray background would be overproduced by more than an order of magnitude (Pen 1999; Wu et al. 2001). These three lines of evidence together suggest that feedback from star formation likely exerts its e\ufb00ect outside normal stellar disks, probably in regions that are tens to hundreds of kiloparsecs from halo centers, before too much gas has either \f\u2013 7 \u2013 been collected inside the virial radius or cooled and condensed onto the disk. It is currently di\ufb03cult to fully model GSW in a cosmological simulation, although signi\ufb01cant progress has been made to provide a better treatment of the multi-phase interstellar medium (e.g., Yepes et al. 1997; Springel & Hernquist 2003). It is likely that a combination of both high resolution and detailed multi-phase medium treatment (perhaps with the inclusion of magnetic \ufb01elds and cosmic rays) is a requisite for reproducing observations. Here we do not attempt to model the causes and generation of GSW, but, instead, to simply assume an input level of mass, energy and metals, and carefully compute the consequences of GSW on the surrounding medium and on subsequent galaxy formation. Our simulations have a resolution of 24kpc/h comoving (see Table 1), which may provide an adequate resolution for this purpose, given the aforementioned lines of evidence that feedback from star formation likely exerts most of its e\ufb00ects in regions on scales larger than tens of kiloparsecs. In our simulations, GSW energy and ejected metals are distributed into 27 local gas cells centered at the stellar particle in question, weighted by the speci\ufb01c volume of each cell (Cen et al. 2005). The temporal release of the feedback at time t has the following form, all being proportional to the local star formation rate: f(t, ti, tdyn) \u2261(1/tdyn)[(t \u2212 ti)/tdyn] exp[\u2212(t \u2212ti)/tdyn]. Within a time step dt, the released GSW energy and mass to the IGM from stars are eGSWf(t, ti, tdyn)m\u2217c2dt and emassf(t, ti, tdyn)m\u2217dt, respectively. We \ufb01x emass = 0.25, i.e., 25% of the stellar mass is recycled with the ejecta metallicity of 5 Z\u2299. Metals, collectively having the observed solar abundance pattern, are followed as a separate hydro variable (analogous to the total gas density or nuetral hydrogen, HeI density, HeII density) with the same hydrocode. We do not introduce any additional \u201cdi\ufb00usion\u201d process for the metals. We note that cooling process is never turned o\ufb00, before or after the deposition of thermal energy, and hydrodynamic coupling between ejected baryons and surrounding gas is not turned o\ufb00either, a departure from some of the previous simulations (e.g., Theuns et al. 
2002a; Aguirre et al. 2005; Oppenheimer & Dav\u00b4 e 2006; Dalla Vecchia & Schaye 2008; Shen et al. 2010). This is physically made possible in part due to a deposition of energy at scales that are comparable or larger than the Sedov radius in our current simulations, thanks to our limited spatial resolution. The GSW strength is therefore controlled by one single adjustable parameter, eGSW. We normalize eGSW by the requirement that the computed star formation rate (SFR) history matches, as closely as possible, the observations over the redshift range z = 0 to z = 6 where comparisons can be made. Figure 1 shows the SFR history for the three runs with non-zero eGSW, (L,M,H). What is immediately evident is that the mechanical feedback strength from star formation has a dramatic e\ufb00ect on the overall SFR history, especially at low redshift (z \u22643). At the resolution of the simulation, run \u201cM\u201d provides the best and excellent match to observations, where run \u201cL\u201d and \u201cH\u201d, respectively, overand under-estimate the SFR at z < 2. At the time of this writing we prefer to avoid introducing additional ad \f\u2013 8 \u2013 0 1 2 3 4 5 6 7 \u22123 \u22122 \u22121 0 z log SFR (Msun yr\u22121 Mpc\u22123) Run L Run M Run H MR Fig. 1.\u2014 Star formation rate density as a function of redshift for three models with di\ufb00ering feedback coe\ufb03cients eGSW = 7 \u00d7 10\u22126 (run \u201cM\u201d, thick solid curve), eGSW = 1 \u00d7 10\u22125 (run \u201cH\u201d, dot-dashed curve), eGSW = 3\u00d710\u22126 (run \u201cL\u201d, dashed curve), and run \u201cM2\u201d (thin solid curve), compared with observational data taken from (from low to high redshift): Heavens et al. (2004, 3 asterisks at z \u223c0), Nakamura et al. (2004, open inverted triangle at z = 0), Lilly et al. (1996, open circles), Norman et al. (2004, \ufb01lled triangles), Cowie et al. (1999, open diamonds), Gabasch et al. (2004, open squares), Reddy et al. (2005, cross at z = 2), Barger et al. (2000, open stars at z = 2 and 4.5), Steidel et al. (1999, \ufb01lled diamonds at z = 3, 4), Ouchi et al. (2004, \ufb01lled squares at z = 4, 4.7), Giavalisco et al. (2004, open triangles at z = 3\u22126), and Bouwens et al. (2005, \ufb01lled inverted triangle at z = 6). The data are converted to the values with the Chabrier IMF and common values are assumed for dust extinction for the UV data. hoc physics to remedy this and are instead content with the ballpark agreement at z > 3 between simulations and observations, given the large uncertainties in the observational data as evidenced by the large dispersion among di\ufb00erent observations. At redshift zero we \ufb01nd that the stellar densities in the three models (L,M,H) are \u2126\u2217= (0.011, 0.0048, 0.0030), which should be compared to the observed value of \u2126\u2217,obs = 0.0041 \u00b1 0.0006 (Cole et al. 2001). Our experiments indicate that, had we set eGSW = 0, the amount of stellar density \u2126\u2217at z = 0 would exceed 0.015, in serious disagreement with observations. In this respect model \u201cM\u201d also agrees better with observations. Our \ufb01ndings are in agreement with Springel & Hernquist (2003) and Oppenheimer & Dav\u00b4 e (2006) in that star formation rate history depends sensitively on the stellar feedback, but in disagreement with Shen et al. (2010) who \ufb01nd otherwise. All the subsequent results presented are based on run \u201cM\u201d. 
There is some indication that a model between \u201cM\u201d and \u201cL\u201d might provide a better match to the observations at low redshift (z < 1) if the compilation of Hopkins et al. (2006) is used. But we note that such a model may run into a worse agreement with observations with respect to \u2126\u2217at z = 0. Currently, it is di\ufb03cult to reconcile the observations of star formation rate \f\u2013 9 \u2013 history and \u2126\u2217at z = 0. One might appeal to an evolving IMF to provide an attractive reconcilation between the possible discrepancy (Dav\u00b4 e 2008). This is well beyond the scope of this investigation. In any case, a slight varied simulation, say, using an eGSW value between the \u201cM\u201d and \u201cL\u201d would give qualitatively comparable results. In order to test for numerical convergence we run one additional simulation, \u201cMR\u201d, which has the same parameters as run \u201cM\u201d but have half the resolution. To test the dependence of results on the extragalactic UV background we run our software pipeline through run \u201cM\u201d but with halving the amplitude of the UV background, called run \u201cM2\u201d. Fig. 2.\u2014 shows the mean \ufb02ux for Ly\u03b1 forest as a function of redshift. Our computed results are shown in asterisks. Diamonds correspond to mean transmitted \ufb02ux values for each quasar in the sample of McDonald et al. (2000), and triangles correspond to the mean \ufb02ux for the same observational data but binned in redshift intervals: [3.39, 4.43], [2.67, 3.39] and [2.09, 2.67]. It is prudent to make a self-consistency check for the value of eGSW that is empirically determined. The total amount of explosion kinetic energy from Type II supernovae with a Chabrier IMF translates to eGSW = 6.6 \u00d7 10\u22126. Observations of local sturburst galaxies indicate that nearly all of the star formation produced kinetic energy (due to Type II supernovae) is used to power GSW (e.g., Heckman 2001). Given the uncertainties on the evolution of IMF with redshift the fact that newly discovered prompt Type I supernovae contribute a comparable amount of energy compared to Type II supernovae, we argue that our adopted \u201cbest\u201d value of eGSW = 7 \u00d7 10\u22126 is consistent with observations and entirely within physical plausibility. \f\u2013 10 \u2013 2.4. Mock Spectra and Identi\ufb01cation of Absorption Lines The photoionization code CLOUDY (Ferland et al. 1998) is used post-simulation to compute the abundance of C IV and O VI , adopting the UV background calculated by Haardt & Madau (1996). For Ly\u03b1 absorption lines we use the computed neutral hydrogen density distribution directly from the simulation that was already using the Haardt & Madau (1996) UV background in the rate equations for hydrogen and helium species. We have checked that the radiation \ufb01eld is consistent with observations by comparing the simulated mean transmitted \ufb02ux as a function of redshift with observations. Figure 2 shows the mean transmitted Ly\u03b1 \ufb02ux as a function of redshift from the simulation in comparison with observations. We see the Ly\u03b1 forest produced in the LCDM model using the adopted UV background provides an adequate match to observations over most of the redshift range compared, z = 0 \u22124. At z \u22734, our results do not seem to coincide with observations. 
We attribute this to the UV background used: we have only considered a quasar background, while at these high redshifts the UV radiation coming from galaxies should have a signi\ufb01cant e\ufb00ect on the Ly\u03b1 forest. Nevertheless, we do not expect this to be an issue on the metal species considered in the following sections. These correspond to much higher energies than 1 ryd that are not a\ufb00ected by the UV contribution from galaxies to the ionizing radiation. We generate random synthetic absorption spectra for each of the three absorption lines by producing optical depth distribution along lines of sight parallel to one of the three axes of the simulation box, based on density, temperature and velocity distributions in the simulation (i.e., our calculations include redshift e\ufb00ects due to peculiar velocities and thermal broadening). The code used is similar to that used in our earlier papers (Cen et al. 1994, 2001). We identify each absorption line as a contiguous region in the \ufb02ux spectrum between a down-crossing point and an up-crossing point, both at a \ufb02ux equal to 0.85. Note that \ufb02ux equal to 1 corresponds to no absorption. For each identi\ufb01ed line we compute its equivalent width (EW), Doppler width (b), mean temperature (T), mean metallicity (Z) and mean gas overdensity (\u03b4), weighted by optically depth of each pixel. We do not attempt to perform Voigt pro\ufb01le \ufb01tting, a procedure often used to analyze observed spectra. Because of this, we tend to not generate some of the very low column lines that are purely an e\ufb00ect of pro\ufb01le \ufb01tting process. Also, precise comparison between our mock absorbers and observed ones is not possible for some quantities, such as Doppler width distributions. 3. Results 3.1. C IV \u03bb\u03bb1548, 1550 and O VI \u03bb\u03bb1032, 1038 absorption lines We begin with a visual examination of density, temperature and metallicity distribution of IGM at z = 2.6 and compare cases with and without star formation feedback, shown in \f\u2013 11 \u2013 x (Mpc/h) y (Mpc/h) 0 10 20 30 40 50 0 10 20 30 40 50 \u22121 \u22120.5 0 0.5 1 1.5 2 x (Mpc/h) y (Mpc/h) 38 40 42 44 46 48 50 38 40 42 44 46 48 50 \u22123 \u22122.5 \u22122 \u22121.5 \u22121 \u22120.5 0 x (Mpc/h) y (Mpc/h) 38 40 42 44 46 48 50 38 40 42 44 46 48 50 \u22121 \u22120.5 0 0.5 1 1.5 2 x (Mpc/h) y (Mpc/h) 38 40 42 44 46 48 50 38 40 42 44 46 48 50 3 3.5 4 4.5 5 5.5 6 6.5 7 x (Mpc/h) y (Mpc/h) 38 40 42 44 46 48 50 38 40 42 44 46 48 50 \u22121 \u22120.5 0 0.5 1 1.5 2 x (Mpc/h) y (Mpc/h) 38 40 42 44 46 48 50 38 40 42 44 46 48 50 3 3.5 4 4.5 5 5.5 6 6.5 7 Fig. 3.\u2014 The top-left panel shows a slice of gas surface density in units of the mean gas surface density at z = 2.6 of size 50 \u00d7 50(Mpc/h)2 comoving and a depth of 3.125Mpc/h comoving. The C IV absorption lines are indicated by black asterisks, produced by sampling the slice using 8000 random lines of sight. The O VI absorption lines are indicated by black circles. The top-right panel shows a zoom-in slice of the gas density of size 12.5 \u00d7 12.5(Mpc/h)2 comoving and a depth of 3.125Mpc/h comoving, corresponding to the lower right corner of the top-left panel, while the bottom two panels show the corresponding gas temperature in Kelvin and gas metallicity in solar units. \f\u2013 12 \u2013 Figure 3. 
Comparing the density structures in runs with (middle left panel) and without (bottom left panel) GSW we see that the e\ufb00ect of GSW on the overall appearance of largescale density structure is visually non-striking and the \ufb01lamentary skeleton of the large-scale density distribution remains intact. An important and visually discernible e\ufb00ect of GSW is to \u201csmooth\u201d out density concentrations in the dense (red) knots: the high density peaks (> 102; red regions) in the run without GSW are substantially higher than those with GSW; examples include the knots at (47, 41)Mpc/h, (45.5, 42.5)Mpc/h and (42, 40)Mpc/h. This e\ufb00ect is of course re\ufb02ective of the sensitivity on GSW of the SFR history, which in turn allowed the observations of SFR history to provide a powerful constraint on GSW, as shown earlier in Figure 1. The e\ufb00ect of GSW on low density (blue) regions seems small, likely because GSW do not reach there and/or become weak even if reaching there. The e\ufb00ect of the GSW on intermediate regions, a.k.a, \ufb01laments, is most easily seen by comparing the temperature distributions of the run with (middle right) and without (bottom right) GSW. We see that large-scale gravitational collapse induced shocks at this redshift tend to center on dense regions with a spatial extent that is not larger than about 100\u2212300kpc/h; these are virialization and infall shocks due to gravitational collapse of high density peaks. Some of the larger peaks are seen to be enclosed by shocks of temperature reaching or in excess of 107 K (note that the displayed picture is inevitably subject to smoothing by projection thus the higher temperature regions have their temperatures somewhat underestimated). Galaxies form in the center of the \ufb01lamentary structures where collapse of pancake structures occurs. Most of the shock heated volume from green (105K) to red (107K) are clearly caused by GSW, because they appear prominent only in the simulation with GSW. The GSW shock heated IGM seems to extend as far as \u223c0.5Mpc/h from galaxies. The temperature of this shock heated gas falls in the WHIM temperature range of 105 \u2212107 K; we will discuss this more quantitatively in \u00a73.2. Inspecting the temperature (middle right) and metal density (top right) distribution with GSW reveals that metal enriched regions, \u201cmetal bubbles\u201d, coincide with temperature bubbles. This indicates that GSW energy and metal deposition are tightly coupled. Most of the a\ufb00ected regions have a size of a few hundred kiloparsecs to about one megaparsec, suggesting that this is the range of in\ufb02uence of GSW in transporting most of the metals to the IGM. We now inspect visually typical physical locations of C IV and O VI absorption lines, shown as asterisks (C IV ) and circles (O VI ) in the top two rows in Figure 3. The interesting feature is that C IV and O VI absorbers tend to avoid \u201cvoids\u201d and are almost exclusively located around \ufb01lamentary structures with most of them seemingly residing in regions of an overdensity of \u223c3 \u221230; however, limited resolution of our simulation prevent us from reaching \ufb01rm conclusion on this at this time. For every C IV absorber that is produced, there is almost always an O VI absorber along the same line of sight. As we will see, all these paired-up C IV and O VI in fact arise from around the same regions in space. 
The converse \f\u2013 13 \u2013 is not necessarily true; a lower fraction of O VI absorbers do not have C IV counterparts within the depth of the projected slice of 3.125Mpc/h comoving and they tend to be located in regions that are slightly further away from high density peaks than those occupied by O VI lines with associated C IV lines. The vast majority of both C IV and O VI absorbers appear to be located in regions that have been swept by feedback shocks, as evidenced by the similarly looking shock heated temperature bubbles (middle right panel of Figure 3) and metal enriched bubbles emanating from collective supernovae in star-forming galaxies (upper right panel of Figure 3). The C IV and O VI lines, either collisionally ionized or photoionized, unequivocally stem from regions that are shock heated and metal enriched by feedback from star formation; this conclusion will be con\ufb01rmed quantitatively later. The typical metallicity and temperature of the C IV and O VI absorbers appear to be around [C/H] \u223c\u22122 and T \u223c104.5\u22125.5K. Typical Ly\u03b1 forest clouds have comparable densities but are at a signi\ufb01cantly lower temperature, T \u223c104K and a lower metallicity [C/H] \u223c\u22123. These properties indicate that, while most of the C IV and O VI absorption lines may have comparable overdensity compared to typical hydrogen Ly\u03b1 forestabsorption lines (NHI \u223c1013 \u22121015), the former are located in somewhat hotter regions with somewhat higher metallicity than the latter. Moreover, while many C IV and O VI lines often coincide along the same line of sight within a short distance, it will be shown that the actual gas properties of regions that produce them are signi\ufb01cantly di\ufb00erent. Let us now examine the physical properties of C IV and O VI absorbers in greater detail. Figures 4,5,6 show three random sightlines through the simulation box. In order to better see details we have concatenated all the zoomed-in regions around identi\ufb01ed C IV and O VI lines for each sightline to one panel, separated into columns. The left panels are for C IV lines and right for O VI lines. Several interesting properties of C IV and O VI absorbers may be gleaned. First, both C IV and O VI absorbers sit in regions with signi\ufb01cantly elevated temperature (i.e., > 2\u00d7104K) of widths of \u223c100km/s or larger, i.e., a few hundred physical kiloparsecs or larger, which are then connected with the general photo-ionized IGM of lower temperature of \u223c104K (2nd row from top in Figures 4,5,6). The density structures (top row in Figures 4,5,6) show that the densities in the regions of allevated temperatures span a wide range from \u03b4 \u223c0 to \u223c100 and there is no clear positive correlation between density and temperature (although there is a strong anti-correlation between them near density peaks). This suggests that the elevated temperatures in these regions are not caused by gravitational compression. It is also clearly seen that at the two locations demarcating each high temperature region, there is a shock-like density jump (of a factor of a few). A closer examination of the peculiar velocity structures (2nd panel from bottom in Figures 4,5,6) shows evidence of a double shock propagating outward, with the shock fronts coincidental with the temperature and density jump. Second, there is a tight correlation between gas temperature and gas metallicity (middle \f\u2013 14 \u2013 Fig. 
4.\u2014 shows the physical properties of all C IV absorption lines (left) and O VI absorption lines (right) with column greater 1012cm\u22122 along a random line of sight of length equal to the simulation boxsize of 50\u22121Mpc at z = 2.6. Small regions around of all identi\ufb01ed C IV lines along each sightline are shown in separate columns. Aside from the \ufb02ux distribution shown at the bottom panel in velocity (Hubble) space, all other panels of physical variables are shown in real space. Each identi\ufb01ed C IV absorption line in the bottom panel is indicated by a shaded region with the value of the log of its column density. The corresponding physical location that produces the line is shown by a shaded vertical line with dark shades indicating larger contributions to the column of the line. \f\u2013 15 \u2013 Fig. 5.\u2014 this is similar to Figure 4 but for another random line of sight. \f\u2013 16 \u2013 Fig. 6.\u2014 this is similar to Figure 4 but for another random line of sight. \f\u2013 17 \u2013 row in Figures 4,5,6) in the sense that higher temperatures have higher metallicity and each region with elevated temperature is bordered by a synchronous drop in both temperature and metallicity on two sides. This is a strong indication that the elevated temperature is caused by a double shock originating from a alaxy or small group of galaxies due to GSW, which plays the double role of both shock heating the surrounding IGM and metal-enriching it. To reiterate this important point, C IV and O VI absorbers are located in regions that have been swept through by metal-enriched feedback shocks, which are still propagating outward and \u201cseparate\u201d the C IV and O VI absorbers from the general IGM of temperature T \u223c104K by about 100km/s or more. Because of the high temperatures probed by C IV and O VI lines, they are not in general correlated with Ly\u03b1 lines on scales \u2264100km/s. The latter probe typically lower temperatures. Overall, the locations of C IV and O VI lines are closely correlated. The overall spatial extent of O VI lines, in terms of their distance from galaxies, are somewhat larger than that of C IV lines, as seen in Figure 3 and Figures 4,5,6 and will be veri\ufb01ed by their origin being in somewhat lower density gas than C IV lines (see Figure 9 below). Third, many C IV absorbers appear to be paired up with O VI absorbers. For brevity, our convention is that we count absorption lines from left to right in each panel. For example, the \ufb01rst and fourth O VI lines in the right panel can be respectively paired up with the \ufb01rst and third C IV lines in the left panel of Figure 4; the \ufb01rst, third and fourth O VI lines in the right panel can be respectively paired up with the \ufb01rst, second and third C IV lines in the left panel of Figure 5; the second and \ufb01fth O VI lines in the right panel can be respectively paired up with the second and third C IV lines in the left panel of Figure 6. The O VI lines that appear together with C IV lines seem to have relatively low temperature (T \u223c104.5 \u2212105K), probably with a signi\ufb01cant photoionization component. Note that collisional ionization makes maximum contribution to O VI production at T = 105.5K, whereas for C IV this happens at T = 105.0K. Thus, it appears that relatively low-temperature O VI lines are often paired up with a C IV line, for which both photoionization and collisional ionization may be relevant. 
The excess of O VI lines compared to the number of C IV lines is likely due to the di\ufb00erence in the number of collisionally ionized cases for the two lines, given the di\ufb00erence in the optimal temperatures for collisional ionization for C IV and O VI lines. Note that with collisional ionization alone, the abundance of each species drops when the temperature moves away from the optimal temperature to either side (lower or higher) a factor of \u223c10 drop when temperature di\ufb00ers from the optical temperature by a factor of two. Roughly speaking, while the probability of an associated O VI line for a given C IV line is close to unity, the probability of an associated C IV line for a given O VI is somewhat lower. A more detailed study of this issue will be performed in sections to come. Finally, in Figure 7 we show a close-up view of several randomly chosen C IV lines. It is clear that the regions contributing to a C IV line tend to be centered or nearly centered on a local density peak along the line of sight, which almost always corresponds to a trough \f\u2013 18 \u2013 Fig. 7.\u2014 shows a close-up view of the region around each C IV line in real space, where the physical size along the line of sight has been translated to velocity using \u2206v = H(z)\u2206x. Each tickmark is 10 km/s. \f\u2013 19 \u2013 Fig. 8.\u2014 shows a close-up view of the region around each O VI line in real space, where the physical size along the line of sight has been translated to velocity using \u2206v = H(z)\u2206x. Each tickmark is 10 km/s. \f\u2013 20 \u2013 in temperature. It is also evident that the spatial extent of the C IV producing region is limited to about up to 10 km/s, corresponding to about comoving 100kpc/h, with some regions much narrower than that. As a consequence, even though the velocity gradients in the intermediate vicinities (i.e., the whole surrounding region of elevated temperature) of C IV -producing regions are often large (with dv/dr \u223ca few 100 km/s per comoving Mpc), the velocity gradients in the actual C IV -producing regions is smaller, which, in conjunction with the narrowness of the C IV -producing region, limits the velocity contribution to the Doppler width, as will be shown quantitatively later. Physically, this tells us that each C IV absorber tends to arise primarily from a narrow region in real space that have previously thermalized through feedback shocks, have cooled and are presently relatively quiescent. There does not appear to be a visible correlation between the LOS size of C IV lines and the column density; some of the high column C IV lines shown (the second and fourth panel from left) appear to come from very narrow regions of size \u226a100kpc comoving which appear to have very steep velocity gradients (for example, the fourth from left line with log of column equal 14.46). The C IV lines are mostly intergalactic in origin, not from inside galaxies. We next examine several randomly chosen O VI lines in close-up shown in Figure 8 and make detailed comparisons of the physical properties with C IV lines, when possible. We note three points. First, in Figures 4,5,6 we noted that most C IV lines (\u22651013.5) have associated O VI lines that have comparable column densities. This indicates that both C IV and O VI lines of relatively high column (\u22651013.5) tend to arise in regions in or near density peaks and temperature troughs. 
Second, a typical O VI line tends to have a lower column density due to a steeper column density distribution of O VI lines (see Figure 15 below). Third, O VI lines often lie in regions that are o\ufb00set from density peaks by \u223c10 \u2212100 km/s, and often these density peaks do not have corresponding temperature troughs. This is clear evidence that many, lower column O VI lines arise from regions that are not physically bound and instead they are mostly transient, stemming from density and temperature \ufb02uctuations in shock heated regions in the neighborhood of galaxies. It may be that the steeper column density distribution for O VI lines has its origin in the abundance of these more transient structures. The low density, shock heated regions may have temperatures that are too high to produce equally abundant C IV lines in conjunction with a lower abundance of carbon than oxygen. We now quantify the properties of C IV and O VI absorbers by di\ufb00erent projections through the multi-dimensional parameter space spanned by several fundamental physical variables. Figure 9 shows the distribution of gas overdensity for C IV (left) and O VI absorbers (right) at six di\ufb00erent redshifts, z = (0, 0.5, 1.5, 2.6, 4, 5). First, a comparison of the three histograms for three subsets of C IV and O VI absorbers in each panel indicates that higher column C IV and O VI absorbers are produced, on average, by higher density gas. Second, there is a clear trend that C IV absorbers trace increasingly more overdense regions with decreasing redshift. For example, while the location of the vast majority of \f\u2013 21 \u2013 Fig. 9.\u2014 Left panel shows the distribution of gas overdensity of regions that produce the CIV absorption lines at six di\ufb00erent redshifts, z = 0, 0.5, 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of logNC IV cm2=[12,13],[13,14],[14,15], respectively. Right panel shows the counterpart for O VI absorption lines. C IV absorbers with log(NC IV cm2) = [12, 13] appears to be outside virialized regions (i.e., overdensity less than about 100) at z > 2.6, a signi\ufb01cant fraction of them reside in virialized regions at z < 1.5; the same is true for higher column O VI absorbers. A comparison to O VI absorbers reveals a striking contrast: the vast majority of O VI absorbers with log(NC IV cm2) \u226414 are located outside virialized regions at all redshifts. In addition, typical O VI lines arise from somewhat lower density regions than C IV lines. For example, for O VI absorbers of log(NC IV cm2) = [12, 13], the typical overdensity peaks at \u03b4 \u223c5 for O VI absorbers versus \u223c10 for C IV lines at z = 2.6 \u22125, which jumps to \u03b4 \u223c10 for O VI absorbers versus \u223c50 for C IV absorbers at z = 1.5. A more quantitative analysis of the cross correlation between C IV and O VI absorption lines and galaxies will be presented in a later paper. Figure 10 shows the distribution of gas metallicity for C IV (left) and O VI absorbers (right) at six di\ufb00erent redshifts, z = (0, 0.5, 1.5, 2.6, 4, 5). We see that C IV absorption lines arise from gas with a wide range of metallicity from [C/H]=-3 to -0.5, peaked approximately around -2.5 to -1.5 at z > 0.5. At z > 2.6 the distribution for O VI lines is roughly like taking the left end of each corresponding C IV distribution and squeezing the whole distribution rightward by an amount of \u223c0.5 \u22121.0. 
So the metallicity distributions for O VI absorbers are generally cut o\ufb00at a higher metallicity than those for C IV absorbers at the low end by about 0.5 \u22121.0 and peak at a metallicity that is higher by this factor. The situation appears to start reversing at z = 1.5 such that at z < 0.5 the fraction of high metallicity C IV absorbers exceeds that of O VI absorbers. What is also interesting is that the typical metallicity of C IV and O VI lines displays a non-monotonic trend at a \ufb01xed column density. \f\u2013 22 \u2013 Fig. 10.\u2014 The left panel shows the distribution of gas metallicity in solar units of regions that produce the CIV absorption lines at six di\ufb00erent redshifts, z = 0, 0.5, 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of logNC IV cm2=[12,13],[13,14],[14,15], respectively. Right panel shows the counterpart for O VI absorption lines. For O VI absorbers, at z = 4\u22125 the metallicity of O VI lines with log(NO V I cm2) = [12, 14] peaks at [Z/ Z\u2299] = \u22121.5 to \u22121.0, which moves to a lower value of [Z/ Z\u2299] = \u22122.0 to \u22121.5 at z = 2.6, then slightly moves back up to [Z/ Z\u2299] \u223c\u22121.5 at z = (1.5, 0.5, 0). For comparison, the overall behavior for C IV lines is as follows: the metallicity of C IV lines with log(NC IV cm2) = [12, 14] peaks at Z = \u22122.0 to \u22121.5 at z = 5, at [Z/ Z\u2299] \u223c\u22122 at z = 4, followed by a very broad distribution peaking at Z = \u22122 to \u22121 at z = 1.5 to z = 2.6 with a larger fraction reaching a relatively high metallicity gas with [Z/ Z\u2299] > \u22121. The overall trend in metallicity evolution with redshift for the C IV and O VI absorbers could be understood as follows. Let us \ufb01rst note that the ionizing radiation background at z = 4, 5 is about (1/3, 1/30) of that z = 2.6, which in turn is larger than that at z = (1.5, 0.5, 0) by a factor of \u223c(2, 7, 30). At z = 4 \u22125 both C IV and O VI absorbers are predominantly collisionally ionized with the temperatures peaking at 105K and 105.5K, respectively, as shown below in Figure 11. These regions are relatively closer to galaxies, from which metal-carrying shocks originate and have relatively high metallicities. At lower redshift z = 2.6 larger regions around galaxies have been enriched with metals and the rise of the ionizing radiation background produces a large population of photoionized C IV and O VI lines at lower temperature and lower metallicity. Towards still lower redshift z < 1.5, the decrease of the mean gas density in the universe demands a rise in overdensity of the O VI -bearing gas in order to produce a comparable column density, causing a shift of these regions to be closer to galaxies where both metallicity and density are higher, seen in Figure 9. \f\u2013 23 \u2013 The combination of lower density (Figure 9) and higher metallicity (Figure 10) for the typical (low) column density O VI absorbers compared to C IV absorbers is reminiscent of metal-carrying shocks propagating through inhomogeneous medium, exactly the situation one would expect of the feedback shocks from galaxies entering the highly inhomogeneous IGM. Given the widespread steep density gradients (steeper than \u22122) in regions just outside the virial radius of galaxies, these shocks could not only heat up lower density regions to higher temperatures but also enrich them to higher metallicity. 
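The histograms of Figures 9 and 10 are built by splitting the absorber catalog into the three column-density subsets and binning a physical property of the absorbing gas. A minimal sketch of that bookkeeping is shown below; the absorber arrays are hypothetical placeholders standing in for the catalogs extracted from the simulated sightlines.

import numpy as np

# Placeholder absorber catalog at one redshift: log10 column density [cm^-2]
# and log10 gas overdensity of the absorbing gas (illustrative values only).
logN   = np.array([12.4, 13.1, 13.8, 14.2, 12.9, 13.5])
logdel = np.array([0.7, 1.1, 1.5, 2.0, 0.9, 1.3])

subsets = [(12, 13), (13, 14), (14, 15)]   # column-density bins of Figs. 9-11
edges   = np.linspace(-1.0, 4.0, 26)       # log overdensity bins

for lo, hi in subsets:
    sel = (logN >= lo) & (logN < hi)
    counts, _ = np.histogram(logdel[sel], bins=edges)
    print(f"logN=[{lo},{hi}]: {sel.sum()} absorbers", counts)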
The feedback shocks generically propagate in a direction that has the least resistance and is roughly perpendicular to the orientation of a local \ufb01lament where a galaxy sits, as seen clearly in Figure 3 and shown previously (e.g., Theuns et al. 2002b; Cen et al. 2005). While higher density regions, on average, tend to have higher metallicity (as we will show later), the dispersion is su\ufb03ciently large that the reverse and other complex situations often occur in some local regions. This appears to be what is happening here, at least for some regions that manifest in C IV and O VI lines. Fig. 11.\u2014 Shows the distribution of gas temperature of regions that produce the C IV absorption lines at six di\ufb00erent redshifts, z = 0, 0.5, 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of logNC IV cm2=[12,13],[13,14],[14,15], respectively. Right panel shows the counterpart for O VI absorption lines. Figure 11 shows the distribution of gas temperature for C IV (left) and O VI (right) absorbers. We see that the temperatures of C IV absorbers at z = 5 and O VI absorbers at z = 4 \u22125 narrowly peak at 105K and 105.5K, respectively, suggesting that collisional ionization makes the dominant contribution to both species and the two types of absorbers arise from di\ufb00erent regions. The rapid drop in the amplitude of the UV radiation background beyond z = 3 and increase in gas density with (z + 1)3 is the primary reason for diminished component of photoionized C IV and O VI absorbers at these high redshifts. At redshift z < 2.6 the distributions for the two absorbers become progressively broader ranging from \f\u2013 24 \u2013 104.3K to 105.5K for C IV absorbers, and from 104.3K to 106K for O VI absorbers. Thus, at z < 2.6 both C IV and O VI absorbers are a mixture of photoionized and collisionally ionized ones. For both C IV and O VI lines, while the temperature distributions of O VI lines at z < 2.6 are broad, there is no signi\ufb01cant segregation in temperature of lines of di\ufb00erent column densities. Recall that there is a noticeable correlation between column density and overdensity for both O VI lines and C IV lines (Figure 9). This is likely indicative of complex, inhomogeneous nature of metal enrichment process around galaxies. Fig. 12.\u2014 The left panel shows the column density-weighted distribution of C IV lines in the overdensity-temperature plane at six di\ufb00erent redshifts, z = 0, 0.5, 1.5, 2.6, 4, 5. Right panel shows the counterpart for O VI absorption lines. Figure 12 displays the distribution of C IV (left) and O VI (right) absorbers in the overdensity-temperature plane at redshift z = 0, 0.5, 1.5, 3, 4, 5. We again see relatively narrow peaked temperature distribution at redshift z = 4 \u22125 for O VI absorbers, whereas at the same redshifts the C IV absorbers have a relatively broader temperature distribution. Towards lower redshift there appears to be a multi-modal distribution in temperature for C IV absorbers, with the lower temperature peak at T \u223c104.2\u22124.5K being progressively more important with decreasing redshift and becoming dominant by z = 0. The lower temperature peak is photoionized. At redshift z = 1.5 \u22122.6 a higher temperature peak at T \u223c104.5\u22124.8K is dominant, which is likely a mixture of collisional and photoionization. 
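The column-density-weighted maps of Figure 12 amount to a weighted two-dimensional histogram in the (overdensity, temperature) plane. A sketch is given below; the per-absorber arrays are randomly generated placeholders, not the simulation output.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-absorber arrays: log overdensity, log T [K], column [cm^-2]
logdel = rng.uniform(0.0, 3.0, 1000)
logT   = rng.uniform(4.0, 6.0, 1000)
N_ion  = 10**rng.uniform(12.0, 15.0, 1000)

# Column-density-weighted occupancy of the (overdensity, temperature) plane,
# analogous to what is plotted in Figure 12.
H, xedges, yedges = np.histogram2d(
    logdel, logT,
    bins=[np.linspace(-1.0, 4.0, 51), np.linspace(4.0, 6.5, 51)],
    weights=N_ion)
H /= H.sum()   # normalize to a fractional (weighted) distribution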
It is interesting to note that at z = 2.6 the radiation background is high enough to allow for the existence of a small peak at (\u03b4 \u2265200, T = 104.2K) for C IV absorbers, clearly arising from gas that is within virialized regions. At redshift z = 0 \u22120.5 the peak at T \u223c104.5\u22124.8K is still prominent. But, another peak at still higher temperature of T \u223c105.0\u22125.2K emerges, which is likely dominated by collisional ionization. Overall, the composition of C IV absorbers changes from being dominated by collisional ionization at z = 4 \u22125, through a mixture of collisional and photoionization at z = 1.5 \u22122.6, to being dominated by photoionization by z = 0. The distinct high temperature peak at T \u223c105K and density \u03b4 \u223c20 at z = 0 \f\u2013 25 \u2013 is rooted in the Warm-Hot Intergalactic Medium (WHIM; Cen & Ostriker (1999b); Dav\u00b4 e et al. (2001); Cen & Ostriker (2006)), where the intergalactic medium has been heated up by gravitational shocks due to the formation of the large-scale structure. A similar progression from mainly collisionally ionized to a mixture of collisional and photoionization for O VI absorbers is also seen. However, for O VI absorbers, the photoionization peak at T \u2264105K never dominates at any redshift. For both C IV and O VI absorbers there is no visible correlation between overdensity and temperature for O VI absorbers at all redshifts. For example, there is no evidence of these regions obeying the so-called equation of state (Hui & Gnedin 1997) that is applicable to low redshift Ly\u03b1 forest clouds. This just reinforces the statement that these regions are shock heated, in a dynamical state and perhaps transient, and do not resemble photo-heated Ly\u03b1 forest region. We have also plotted (not shown) the distribution of C IV and O VI absorbers in the overdensity-metallicity plane and \ufb01nd no visible correlation between them. Fig. 13.\u2014 Left panel shows the distribution of Doppler width of computed CIV absorption lines at six di\ufb00erent redshifts, z = 0, 0.5, 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of logNC IV cm2=[12,13],[13,14],[14,15], respectively. Right panel shows the distribution of the parameter \u03b7 at four di\ufb00erent redshifts, z = 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of logNC IV cm2=[12,13],[13,14],[14,15], respectively. We note that \u03b7 = 1 corresponds to a Doppler width that is 100% thermally broadened, whereas \u03b7 = 0 corresponds to a Doppler width that has no thermal contribution. The left panel of Figure 13 shows the distribution of Doppler width of computed C IV absorption lines. The Doppler width distributions generally peak at 10 \u221220km/s at all redshifts. Such a Doppler width peak is consistent with thermal broadening by gas temperature T \u223c104.5 \u2212105K as seen Figure 11. Because of the di\ufb00erent de\ufb01nition of absorption lines we use compared to Voigt pro\ufb01le \ufb01tting procedure for obtaining lines observationally, \f\u2013 26 \u2013 a direct comparison is not possible. Nonetheless, our results are consistent with the Doppler widths of the CIV absorber sample in Danforth & Shull (2008), the mean Doppler parameter at \u27e8z\u27e9= 0.06 is \u27e8bC IV \u27e9= 23 \u00b1 13, while for our whole sample at z = 0, the mean is \u27e8bC IV \u27e915.6\u00b17.1 (1\u03c3 interval). Comparisons to other samples, such as the one in Boksenberg et al. 
(2003), are difficult. The reason is that it is common in observational investigations to fit several components (each with a Gaussian velocity distribution) to each absorption line. This \u201ccomponent\u201d vs. \u201csystem\u201d definition makes comparisons between our work and observations subtle, to say the least. Our definition of an absorber by establishing a flux threshold more closely resembles the standard definition of a \u201csystem\u201d, and in general we limit our comparisons to observational samples of \u201csystems\u201d. With Voigt-profile fitting, a large number of \u201ccomponents\u201d might be fitted to one \u201csystem\u201d. In the sample of Boksenberg et al. (2003), this is as large as 32 components for one given system at z = 2.438; on average, there are 4.8 \u201ccomponents\u201d per \u201csystem\u201d in this sample, which spans 1.6 < z < 4.4. The right panel of Figure 13 characterizes the nature of the Doppler width of the computed C IV absorption lines using the parameter \u03b7 \u2261 \sqrt{2kT/(m_{\rm ion} b^2)}. It is indeed seen that most of the lower column density C IV absorbers with log(N_{C IV} cm^2) = [12, 13] are dominated by thermal broadening. However, for higher column C IV absorbers, there appear to be roughly equal contributions to the Doppler width from thermal broadening and bulk velocity broadening. This suggests that lower column C IV absorbers tend to lie in quiescent regions, whereas high column ones typically reside in regions with significant velocity structures. This was seen earlier in Figures 4, 5, 6. Once again, it is important to stress that, even though the relative contribution to the line width from velocity structure is moderate for most C IV lines, the most likely physical explanation for the C IV producing regions is that they were shock heated by sweeping feedback shocks originating from nearby galaxies, have cooled to about 10^{4.5} \u2212 10^5 K, and were perhaps somewhat compressed in the process. Most C IV lines are far from shock fronts, whose velocity structures would otherwise make the lines significantly wider. Rauch et al. (1996) suggested that the quiescence of C IV lines may be due to the adiabatic compression of gas, which would not produce large velocity gradients. We show that this explanation may be incorrect, given that most of the regions producing C IV lines at z > 2 lie outside virialized regions. Rather, the quiescence is due to a combination of two things: the thermalization of previous shocks, which reduces the random velocities and velocity gradients, and the narrowness of the region in physical space that produces the C IV line, which limits the velocity difference. The left panel of Figure 14 shows the distribution of Doppler width of the computed O VI absorption lines. For the O VI absorber sample of Danforth & Shull (2008), the mean Doppler parameter at \u27e8z\u27e9 = 0.06 is \u27e8b_{O VI}\u27e9 = 30 \u00b1 16, while for our whole sample at z = 0 the mean is \u27e8b_{O VI}\u27e9 = 22 \u00b1 13 (the 1\u03c3 interval is quoted in both cases). In Thom & Chen (2008a), Voigt profile fitting yields a mean number of \u223c1.4 \u201ccomponents\u201d in 27 absorbers along 16 lines-of
Fig. 14.\u2014 Left panel shows the distribution of Doppler width of computed O VI absorption lines at four different redshifts, z = 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of log(N_{O VI} cm^2)=[12,13],[13,14],[14,15], respectively.
Right panel shows the distribution of the parameter \u03b7 at four di\ufb00erent redshifts, z = 1.5, 2.6, 4, 5, separately for three subsets of lines of column density in the range of logNO V I cm2=[12,13],[13,14],[14,15], respectively. We note that \u03b7 = 1 corresponds to a Doppler width that is 100% thermally broadened, whereas \u03b7 = 0 corresponds to a Doppler width that has no thermal contribution. sight towards QSOs, with a mean redshift of \u223c0.25 and a corresponding Doppler width and 1\u03c3 of \u27e8bO V I \u27e9= 27\u00b117. Thus, within the errorbars our results agree with both observations. A comparison with C IV lines shown Figure 13 is instructive. First, while the distributions for C IV and O VI lines of logNO V I cm2=[12,13] peak at comparable b \u223c10 km/s at z = 1.5 and z = 2.6, suggesting limited velocity contribution to the widths of both lines, the distribution for O VI lines peaks at b \u223c20 km/s at z = 4 \u22125, signi\ufb01cantly higher than that of C IV lines at the same redshifts. This is indeed to be expected: the ratios of C IV and C (fC IV ) and of O VI and O (fO V I ) have a di\ufb00erent dependence on density and temperature. At these densities and high temperatures, fC IV increases with increasing density, whereas fO V I decreases with increasing density. So If you are looking for broad lines, you will in C IV have an advantage going to high-z (where physical gas densities are higher), but not in O VI . Second, it is clear that a signi\ufb01cant larger fraction of higher column O VI lines of logN cm2=[13,15] have larger Doppler width with b \u226540 km/s at all redshifts than C IV lines, suggesting that there are signi\ufb01cantly more O VI lines that C IV lines that are in dynamically hot regions, such as around shocks where velocity gradients are high. Since these dynamically hot regions likely also have higher temperatures, collisional ionization would make a larger contribution to O VI lines than C IV lines, consistent with our earlier statements. Third, let us take a close look at \u03b7 distribution for log(NO V I cm2)=[13,14] O VI lines and compare to that of C IV in Figure 13: for O VI lines it appears that the velocity contribution to the Doppler width is highest (i.e., lowest \u03b7) at z = 2.6, whereas for C IV lines \f\u2013 28 \u2013 that occurs at z = 4, suggesting that the fraction of C IV that are in dynamically hot regions peaks at a higher redshift than that for O VI lines. This is intriguing and likely due to a combination of several factors, including the evolution of the mixture of photoionized and collisionally ionized absorbers, evolution of metal enrichment and feedback shock strengths as a function of redshift. Potentially, useful and quantitative measures may be constructed to probe feedback processes using C IV , O VI and other lines jointly. Fig. 15.\u2014 Left panel: the computed column density distribution for the C IV absorption line at z = 2.5 for runs \u201cM\u201d (\ufb01lled black circles), \u201cMR\u201d (open circles) and \u201cM2\u201d (\ufb01lled grey circles). The solid line is the best power-law \ufb01t to our simulated results from run \u201cM\u201d performed for column densities in the range [13, 14.5]. The slope of the \ufb01t is \u22121.196 \u00b1 0.028. Diamonds are observational data from Songaila (2005) and Boksenberg et al. (2003) at a mean redshift of 2.7 and 2.6, respectively, corrected for our cosmology. 
Right panel: the computed column density distribution for the O VI absorption line at z = 2.5 is shown as the solid line, which is the best power-law \ufb01t to our simulated results, with slope \u22121.723\u00b10.075. The circles have the same meaning as in the left panel. The observational data are drawn from Carswell et al. (2002) (squares), Bergeron & Herbert-Fort (2005) (diamonds), and Simcoe et al. (2002)(triangles) corrected for our cosmology. 3.2. C IV And O VI Absorbers As Baryonic Matter Reservoirs Having gained a good understanding of the physical nature of C IV and O VI lines, we now turn to their overall column density distributions at z = 2.5, where observational data is most accurate, shown in Figure 15. For both C IV and O VI , the results obtained from runs \u201cM\u201d, \u201c\u2019MR\u2019 and \u201cM2\u201d show some small di\ufb00erence that is smaller than the magnitude of the di\ufb00erence between di\ufb00erent observational studies and comparable to the di\ufb00erence between simulations and observations. This shows that our simulations are reasonably converged and \f\u2013 29 \u2013 not too sensitive to a factor 2 or so variation in the strength of the UV backgrond. It is noted, however, that the convergence becomes much better for clouds with column density greater than 1013, indicating that our current simulation resolution probably still somewhat underestimates the abundance of clouds with columns smaller than that. The error bars are not visible for the simulated values because they lie within the symbols plotted. Overall, we \ufb01nd the agreement of the computed distributions from the simulation to the observed ones is at the level that we could have hoped for. We believe that di\ufb00erences may be contributable in part to cosmic variance, in part to our resolution at the lower column density (as evidenced by the noticeable \ufb02attening) and in part due to di\ufb00erent methods of identifying clouds (\ufb02ux thresholding in our case versus Voigt pro\ufb01le \ufb01tting in the observed results, with the latter often producing multiple components for a single physical system). Given the fact that our simulation has essentially only one free parameter (eGSW) that has already been signi\ufb01cantly constrained by the SFR history of the universe, it is really remarkable that we are able to match the observed column density distribution of both C IV and O VI lines to within a factor of 2-3. Since the regions probed by C IV lines and O VI lines are often physically di\ufb00erent and to some extent re\ufb02ect the di\ufb00erent stages of the evolution of the feedback shocks, the fair agreement between our simulations and observations suggests that our treatment of the feedback process provides a good approximation to what happens in nature in terms of heating and enriching the IGM, and it is indirect but strong evidence that feedback from star formation plays the central role in enriching the IGM with its energy and metals. No additional, signi\ufb01cantly energetic feedback from AGN seems required to account for the enrichment history of the IGM. Therefore, it is very encouraging to note that the overall picture of the process of star formation feedback may be jointly probed by C IV , O VI lines and other diagnostics. Detailed comparisons between simulations and observations in that regard would be the next logical step to further constrain theories of overall star formation in galaxies and feedback. 
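For concreteness, the power-law fits quoted above (f(N) = KN^\alpha with slopes of about \u22121.2 for C IV and \u22121.7 for O VI over log N = [13, 14.5]) can be reproduced with a least-squares fit in log\u2013log space. The binned values below are synthetic placeholders, not the simulation data; only the fitting procedure is illustrated.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical binned distribution: bin-center log10 N and log10 f(N)
logN_bins = np.arange(12.15, 15.0, 0.3)
logf      = -1.2 * logN_bins + 3.0 + rng.normal(0.0, 0.05, logN_bins.size)

# Fit f(N) = K N^alpha over the range used in the text, log N = [13, 14.5]
fit_range = (logN_bins >= 13.0) & (logN_bins <= 14.5)
alpha, logK = np.polyfit(logN_bins[fit_range], logf[fit_range], 1)
print(f"alpha = {alpha:.3f}, log10 K = {logK:.3f}")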
In Figure 16, we show the evolution of the abundance of absorbers for di\ufb00erent subsets of column densities (top panels) and the evolution of log(f(N)) (bottom panels). The number of C IV absorbers per unit redshift pathlength decreases with increasing redshift at both the low redshift interval z \u223c0 \u22122 and the high redshift interval z > 4 but stays roughly constant in the redshift interval z \u223c2 \u22124. For O VI absorbers, the number of absorbers per unit redshift pathlength decreases monotonically with increasing redshift for absorbers with column densities in the intervals logNcm2 =[12,13],[13,14]. There are substantially fewer O VI absorbers in the high column density range and their number peaks around z \u223c1 \u22122. Comparing C IV and O VI absorbers at each column density interval, we see that at logNcm2 =[14,15] C IV and O VI absorbers have comparable numbers at z \u22651, but C IV absorbers outnumber O VI absorbers by z = 0 by a factor of a few, due to an upturn in C IV absorber number versus a downturn in O VI absorber number from z = 1 to z = 0. This is probably caused by a combination of the rapidly diminished star formation activity \f\u2013 30 \u2013 Fig. 16.\u2014 Top left panel: the evolution of the abundance of C IV absorbers separately for three subsets with column density in the range logNC IV cm2=[12,13],[13,14],[14,15], respectively. Top right panel: the same for O VI absorbers. Bottom left panel: column density distribution (logf(N)) for C IV absorbers at z = 0, 0.5, 1, 2.6, 4, 5. The results for our runs \u201cM\u201d (\ufb01lled black circles), \u201cMR\u201d (open circles) and \u201cM2\u201d (\ufb01lled grey circles) are shown. Open diamonds correspond to observations (Songaila 2001) corrected for our adopted cosmology, except at z=2.6, when they correspond to Songaila (2005). At z = 2.6 and z = 4, asterisks correspond to Boksenberg et al. (2003) for 3 and 1 sightline respectively in a 0.5 redshift interval around the mean redshift. At z = 0, the observational data correspond to Danforth & Shull (2008) (triangles) and Thom & Chen (2008a) (asterisks). Bottom right panel: logf(N) for O VI absorbers at z = 0, 0.5, 1, 2.6, 4, 5. Observational data is available at redshift z = 2.5: Carswell et al. (2002) (squares), Bergeron & Herbert-Fort (2005) (diamonds), and Simcoe et al. (2002)(triangles) corrected for our cosmology. At z = 0 we compare our results to Danforth & Shull (2008) (triangles). and a lower radiation background towards z = 0, which create an unfavorable condition for producing O VI absorbers in denser environments either collisionally or by photoionization. At logNcm2 = [12, 14] O VI absorbers outnumber C IV absorbers at all redshifts. From the lower panels we observe that the slope of log(f(N)) for C IV absorbers progressively becomes steeper at high redshifts. Our results seem to be consistent with observational results from Songaila (2005) and Boksenberg et al. (2003) at redshift z = 2.6 where observational data have the highest accuracy. We attribute the discrepancies at high column density between our simulations and observations to cosmic variance: the size of our box is not large enough to host the higher column density structures. At z = 4 \u22125 the agreement is not as good, where we produce a steeper slope for f(N) than observed; this is likely in part due to cosmic variance and in part due to an underestimated UV background used. 
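Before turning to the integrated mass densities, a minimal numerical sketch of the \u2126_ion estimate developed in the next paragraph (Eqs. 1\u20135 below) may be useful. The cosmological parameters, the cgs constants, and the toy absorber list are placeholders rather than the values of the actual runs.

import numpy as np

# Assumed flat LambdaCDM parameters (placeholders)
Om, OL, h = 0.28, 0.72, 0.70
H0   = h * 3.2408e-18                    # Hubble constant [1/s]
C    = 2.9979e10                         # speed of light [cm/s]
G    = 6.674e-8                          # gravitational constant [cgs]
RHOC = 3.0 * H0**2 / (8.0 * np.pi * G)   # critical density [g/cm^3]

def X_of_z(z, nstep=10000):
    # Absorption distance, Eq. (2) below:
    # X(z) = int_0^z (1+z')^2 / sqrt(Om (1+z')^3 + OL) dz'
    zp = np.linspace(0.0, z, nstep)
    integrand = (1.0 + zp)**2 / np.sqrt(Om * (1.0 + zp)**3 + OL)
    return np.trapz(integrand, zp)

def omega_ion(columns_cm2, total_dX, m_ion_grams):
    # Direct-sum estimate, Eq. (1) below:
    # Omega_ion = (H0 m_ion / c rho_c) * sum(N) / sum(Delta X)
    return H0 * m_ion_grams / (C * RHOC) * np.sum(columns_cm2) / total_dX

# Toy usage: a few hypothetical C IV columns over the pathlength z = 2.2-2.8
m_C = 12.011 * 1.6605e-24
dX  = X_of_z(2.8) - X_of_z(2.2)
print(omega_ion(10**np.array([13.2, 13.8, 14.1]), dX, m_C))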
What fraction of the metals in the IGM is directly seen in C IV and O VI absorbers? From the column density distribution of the absorbers, we can estimate the ion baryon density of the IGM. Two different methods are typically used to do so. We can estimate \u2126_ion from
\Omega_{\rm ion} = \frac{H_0 m_{\rm ion}}{c \rho_c} \frac{\sum_i N_{i,{\rm ion}}}{\sum \Delta X},   (1)
where H_0 is Hubble's constant today, m_ion is the mass of the considered ion, \rho_c is the critical density, N_{i,ion} is the column density of absorber i, and \sum \Delta X accounts for the total redshift pathlength covered by the sample of sightlines. In a flat Friedmann universe, this quantity is given by
X(z) = \int_0^z \frac{(1 + z')^2 \, dz'}{[\Omega_M (1 + z')^3 + \Omega_\Lambda]^{1/2}}.   (2)
Another possibility is to construct the column density distribution per column density interval and unit \u2206X,
f(N) = \frac{\sum_i N_{i,{\rm ion}}}{\Delta N \sum \Delta X},   (3)
where the sum in the numerator is carried over the column densities of the absorbers present in bin i and \u2206 log N = 0.3 in our case. The distribution f(N) is typically fitted by a power law, f(N) = K N^\alpha. We can then obtain \u2126_ion from the fit by
\Omega_{\rm ion} = \frac{H_0 m_{\rm ion}}{c \rho_c} \int_{N_{\rm min}}^{N_{\rm max}} N f(N) \, dN   (4)
= \frac{8 \pi G m_{\rm ion}}{3 H_0 c} \, K \, \frac{N^{\alpha+2}}{\alpha + 2} \Big|_{N_{\rm min}}^{N_{\rm max}}.   (5)
Following Becker et al. (2009) and Bergeron & Herbert-Fort (2005), we will integrate \u2126_ion in the interval log N = [13, 15], but the fit will be performed in the interval log N = [13, 14.5] due to incompleteness of the sample at high values of N, as we have already mentioned. Figure 17 shows the evolution of the mass density contained in the C IV (left) and O VI (right) absorption lines, respectively. Considering the observational uncertainties and cosmic variance, it is very encouraging to see the excellent agreement between our simulated results and observations over the entire redshift range z \u223c 2 \u2212 6, where comparisons may be made. As a new finding from our simulation, we note that a significant dispersion, i.e., cosmic variance, in \u2126_CIV is expected for available data samples with limited size (i.e., pathlength). In the bottom panels of Figure 17 we show the expected distribution based on our simulations for various sample sizes. We find that with \u2206X = 30, the variance is \u03c3 = 1.4 \u00d7 10^{-8} for C IV and 1.2 \u00d7 10^{-8} for O VI; for \u2206X = 60, \u03c3 = 1.0 \u00d7 10^{-8} for C IV and 8.5 \u00d7 10^{-9} for O VI; with \u2206X = 160, \u03c3 = 5.7 \u00d7 10^{-9} for C IV and 4.6 \u00d7 10^{-9} for O VI. Comparing C IV and O VI lines, it is seen that the total amount of mass contained in the O VI line is comparable to that in the C IV line at all redshifts within a factor of 2 or so. Note that the size for
Fig. 17.\u2014 Top left: redshift evolution of \u2126_CIV from simulations: run \u201cM\u201d (filled black circles), run \u201cMR\u201d (open circles) and run \u201cM2\u201d (filled grey circles). Observational data are from Songaila (2005) (open diamonds), Becker et al. (2009) (arrows as limits), Pettini et al. (2003) (open triangle), Ryan-Weber et al. (2009) (filled star), Boksenberg et al. (2003) (filled upright triangles), Danforth & Shull (2008) (filled downright triangle) and Simcoe (2006) (filled square). The dashed curve is a simple physical model to explain the evolution of \u2126_CIV (see text in \u00a73.2). Top right: redshift evolution of \u2126_OVI. Observational data are from Carswell et al.
(2002) (open square), Bergeron & Herbert-Fort (2005) (open diamond), Simcoe et al. (2002)(open triangle), Danforth & Shull (2008)(open star), Thom & Chen (2008b)(\ufb01lled triangle) and Frank et al. (2008) (lower limit, arrow). Bottom left: di\ufb00erent curves are the expected PDFs for \u2126CIV at z=2.6, based on our simulations, for observational samples of various sizes (i.e., \u2206X values). The solid vertical line indicates the median of the simulation results, whereas the vertical dashed line is Songaila (2005) value at z=2.5. Bottom right: the expected PDFs for \u2126OVI at z=2.6. The solid vertical line indicates the median of the simulation results, whereas the vertical dashed line is Simcoe et al. (2002) value at z=2.5. the observational sample of Songaila (2005) is \u2206X \u226420. Again, this suggests that some of the discrepancies between simulations and observations, and between observations may be accounted for by cosmic variance. For example, the slightly smaller value obtained from the observational sample of Songaila (2005) for \u2126CIV is statistically consistent with simulations within 0.5\u03c3; the slightly larger value obtained from the observational sample of Simcoe et al. (2002) for \u2126OVI is statistically consistent with simulations within 1\u03c3. In agreement with observations, the mass density contained in the C IV absorption line is, within a factor of two, constant from z = 1 to z = 4 and subsequently drops by a factor of \u223c(10, 20) by z = (5, 6). Some rather subtle di\ufb00erence between O VI and C IV lines may be noted. While the metal density contained in the C IV absorption line is nearly constant from z = 1 to z = 4, that plateau for the O VI line is attained only for z = 0\u22122. Because the \f\u2013 33 \u2013 total amount of metals in the IGM has increased signi\ufb01cantly in the redshift range z = 0\u22124, it seems that the near constancy of \u2126CIV at the redshift range z = 1 \u22124 and \u2126OVI at the redshift range z = 0\u22122 does not re\ufb02ect the amount of metals in the IGM, which has already been pointed out earlier by Oppenheimer & Dav\u00b4 e (2006). This probably re\ufb02ects a \u201cselection e\ufb00ect\u201d of C IV systems of the overall metals in the IGM, which may be due to a combination of several di\ufb00erent processes, including the evolution of the mean gas density as (1+z)3, the evolution of the overdensity of the regions that produce C IV lines, the density dependence of the IGM metallicity and its evolution, the evolution of the radiation background and hierarchical build-up hence gravitational shock heating of the large-scale structure. Our results contradict previous claims that observational data point towards a near constancy of \u2126CIV with redshift (e.g., Songaila 2001, 2005; Oppenheimer & Dav\u00b4 e 2006). However, more recent results have provided evidence of a downturn in \u2126CIV towards z \u223c6. Becker et al. (2009) \ufb01nd no C IV absorbers in 4 sightlines towards z \u223c6 QSOs. They set limits on \u2126CIV and attribute the downturn to a decline at least by a factor \u223c4.4 (to 95% con\ufb01dence) in the number of C IV absorbers at z = 5.3 \u22126 as compared to z = 2 \u22124.5. The decline shown in Figure 16 is higher, at least a factor of \u223c7 for low column densities absorbers. Ryan-Weber et al. (2009) perform the most extensive survey of intergalactic metals at z > 5, looking at the sightlines of 9 QSOs. 
They \ufb01nd evidence of a drop by a factor \u223c3.5 in the mass density of C IV from redshift z = 4.7 to z = 5.7. In comparison, we \ufb01nd a drop by a factor \u223c1.7 in \u2126CIV in the interval z = 5 \u22126. Fig. 18.\u2014 Left panel: the fraction of metals contained in C IV (circles) and O VI (squares) lines separately in terms of the overall amount of metals in the IGM at each redshift. Right panel: the fraction of metals contained in regions probed by C IV (circles) and O VI (squares) lines, respectively, in terms of the overall amount of metals in the IGM at each redshift. The second point, perhaps the most overlooked, is that the amount of metals contained in the C IV and O VI absorption line is a very small fraction of the overall metals. The left panel of Figure 18 shows the ratios of mass density measured in the C IV (diamonds) and \f\u2013 34 \u2013 Fig. 19.\u2014 shows the C IV ratio of nC IV /nC,tot (open circles) and the O VI ratio of nO V I /nO,tot (open squares) as a function of redshift. Note that at the optimal temperature with collisional ionization, fmax for C IV and O VI is 29% at log Tmax = 5.00 and 22% at log Tmax = 5.45, respectively (Sutherland & Dopita 1993). O VI lines (triangles) over the total amount of metals in the IGM as a function of redshift. We see that the amount of mass contained in the C IV line remains at \u223c0.13% within a dispersion of 40%, and at \u223c0.13% within a dispersion of 25% for the O VI line. In the right panel of Figure 18 we show the amount of metals probed by each line as a function of redshift. Here is how we compute the metals probed by each line and use C IV as an example. For each detected C IV line, a range of spatial locations (i.e., gas cells along the line of sight) contributes to its column density (see Figures 4,5,6). Roughly speaking, the amount of metals e\ufb00ectively probed by the C IV line will be larger than the metals directly seen in the C IV line by a factor of f = nC,tot/nCIV (a similar relation for the O VI line). This ratio f for the C IV and O VI line is shown in Figure 19. One point worth noting is that for C IV there is an upturn of f from \u223c0.1 at z < 2 towards high redshift, reaching again the same value at z = 6. This is caused from a transition from more collisionally dominated C IV absorber population at z > 4 to a more photoionization dominated one at intermediate redshifts. The trend for the O VI line is much less pronounced, indicative of a dominance of collisionally ionized O VI absorbers over the entire redshift range z = 0 \u22126, with a trend that it is more so at higher redshift. From the right panel of Figure 18 we see that, within a factor of 2, the amount of metals probed by either C IV or O VI line is roughtly 2%. Combining the fact that the majority of C IV -producing regions have not collapsed and virialized (see Figure 9) and a small fraction of all metals is probed by C IV and O VI lines at all redshifts, C IV and O VI absorbers are \u201ctransients\u201d; in other words, only a small fraction of metals in the IGM get \u201clit up\u201d as the C IV or O VI line at any given time. As we demonstrated earlier, these regions that produce C IV absorption lines have a set of properties that seem to be created by a combination of \f\u2013 35 \u2013 physical processes including feedback shock heating and radiative cooling (see Figures 4,5,6). 
These close observations suggest that only a fraction of metals at any given time that has recently passed through shocks and cooled to an appropriate temperature shows up as C IV absorption lines. In this sense, C IV absorption lines trace the current feedback processes from star formation and how the current feedback energy and metal-enriched gas interact with the surrounding IGM. Similar statements about the transient nature could be made for the O VI line, except that the O VI line corresponds to somewhat di\ufb00erent physical states of the shocked regions: they are slightly hotter in temperature and dynamically hotter. Third, returning to Figure 17, we would like to emphasize that, consistent with recent observations (e.g., Becker et al. 2009; Ryan-Weber et al. 2009), there is indeed a sharp drop in \u2126CIV from z = 3 to z = (4, 5, 6) by a factor of \u223c(2, 10, 20), respectively. This has less to do with the evolution of the total amount of metals produced, rather it is tracing the phase of C IV gas at any given time. At redshift z \u22653, C IV lines at di\ufb00erent redshifts appear to come from regions of comparable overdensity (see Figure 9) and comparable metallicity (see Figure 10). This allows us to test a very simple physical picture for the origin of C IV lines. They are produced by regions that were shock heated earlier by feedback shocks and have cooled to the temperature of T \u223c104.5 \u2212105K when they are seen, and the duration of each C IV line in this \u201cC IV phase\u201d would then be inversely proportional to the cooling time of the gas in this phase, which is proportional to \u039b\u22121(T, Z)(1 + z)\u22123(1 + \u03b4)\u22121, where \u039b(T, Z) is cooling function at temperature T and metallicity Z and the z-dependent term is due to density evolution with redshift. Then, the total amount of metals in C IV lines, \u2126CIV, will be proportional to \u02d9 Mstar(z)\u039b(T, Z)\u22121(1+z)\u22123(1+\u03b4)\u22121, where \u02d9 Mstar(z) is the star formation rate at z. Taking \u03b4, Z and T as roughly being constant (see Figures 9, 10, 11), we have \u2126CIV \u221d\u02d9 Mstar(z)(1 + z)\u22123, which is shown as the dashed curve on the left panel of Figure 17. It provides a reasonably good \ufb01t for the actual computed evolution of \u2126CIV. 3.3. Global Metal Enrichment of the IGM and Missing Metals We now turn to present a global metal enrichment history of the IGM to supplement what is captured by the C IV and O VI absorption lines. As in Cen & Ostriker (1999b), in our analysis we divide the IGM into three components by temperature: (1) T < 105 K cold-warm gas, which is in low density regions or cooling, star forming gas, (2) WHIM at 107 K> T > 105 K, (3) Hot X-ray emitting gas at T > 107K. One additional component (4) is the baryons that have left the IGM and been condensed into stellar objects, which we designate as \u201cstars\u201d. Figure 20 shows the evolution of these four components. The overall evolution of the four components are in good agreement with earlier \ufb01ndings (Cen & Ostriker 1999b; Dav\u00b4 e et al. 2001; Cen & Ostriker 2006) and relevant observations (e.g., Fukugita et al. 1998). In \f\u2013 36 \u2013 0 1 2 3 4 5 6 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Redshift Mass Fraction T<105K T=105\u2212107K T>107K stars Fig. 20.\u2014 shows the evolution of baryons for the four mutually exclusive components: (1) T < 105 K cold-warm gas, (2) WHIM at 107 K> T > 105 K, (3) Hot X-ray emitting gas at T > 107K and (4) \u201cstars\u201d. 
particular, we see that 40 \u221250% of all baryons are in WHIM by the time z = 0, which is in excellent agreement with our previous \ufb01ndings (Cen & Ostriker 1999b; Dav\u00b4 e et al. 2001; Cen & Ostriker 2006). It is also noted that \u223c40% of the baryons at z = 0 reside in a relatively cool but di\ufb00use component with T < 105K (the triangles in Figure 20). It is likely that a signi\ufb01cant portion of this cool component at z = 0, in the form of Ly\u03b1 forest, is already seen by UV observations (e.g., Penton et al. 2004). As we noted earlier, the strength of feedback from star formation is chosen to match the observed overall star formation history. Each of the IGM components is composed of di\ufb00erent regions that have gone through distinct evolutionary paths and thus spans a wide range in density, shown in Figure 21. The distribution of the cold-warm component (triangles) is always peaked at the mean density at all redshifts, re\ufb02ecting the initial gaussian distribution of gas around the cosmic mean and indicating that the bulk of the IGM at mean density or lower has never been shock heated by either strong gravitational shocks or feedback shocks. The cold-warm gas extends to very high densities (\u2265105). It is interesting to note that the amount of cold-warm gas that could potentially feed the star formation, i.e., the cold-warm gas at density log \u03c1/\u27e8\u03c1\u27e9\u22652 \u22123, remains constant, within a factor of \u223c2, over the range redshift shown z = 0 \u22125. This is consistent with observations of the nearly non-evolving amount of gas probed by DLAs (e.g., P\u00b4 eroux et al. 2003; Zwaan et al. 2005; Rao et al. 2006; Prochaska & Wolfe 2009; Noterdaeme et al. 2009). The physical relation between this apparently non-evolving gas and the precipitous drop of star formation rate at z < 1 is currently unclear. The distribution of the WHIM also appears to peak at a constant overdensity of about 10 times the mean density. This is rather intriguing. In order to properly interpret this \f\u2013 37 \u2013 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 3 4 5 6 7 8 9 log \u03c1/<\u03c1> log (dM/dlog \u03c1) z=0 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 3 4 5 6 7 8 9 log \u03c1/<\u03c1> log (dM/dlog \u03c1) z=1 T<105K \u22123 \u22122 \u22121 0 1 2 3 4 5 6 3 4 5 6 7 8 9 log \u03c1/<\u03c1> log (dM/dlog \u03c1) z=3 T=105\u2212107K \u22123 \u22122 \u22121 0 1 2 3 4 5 6 3 4 5 6 7 8 9 log \u03c1/<\u03c1> log (dM/dlog \u03c1) z=5 T>107K Fig. 21.\u2014 shows the mass distribution of the three IGM components (1) cold-warm gas at T < 105 K, (2) WHIM at 107 K> T > 105 K, (3) Hot X-ray emitting gas at T > 107K as a function of overdensity at four di\ufb00erent redshifts z = 0, 1, 3, 5. Note that the area under each curve is proportional to the mass contained. interesting phenomenon it is useful to understand the heating sources of WHIM. There are two primary heating sources for WHIM: shocks due to the collapse of large-scale structure and GSW produced shocks. Earlier works have already shown that gravitational shock heating due to the formation of large-scale structure dominates the energy input for heating up and thus turning about 50% of the IGM into WHIM by z = 0 (Cen & Ostriker 1999b; Dav\u00b4 e et al. 2001; Cen & Ostriker 2006). It is, however, expected that heating due to hydrodynamic shocks emanating from galactic superwinds become increasingly more important at higher redshifts. 
This is because the amount of energy from gravitational collapse of large-scale structure as well as the resulting shock velocity decreases steeply towards higher redshift. The reason for this is simple: in the standard cosmological model the amount of power is peaked at a wavelength of \u223c300Mpc/h and drops steeply towards small scales. To quantify the relative contribution of GSW in turning the IGM into WHIM, we compare the simulation \f\u2013 38 \u2013 with GSW feedback to that without GSW feedback (run N in Table 1). Then, we make the simple assertion that the di\ufb00erence in the amount of WHIM between the two simulations is due to GSW. Figure 22 shows the fraction of WHIM that is produced (cumulatively) by GSW as a function of redshift. Consistent with previous results (Cen & Ostriker 1999b, 2006), the contribution from GSW to heating up WHIM by z = 0 is subdominant at 10 \u221220%. This relatively small contribution to WHIM from GSW can be understood based on simple energetics estimates. But we see the GSW fraction increases rapidly with increasing redshift. At redshift z = 1.5 the GSW fraction is about 50%, then reaching 70% at z = 3 and 95% at z = 5. Thus, we see the primary heating source of WHIM at z > 1.5 is GSW, whereas gravitational shocks due to structure formation are mostly responsible for heating the WHIM at z < 1.5. 0 1 2 3 4 5 6 0 10 20 30 40 50 60 70 80 90 100 Redshift WHIM fraction due to GSW (%) Fig. 22.\u2014 shows the (cumulative) fraction of WHIM that is produced by GSW as a function of redshift. From Figure 21 it seems clear that WHIM does not distinguish between gravitational shocks and feedback shocks. In both cases shocks have largely stopped at overdensity of about 10. Let us try to understand why that happened. First we note that the shocks originate approximately from the central regions of \ufb01laments, where pancakes collapse and shock for the case of gravitational shocks and galaxies are generally located for the case of GSW shocks. For gas shock heated to 105K the shock velocity is roughly 70 km/s. With that velocity the shock will be able to travel roughly 700(1 + z)\u22121kpc comoving over the Hubble time at any redshift. Therefore, one should expect to see shocks have reached a few hundred kpc comoving at any redshift, which are about one to a few times the virial \f\u2013 39 \u2013 radius of typical large galaxies, which in turn correspond an overdensity in the vicinity of 10 and are thus in good agreement with simulation results. Some shocks penetrate deeper into the IGM, especially along directions with lower densities and steeper density gradients, as seen in Figure 3; but the amount of mass e\ufb00ected in these low density regions is small, corresponding to the sharp drop of WHIM mass at the low density end (Figure 21). This last point is best corroborated by the distribution of the hot gas at high redshift (z = 3, 5), in the bottom two panels of Figure 21. There we see a small amount of hot gas heated up by GSW shocks is indeed produced in regions of density lower than the mean density and traces a larger amount of WHIM gas that is also produced there. At z \u22641, some comparable, small amount of hot gas is still produced at low density regions. But the vast majority of hot X-ray emitting gas is now residing in the deep potential wells of X-ray clusters of galaxies, when the cluster scale turns nonlinear and collapses. 
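The GSW fraction of Figure 22 follows from differencing the WHIM mass between the runs with and without GSW. A sketch of that bookkeeping is shown below; the gas-cell arrays are randomly generated placeholders, and we assume the two runs share essentially the same total gas mass.

import numpy as np

rng = np.random.default_rng(0)
# Placeholder (mass, temperature) arrays for the GSW run ("M") and the
# no-GSW run ("N"); in practice these come from the simulation snapshots.
mass_M, T_M = rng.random(100000), 10**rng.uniform(3.5, 7.5, 100000)
mass_N, T_N = rng.random(100000), 10**rng.uniform(3.5, 7.5, 100000)

def whim_mass_fraction(mass, T):
    # WHIM defined here as 1e5 K < T < 1e7 K
    sel = (T > 1.0e5) & (T < 1.0e7)
    return mass[sel].sum() / mass.sum()

f_with    = whim_mass_fraction(mass_M, T_M)
f_without = whim_mass_fraction(mass_N, T_N)
# Attribute the excess WHIM in the GSW run to galactic superwinds (Fig. 22)
gsw_fraction = (f_with - f_without) / f_with
print(gsw_fraction)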
0 1 2 3 4 5 6 0 10 20 30 40 50 60 Redshift Metals Mass Fraction (%) T<3x104K T=3x104\u2212105K T=105\u2212107K T>107K in stars Fig. 23.\u2014 shows the evolution of fractions of all metals produced that are contained in each of the \ufb01ve components, as a function of redshift: (1C) T < 3 \u00d7 104 K cold gas, (1W) T = 3 \u00d7 104 \u2212105 K warm gas, (2) WHIM at 107 K> T > 105 K, (3) Hot X-ray emitting gas at T > 107K, (4) \u201cstars\u201d. Having obtained an overview of the thermal history of the IGM, we now turn to the metal story. We will \ufb01rst focus our attention on the WHIM here, because that is where most of the energy and metal exchanges between galaxies and the IGM take place, as shown in Figure 21. Observationally, integrating the observed star formation rate history from high redshift down to z = 2.5 suggests that the vast majority (possibly \u226580%) of cosmic metals at z \u223c2.5 appear to be missing (e.g., Pagel 1999; Pettini 1999). Note that this conclusion is insensitive to the choice of IMF, since both UV light and metals are, to zeroth order, produced by the same massive stars. Metals that have been accounted for in the estimates include those in stars of Lyman break galaxies (LBG), damped Lyman alpha systems (DLAs) and Ly\u03b1 forest, i.e., cold-warm gas and stars. Given the dominant heating of WHIM by \f\u2013 40 \u2013 GSW, one may immediately ask: Could a signi\ufb01cant fraction of metals that accompanies the GSW energy be heated up and in a phase like WHIM that is di\ufb00erent from those where metals have been inventoried? To better address this open question, we further break down the IGM component (1) (T < 105 K cold-warm gas) into two sub-components with (1C) (T < 3\u00d7104K cold gas) and (1W) (T = 3\u00d7104 \u2212105K warm gas). The purpose of this \ufb01ner division is to separate out the cold gas (1C), which can be more appropriately identi\ufb01ed with Ly\u03b1 forest clouds and DLAs. The results are shown in Figure 23. We see that about one third of all metals produced by z = 0 is locked up in stars, decreasing monotonically towards high redshift, dropping to about 10% by z = 5. The fraction of metals in the hot X-ray emitting component is at about 10% level at z = 0, plummeting to about 2% at z = 2 and slowly rising back to about 6% at z = 6. It is likely that the metal fraction in the hot X-ray component at z < 1 be somewhat underestimated given the relatively moderate simulation boxsize. The remaining metals are in the general photoionized Ly\u03b1 forest and the WHIM. At z = 6 the Ly\u03b1 forest (T < 3 \u00d7 104K, open triangles) contains about 43% of all metals, while WHIM (T = 105 \u2212107K, open circles), and warm IGM (T = 3 \u00d7 104 \u2212105K, solid triangles) contain 39% and 7%, respectively. But the fraction of metals in the Ly\u03b1 forest decreases steadily with time and becomes a minor component by z = 0 at < 3%. Most of the metals is seen to be contained in the WHIM at all times below redshift \ufb01ve at 50 \u221260%, peaking at \u223c60% at redshift z \u223c2. In total, the amount of metals contained in the IGM with temperature T > 3 \u00d7 104 constitutes about 2/3 of all metals produced by z = 2.5. Metals in this temperature range were not accounted for in the quoted observational inventory at z = 2.5. Thus, it seems probable that the missing metals problem at z = 2\u22123 can be largely recti\ufb01ed, if one counts the metals in the IGM at T > 3 \u00d7 104K. 
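The metal budget of Figure 23 is a straightforward partition of the metal mass by gas temperature, plus the metals locked in stars. A minimal sketch with illustrative placeholder arrays (masses, temperatures and metal mass fractions are invented values) is:

import numpy as np

rng = np.random.default_rng(1)
m_gas = rng.random(100000)                       # gas-cell masses (arbitrary units)
T_gas = 10**rng.uniform(3.0, 7.5, 100000)        # temperatures [K]
Z_gas = 10**rng.uniform(-4.0, -1.5, 100000)      # metal mass fractions
metals_in_stars = 5.0                            # placeholder stellar metal mass

metal_mass = m_gas * Z_gas
components = {"T<3e4K":        T_gas < 3e4,
              "3e4-1e5K":      (T_gas >= 3e4) & (T_gas < 1e5),
              "WHIM 1e5-1e7K": (T_gas >= 1e5) & (T_gas < 1e7),
              "T>1e7K":        T_gas >= 1e7}

total = metal_mass.sum() + metals_in_stars
for name, sel in components.items():
    print(name, metal_mass[sel].sum() / total)
print("stars", metals_in_stars / total)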
By now we have learned that a large amount of metals could be hidden in the WHIM of temperature 105 \u2212107K spanning a wide range in density. Since the metallicity is a strong function of density, it is still unclear the location of the WHIM that dominates the missing metals. Figure 24 shows the mean metallicity of the three IGM components as a function of overdensity at four di\ufb00erent redshifts. It is evident that within each IGM component there is a wide range in metallicity that is a non-trivial function of overdensity. Let us examine their behaviors in detail. For all three IGM components there is a strong correlation between the mean metallicity and overdensity at overdensity \u03b4 \u226510 and they converge at the highest density. While the metallicity of the cold-warm gas at the high density end remains at about solar at high density, its mean metallicity at the low overdensity drops rapidly with increasing redshift. For example, at \u03b4 = 10, the mean metallicity is (-2, -2.5, -3, -4) in solar units at z = (0, 1, 3, 5). One may notice that all three distributions exhibit a minimum metallicity at some intermediate density range, \u03b4 = 0.1 \u221210 for the cold warm-gas, \u03b4 = 10 for the WHIM and \u03b4 = 1 \u2212100 for hot gas (only at z = 0 \u22121). This is entirely in agreement with the physical picture that we described earlier for the GSW shock propagation through the IGM. \f\u2013 41 \u2013 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 \u22123 \u22122.5 \u22122 \u22121.5 \u22121 \u22120.5 0 0.5 log \u03c1/<\u03c1> [Z/Zsun] z=0 T<105K T=105\u2212107K T>107K \u22123 \u22122 \u22121 0 1 2 3 4 5 6 \u22123 \u22122.5 \u22122 \u22121.5 \u22121 \u22120.5 0 0.5 log \u03c1/<\u03c1> [Z/Zsun] z=1 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 \u22123.5 \u22123 \u22122.5 \u22122 \u22121.5 \u22121 \u22120.5 0 0.5 log \u03c1/<\u03c1> [Z/Zsun] z=3 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 \u22123.5 \u22123 \u22122.5 \u22122 \u22121.5 \u22121 \u22120.5 0 0.5 log \u03c1/<\u03c1> [Z/Zsun] z=5 Fig. 24.\u2014 shows the metallicity of the three IGM components (1) cold-warm gas at T < 105 K, (2) WHIM at 107 K> T > 105 K, (3) Hot X-ray emitting gas at T > 107K as a function of overdensity at four di\ufb00erent redshifts z = 0, 1, 3, 5. Figure 24 con\ufb01rms that the transformation of cold gas to WHIM roughly stops at \u03b4 = 10. Additional metal-enriched gas is further transported along some directions, such as those perpendicular to the \ufb01laments, to very low density regions and enrich these regions to higher metallicity (due to a negligible amount of pre-existing gas there). The behavior of cold and hot components at the low density end can be understood in the same way as the WHIM. The metallicity of hot gas at the centers of clusters of galaxies (at overdensity \u03c1/\u27e8\u03c1\u27e9\u2265500) appear to stay in narrow range around [Z/ Z\u2299] \u223c\u22120.5 over the redshift range z = 0 \u22121, consistent with observations (e.g., Arnaud et al. 1994; Mushotzky et al. 1996; Tamura et al. 1996; Mushotzky & Loewenstein 1997). There is some indication of a still higher metallicity towards higher density regions, which may be in agreement with observations (e.g., Iwasawa et al. 2001). The metallicity of the WHIM at the peak of its mass distribution (\u03c1/\u27e8\u03c1\u27e9\u223c10) at z = 0 is [Z/ Z\u2299] \u223c\u22121, in good agreement with observations (e.g., Danforth & Shull 2005). 
We \ufb01nd that the following formula \ufb01ts well the metallicity of the WHIM as a function of overdensity \u03c1/\u27e8\u03c1\u27e9at the redshift range z = 0 \u22123: [Z/Z\u2299]WHIM = \u22121.2 \u22120.08z + (0.3 + 0.12z1/3)(log \u03c1/\u27e8\u03c1\u27e9\u22121), (6) which are shown as the straight lines in the three panels of Figure 24. \f\u2013 42 \u2013 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 0 1 2 3 4 5 6 7 8 log \u03c1/<\u03c1> log (dMZ/dlog \u03c1) z=0 T<105K T=105\u2212107K T>107K \u22123 \u22122 \u22121 0 1 2 3 4 5 6 0 1 2 3 4 5 6 7 8 log \u03c1/<\u03c1> log (dMZ/dlog \u03c1) z=1 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 0 1 2 3 4 5 6 7 8 log \u03c1/<\u03c1> log (dMZ/dlog \u03c1) z=3 \u22123 \u22122 \u22121 0 1 2 3 4 5 6 0 1 2 3 4 5 6 7 8 log \u03c1/<\u03c1> [Z/Zsun] z=5 Fig. 25.\u2014 shows the distributions of metals mass for the three IGM components (1) cold-warm gas at T < 105 K, (2) WHIM at 107 K> T > 105 K, (3) Hot X-ray emitting gas at T > 107K as a function of overdensity at four di\ufb00erent redshifts z = 0, 1, 3, 5. Note that the area under each curve is proportional to the metals mass contained. We now examine directly the distribution of metal mass as a function of density for each IGM component, shown in Figure 25. A very interesting result is that at high redshift (z = 3, 5) the metals mass in the WHIM tends to peak at a somewhat lower overdensity than that for the overall WHIM mass, thanks to the upturn of metallicity of the WHIM at low overdensity end. Speci\ufb01cally, at z = 3 \u22125 it appears that the metals mass peaks at \u03b4 \u223c2, whereas the total WHIM mass peaks at \u03b4 \u223c10. This trend is reversed at lower redshift; for example, at z = 0 the metals in WHIM is now broadly peaked at \u03b4 \u223c100, while the WHIM mass peaks at \u03b4 \u223c10. This reversal is likely due to accretion of metal-enriched gas onto high density regions during recent formation of large-scale structures. Quantitatively, we \ufb01nd that, at z = 2.5, only about 15% of the metals in warm and WHIM gas is located within virialized regions. About 73% of the metals in warm and WHIM gas resides in the IGM with \u03b4 = 1 \u2212100, with the remaining 12% in underdense regions. This con\ufb01rms an earlier expectation that some of the missing metals may be in the hot halos of galaxies (e.g., \f\u2013 43 \u2013 Pettini 1999; Ferrara et al. 2005); but that accounts for only a small fraction of the total missing metals. Combining with our earlier statements on missing metals at z = 2 \u22123, our \ufb01nding on missing metals is that most of the missing metals are in the warm and WHIM gas with moderate overdensity broadly distributed between \u03b4 \u223c1 \u221210. 0 1 2 3 4 5 6 \u22124 \u22123 \u22122 \u22121 0 Redshift [Z/Zsun] (25%, 50%, 75%) \u03c1/<\u03c1>=1 \u03c1/<\u03c1>=10 \u03c1/<\u03c1>=100 \u03c1/<\u03c1>=1000 Fig. 26.\u2014 shows metallicity evolution as a function of redshift at four \ufb01xed densities, \u03c1/\u27e8\u03c1\u27e9= 1, 10, 100, 1000. For each density there are three curves, corresponding to (25%, 50%, 75%) percentiles. The open squares and open circles are the observed median metallicity evolution at overdensity equal to 10 and 1, respectively (Schaye et al. 2003). Note that the metallicity [Z/ Z\u2299] = \u22124 is a \ufb02oor value. Finally, our attention is turned to the cold-warm component, which displays a dramatic trough at the mean density. Physically, it suggests that GSW does not a\ufb00ect bulk of the IGM. 
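For convenience, the WHIM metallicity\u2013overdensity fit of Eq. (6) can be packaged as a small function; the example evaluates it at the z = 0 WHIM mass peak (overdensity \u223c10). The function name is ours.

import numpy as np

def whim_metallicity(log_overdensity, z):
    """[Z/Zsun] of the WHIM from the fitting formula of Eq. (6), for z ~ 0-3."""
    return -1.2 - 0.08 * z + (0.3 + 0.12 * z**(1.0 / 3.0)) * (log_overdensity - 1.0)

# At log(rho/<rho>) = 1 and z = 0 this gives -1.2, broadly consistent with the
# [Z/Zsun] ~ -1 quoted above for the peak of the WHIM mass distribution.
print(whim_metallicity(np.log10(10.0), 0.0))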
Comparisons with observations are useful here to shed light on this dramatic behavior. We note that the mean metallicity at overdensity \u03c1/\u27e8\u03c1\u27e9 < 10 drops quickly below [Z/ Z\u2299] = \u22123. The typical Ly\u03b1 forest clouds of column density 10^13 \u2212 10^14 cm^\u22122 arise in these moderate density regions. Our simulations suggest that most of these clouds are not expected to be enriched to a level higher than [Z/ Z\u2299] = \u22123, which appears to be in agreement with direct metallicity measurements of Ly\u03b1 forest clouds (e.g., Tytler & Fan 1994; Lu et al. 1998). However, our results are at variance with recent measurements of metallicity in these moderate density regions using the POD method, in the sense that the observed metallicity seems to far exceed what we obtain in our simulations. To illustrate the disagreement we cast the information presented in Figures 21 and 24 into a different form in Figure 26, where we show the evolution of metallicity as a function of redshift at four fixed densities, \u03c1/\u27e8\u03c1\u27e9 = 1, 10, 100, 1000, for the ease of comparison. If one compares the middle solid square curve (the median metallicity at overdensity 10 from our simulations) with the open squares curve (the median metallicity at overdensity 10 from observations, Schaye et al. 2003), and the middle solid dots curve (the median metallicity at overdensity 1 from our simulations) with the open circles curve (the median metallicity at overdensity 1 from observations, Schaye et al. 2003), the disagreement is clear and dramatic. We predict that the metallicity in regions with overdensity less than about 10 generally increases quite rapidly with decreasing redshift, whereas the observationally inferred trend goes in the opposite direction with a mild rate of change. Is our simulation incomplete or are the observations misinterpreted? Recall from Figure 9 that the typical overdensity for low column C IV lines is about 10, comparable to that of Ly\u03b1 forest clouds. But that is a mere coincidence: the two types of absorbers are generally not co-located in physical space. If we go back to Figures 4, 5, 6 and study the temperature (second rows) and metallicity (third row), we see there is a strong spatial correlation between temperature and metallicity; regions where a significant amount of C IV resides tend to have an elevated temperature that exceeds 2\u00d710^4 K, whereas the metallicity in lower temperature regions, where HI resides in abundance to give rise to Ly\u03b1 forest clouds, seems extremely low. As we noted earlier, the regions with elevated temperature and C IV lines have a width that corresponds to one to several hundred km/s. Interestingly, these regions also typically have peculiar velocities of several hundred km/s (fourth row from top of Figures 4, 5, 6). As a result, there should be some overlap in velocity space between some C IV lines and Ly\u03b1 forest lines, even when they are significantly displaced in physical space. This overlap may \u201cdiffuse\u201d, in velocity space, some of the metals in regions that produce C IV lines into the Ly\u03b1 forest lines, causing an apparent, moderate metallicity level in the Ly\u03b1 forest, as inferred by Schaye et al. (2003), when a method such as POD is employed.
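One way to test this velocity-space "diffusion" picture (the check carried out below around Equation 7 and Figure 27) is to cross-correlate the HI and metal-ion flux fields along the same sight lines with and without peculiar velocities. The following is a minimal illustrative sketch, not the authors' code; it assumes mean-subtracted flux-contrast arrays on a common velocity grid and symmetrizes over positive and negative lags.

```python
import numpy as np

def xcorr_flux(delta_hi, delta_ion, max_lag):
    """Symmetrized cross-correlation xi(|dv|) of two flux-contrast arrays
    sampled on a common velocity grid along one sight line."""
    xi = [np.mean(delta_hi * delta_ion)]
    for lag in range(1, max_lag + 1):
        fwd = np.mean(delta_hi[:-lag] * delta_ion[lag:])
        bwd = np.mean(delta_hi[lag:] * delta_ion[:-lag])
        xi.append(0.5 * (fwd + bwd))
    return np.array(xi)

def correlation_boost(hi_pec, ion_pec, hi_nopec, ion_nopec, max_lag=50):
    """Ratio-minus-one comparison of the HI x ion cross-correlation with and
    without peculiar velocities; positive values mean peculiar velocities
    strengthen the apparent association of the ion with the Lyman-alpha forest."""
    xi_p = xcorr_flux(hi_pec, ion_pec, max_lag)
    xi_0 = xcorr_flux(hi_nopec, ion_nopec, max_lag)
    return xi_p / xi_0 - 1.0
```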
A closer look at the left panel of Figure 10 indicates that typical C IV absorbers show a decrease of metallicity with decreasing redshift in the range 2 \u2212 5: roughly [Z/ Z\u2299] = [\u22122.0, \u22121.5], [\u22122.3, \u22121.4], [\u22122.6, \u22121.5] at z = 5, 4, 2.6. This is in accord with the observed weak trend of increasing metallicity with increasing redshift, which otherwise is extremely difficult to understand in the context of the standard cosmological model. Needless to say, the O VI lines located in regions that are spatially close to C IV lines will also \u201cdiffuse\u201d into the Ly\u03b1 forest in velocity space. The fact that O VI lines tend to have a higher metallicity, about [Z/ Z\u2299] = 0.2 to 0.4, than the C IV lines over the redshift range of z \u223c 2 \u2212 4 (comparing the left and right panels of Figure 10), and that there are more O VI lines than C IV lines (comparing the left and right panels of Figure 10), would suggest that one may expect the apparent oxygen abundance in the Ly\u03b1 forest inferred from POD to be higher than that of C IV lines. This is indeed the case: Aguirre et al. (2008) found that [O/C] = $0.66^{+0.06}_{-0.2}$. We argue that this provides independent, supporting evidence for our explanation, which is self-consistent and physically plausible. Alternatively, the IGM may be enriched to the observed level by first generation, Pop III galaxies that are not properly captured in our simulations. To further test our \u201cdiffusion\u201d hypothesis, we have computed the cross-correlation between Ly\u03b1, C IV and O VI spectra and taken the mean along all lines of sight at z = 2.6 for two cases: run \u201cM\u201d of our simulations with and without the effect of peculiar velocities taken into account. We present in Figure 27 the following function: $f(\Delta v) \equiv \xi_{p,{\rm HI}\times{\rm ion}}(\Delta v)/\xi_{0,{\rm HI}\times{\rm ion}}(\Delta v) - 1$ (7) where $\xi_{p,{\rm HI}\times{\rm ion}}(v)$ is the cross-correlation function for the spectrum of HI and the corresponding ion, averaged over all lines of sight and symmetrized for positive and negative velocity lags at z = 2.6, and $\xi_{0,{\rm HI}\times{\rm ion}}(v)$ is the same function computed in the case where there are no peculiar velocities. Figure 27 shows that in the case of no peculiar velocities, the cross-correlation between Ly\u03b1 and C IV, and Ly\u03b1 and O VI, is weaker than in the case where peculiar velocities are considered. This is compelling evidence that peculiar velocity effects could artificially diffuse metals into the Ly\u03b1 forest. Fig. 27.\u2014 Comparison of the cross-correlation functions of C IV and O VI with and without peculiar velocities. The function plotted is f(\u2206v), defined in the text. Values greater than 0 imply there is a stronger correlation between the ion and the Ly\u03b1 spectrum in the case where peculiar velocities are taken into account. 4." + }, + { + "url": "http://arxiv.org/abs/0907.0735v3", + "title": "Probing the Epoch of Reionization with the Lyman Alpha Forest at z~4-5", + "abstract": "The inhomogeneous cosmological reionization process leaves tangible imprints\nin the intergalactic medium down to z=4-5.
The Lyman-alpha forest flux power\nspectrum provides a potentially powerful probe of the epoch of reionization.\nWith the existing SDSS I/II quasar sample we show that two cosmological\nreionization scenarios, one completing reionization at z=6 and the other at\nz=9, can be distinguished at ~7 sigma level by utilizing Lyman alpha forest\nabsorption spectra at z=4.5+-0.5, in the absence of other physical processes\nthat may also affect the Lyman alpha flux power spectrum. The redshift range\nz=4-5 may provide the best window, because there is still enough transmitted\nflux and quasars to measure precise statistics of the flux fluctuations, and\nthe IGM still retains a significant amount of memory of reionization.", + "authors": "Renyue Cen, Patrick McDonald, Hy Trac, Abraham Loeb", + "published": "2009-07-04", + "updated": "2009-07-09", + "primary_cat": "astro-ph.CO", + "cats": [ + "astro-ph.CO", + "astro-ph.GA" + ], + "main_content": "INTRODUCTION The history of cosmological reionization is presently primarily constrained by the cosmic microwave background observations of WMAP (Wilkinson Microwave Anisotropy Probe) (Dunkley et al. 2009) and the SDSS (Sloan Digital Sky Survey) quasar absorption spectra. The former gives an integral constraint, strongly suggesting that cosmological reionization may well be underway at z \u223c12, while the latter provides a solid anchor point at z \u223c6 when the universe became largely transparent to Lyman limit photons (e.g., Fan et al. 2001; Becker et al. 2001; Cen & McDonald 2002; Fan et al. 2006b). At z \u22656.3 the lower bound on the neutral hydrogen fraction, x, of the IGM provided by SDSS observations is, however, fairly loose at x \u2265 0.01. Thus, exactly when most of the neutral hydrogen became reionized is yet unknown and there are many possible scenarios that could meet the current observational constraints (e.g., Barkana & Loeb 2001; Cen 2003; Haiman & Holder 2003; Fan et al. 2006a; Wyithe & Cen 2007; Becker et al. 2007). The process of inhomogeneous cosmological reionization leaves quanti\ufb01able and signi\ufb01cant imprints on the thermal evolution of the IGM (Trac et al. 2008). In this Letter, we show that the Ly\u03b1 forest \ufb02ux spectrum at moderate redshift z = 4.5 \u00b1 0.5 sensitively depends on and hence provides a very powerful probe of the epoch of reionization. 2. REIONIZATION MODELS We use a hybrid code to accurately compute the reionization process, which consists of a high-resolution Nbody code, a shock-capturing TVD hydro code and a raytracing radiative transfer (of Lyman limit photons) code. The reader is referred to Trac et al. (2008) for more details. We use the best \ufb01t WMAP 5-year cosmological parameters: \u2126m = 0.28, \u2126\u039b = 0.72, \u2126b = 0.046, h = 0.70, \u03c38 = 0.82, and ns = 0.96 (Komatsu et al. 2009). We use 29 billion dark matter particles on an e\ufb00ective mesh with 11, 5203 cells in a comoving box of 100 h\u22121Mpc, yielding a particle mass resolution of 2.68 \u00d7 106 h\u22121M\u2299allowing us to resolve all atomic cooling dark matter halos. A total of N = 15363 gas cells of size 65kpc/h are used and we trace \ufb01ve frequency bins at > 13.6 eV with the ray-tracing code. The star formation rate is controlled by the halo formation history. We adjust the ionizing photon escape fraction to arrive at two models, where reionization is completed early (z \u223c9) and late (z \u223c6), respectively; note that the halo formation histories in the two models are identical. 3. 
RESULTS Previous studies (e.g., Furlanetto et al. 2004; Iliev et al. 2006; Lee et al. 2008) have shown that the reionization process proceeds in an inside-out fashion, where regions around high density peaks get reionized first. H II regions initially surround isolated galaxies that formed in high density peaks. With time these H II regions expand and lower density (void) regions are eventually engulfed by the expanding H II regions stemming from high density peaks. Consequently, the redshift of reionization of each individual spatial point, zreion, is highly correlated with the underlying large-scale density field, with the positive correlation extending down to scales \u223c 1 h^{-1} Mpc, as we have shown earlier (Trac et al. 2008). Once an expanding region is photo-ionized and photo-heated, it would cool subsequently due to adiabatic expansion and other cooling processes (primarily Compton cooling at high redshift), countered by photoheating of residual recombining hydrogen atoms (on the time scale of recombination) (e.g., Theuns et al. 2002; Hui & Haiman 2003). As a result, the strong correlation between zreion and the underlying large-scale density is manifested in a strong anti-correlation between the temperature and the underlying large-scale density field. Specifically, different regions of the same low densities \u03b4 \u2264 a few (without large-scale smoothing in this case) would display a large, long-range-correlated dispersion in temperature, immediately following the completion of reionization (e.g., Trac et al. 2008). (Note that virialized regions are not affected and do not retain any information of reionization in this regard.) Both the anti-correlation between temperature and the underlying large-scale density and the consequent temperature dispersion at a fixed density weaken as time progresses, and the temperature-density relation asymptotically approaches a so-called equation-of-state (EoS), a one-to-one mapping from IGM density to temperature (Hui & Gnedin 1997), with $T = T_0(\rho/\rho_0)^{0.62}$ in the late-time limit. However, at the redshift range z = 4.5 \u00b1 0.5, the IGM has not had enough time to have completely relaxed to this state prescribed by the EoS, such that quantitatively significant deviations from a deterministic EoS exist, if the universe was reionized, say, at zri \u223c 6 \u2212 8. The deviations from a simple temperature-density relation are larger for smaller zri at a given observed redshift. Fig. 1.\u2014 Top panels show the log of the ratio of gas temperature from the simulation to that prescribed by a fixed EoS at z = 4, for the early (left) and late (right) reionization model, respectively. We use the EoS formula $T = T_0(\rho/\rho_0)^{0.62}$, where T0 is the temperature at mean density \u03c10 in each model. The slice shown has a size (100 h^{-1} Mpc)^2 with a thickness equal to two hydro cells (130 h^{-1} kpc). The distribution of flux transmission, F(early) = exp(\u2212\u03c4(early)), for the early reionization model is shown in the bottom left. The flux difference between the two models, F(late) \u2212 F(early) = exp(\u2212\u03c4(late)) \u2212 exp(\u2212\u03c4(early)), is shown in the bottom right panel. In Fig.
1 we show the log of the ratio of gas temperature from the simulation to that prescribed by the asymptotic EoS at z = 4 in a slice of size (100 h^{-1} Mpc)^2 with a thickness equal to two hydro cells (130 h^{-1} kpc), for the early (top left panel) and late (top right panel) reionization model, respectively. The fields have been smoothed on cells of comoving length 130 h^{-1} kpc. The small reddish/yellowish regions seen in the top left panel correspond to virialized regions, for which the plotted ratio does not contain useful information. But these regions show clearly the location of ionizing sources. We see striking differences in temperature distributions between the two reionization models with respect to their respective asymptotic EoS values. In the early reionization model (top left panel) most of the regions have blue color (i.e., the ratio equal to \u223c1) and appear to have mostly relaxed to the state predicted by the asymptotic EoS, while some low density regions in the voids still display yellowish color with a temperature that is higher than that of the asymptotic EoS by 30 \u2212 50%. On the other hand, in the late reionization simulation (top right panel), while regions just outside the shock-heated filaments and halos (bluish color) have largely relaxed to the asymptotic EoS, regions of comparable local densities in the voids are much hotter than the asymptotic EoS, by a factor of 1.5 \u2212 2.5. Because the neutral hydrogen fraction in regions of moderate density is determined by the balance between the photoionization rate and the recombination rate, the latter of which is a function of temperature, the two different temperature distributions in the two reionization models result in different large-scale neutral hydrogen distributions. In the bottom left panel of Fig. 1 we show the expected flux transmission, F(early) = exp(\u2212\u03c4(early)), for the early reionization model, where \u03c4(early) is the Ly\u03b1 optical depth computed based on the distribution of neutral hydrogen density, gas peculiar velocity and temperature at z = 4 in the early reionization model. In computing the neutral hydrogen fraction we have used a uniform background radiation field with its amplitude adjusted such that both models yield the same mean transmitted flux of \u27e8F\u27e9 = 0.43 at z = 4, as observed (Fan et al. 2006b). In the bottom right panel the flux difference between the two models, F(late) \u2212 F(early), is shown, where it is clearly seen that the transmitted Ly\u03b1 flux is significantly affected by the temperature difference at z = 4, resulting in a fractional difference in the transmitted flux in the voids between the two models of \u223c15% (blue regions). Specifically, there is more transmitted flux in the void regions in the late reionization model, compensated by comparably reduced transmitted flux in high density regions. It is noted that, at z \u2265 4, the majority of transmitted Ly\u03b1 flux comes from the lowest density regions of \u03b4 \u2264 a few. Fig. 2 shows the ratio of the flux power spectrum in the late reionization model to that in the early reionization model at z = 4 (black solid) and z = 5 (black dashed).
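The quantity compared in Fig. 2 is the ratio of 1D flux power spectra of the two models. A minimal sketch of that measurement is given below; the normalization convention is illustrative, and the input arrays (flux_late, flux_early) are placeholders for sight-line grids of transmitted flux from each simulation.

```python
import numpy as np

def flux_power_1d(flux, dv):
    """Average 1D power spectrum P_F(k) of the flux contrast along sight lines.
    flux : array of shape (n_los, n_pix) with F = exp(-tau); dv : pixel width in km/s."""
    delta = flux / flux.mean() - 1.0
    dk = np.fft.rfft(delta, axis=1)
    n = flux.shape[1]
    power = (np.abs(dk) ** 2).mean(axis=0) * dv / n
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv)     # k in (km/s)^-1
    return k, power

# ratio plotted in Fig. 2 (schematically):
# k, p_late  = flux_power_1d(flux_late, dv)
# _, p_early = flux_power_1d(flux_early, dv)
# ratio = p_late / p_early
```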
It appears that the large-scale anti-correlation between density and deviations from a single EoS in the late reionization model leads to a significant amount of extra power in the flux spectrum (specifically, relatively high temperatures in late-reionizing under-dense regions lead them to produce even less absorption than they otherwise would). The difference between the flux power spectra of the two reionization models increases with scale, reaching 20% at k = 0.001 (km/s)^{-1} at z = 4; the difference is still larger at z = 5 (\u223c30%), as expected, due to the still larger difference in the temperature and hence flux transmission between the two reionization models. The black error bars indicate the statistical errors expected with the full SDSS I/II sample (completed, but not yet fully analyzed). Fig. 2.\u2014 Black solid and dashed curves are the ratio of the flux power spectrum in the late reionization model to that in the early reionization model at z = 4 and z = 5, respectively. Also shown as the two green curves are the corresponding ratios produced by replacing the real temperature in each simulation by that prescribed by the EoS given density (the same EoS in both simulations). The black error bars are the error one can expect from the full SDSS I/II sample plus existing high resolution data. The error bars will be approximately uncorrelated. A formal analysis of the 16 data points indicates that the two reionization models can be differentiated at the 7\u03c3 level. With z \u223c 4 SDSS I/II data plus existing high resolution data, one can distinguish formally between these two reionization models at the 7\u03c3 level. However, we note that the statistical differences between the two models are unmarginalized, i.e., not taking into account other physical effects that affect the Ly\u03b1 flux power spectrum determination (e.g., McDonald et al. 2005). Therefore, the quoted statistical significance only serves as an indication of the potential power of this statistic. For comparison, the corresponding flux power spectrum ratios at z = 4 and z = 5, if both models follow the same EoS (given density), are shown (green curves) in Fig. 2. In this case, aside from the relatively small difference on small scales due to cumulative dynamical effects on the gas density caused by the difference in the gas pressure histories, the two models have identical flux power spectra on large scales. This clearly demonstrates that the large difference in the flux power spectra between the two reionization models (black curves in Fig. 2) is a result of large differences in the contemporaneous temperature distributions. 4. DISCUSSION This effect of inhomogeneous reionization on the flux power spectrum was explored earlier by Lai et al. (2006) at z = 3, based on a semi-analytic model. Their focus is on z = 3, and they found that, on large scales, k \u223c 0.001 (km/s)^{-1}, temperature fluctuations lead to an increase in the z \u223c 3 flux power spectrum by at most 10%. Our focus here is at higher redshifts z = 4 \u2212 5 and the effects, not surprisingly, are larger and potentially more discriminating. A fluctuating radiation background, produced largely by radiation from sparsely distributed quasars but also by galaxies, can affect the flux power spectrum (Meiksin & White 2004; Croft 2004; McDonald et al. 2005).
Larger \ufb02uctuations in the radiation background give rise to larger amplitudes of the \ufb02ux power spectrum at large scales (e.g. McDonald et al. 2005, Figures 6,7 therein). This enhancement of the \ufb02ux power spectrum on large scales due to a \ufb02uctuating radiation background will be in addition to what is caused by the gas temperature \ufb02uctuations shown here, if QSOs were dominant. The radiation contribution from stars may be more dominant at the redshift range of concern here (e.g., Faucher-Giguere et al. 2009). Star formation is known to be biased and hence higher density regions, on average, tend to have higher radiation \ufb01eld than lower density regions. Thus, the two e\ufb00ects due to a \ufb02uctuating radiation background and an inhomogeneous reionization process may be partially degenerate or have a tendency to cancel each other\u2019s contribution, although there is a possibility that the radiation \ufb02uctuations may be relatively modest (e.g., Mesinger & Furlanetto 2009). A more careful modeling of the contribution from quasars as well as radiation sinks (such as Lyman limit systems) is required in a comprehensive modeling. The purpose of this Letter is to demonstrate that, if the e\ufb00ects on the Ly\u03b1 \ufb02ux power spectrum determination due to the epoch of reionization were the only relevant ones, then a precise measure of the \ufb02ux power spectrum with the full SDSS I/II data will be able to place a very tight constraint on the epoch of reionization. However, a detailed comparison between models and SDSS I/II observations requires a full analysis of all astrophysical/cosmological processes that may a\ufb00ect the determination of the \ufb02ux power spectrum and some of them may be degenerate to varying degrees (McDonald et al. 2005), including \ufb02uctuating radiation \ufb01eld, damped Ly\u03b1 systems, galaxy formation feedback, initial photoheating temperature (i.e., related to IMF of high redshift galaxies), X-ray heating, He II reionization, among others, before its statistical potential can be precisely marginalized and quanti\ufb01ed. We will perform such an analysis in a future study. 5." + } + ], + "Vahe Petrosian": [ + { + "url": "http://arxiv.org/abs/1205.2136v1", + "title": "Stochastic Acceleration by Turbulence", + "abstract": "The subject of this paper is stochastic acceleration by plasma turbulence, a\nprocess akin to the original model proposed by Fermi. We review the relative\nmerits of different acceleration models, in particular the so called first\norder Fermi acceleration by shocks and second order Fermi by stochastic\nprocesses, and point out that plasma waves or turbulence play an important role\nin all mechanisms of acceleration. Thus, stochastic acceleration by turbulence\nis active in most situations. We also show that it is the most efficient\nmechanism of acceleration of relatively cool non relativistic thermal\nbackground plasma particles. In addition, it can preferentially accelerate\nelectrons relative to protons as is needed in many astrophysical radiating\nsources, where usually there are no indications of presence of shocks. We also\npoint out that a hybrid acceleration mechanism consisting of initial\nacceleration by turbulence of background particles followed by a second stage\nacceleration by a shock has many attractive features. 
It is demonstrated that\nthe above scenarios can account for many signatures of the accelerated\nelectrons, protons and other ions, in particular $^3$He and $^4$He, seen\ndirectly as Solar Energetic Particles and through the radiation they produce in\nsolar flares.", + "authors": "Vahe Petrosian", + "published": "2012-05-10", + "updated": "2012-05-10", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE", + "astro-ph.SR" + ], + "main_content": "Introduction The presence of energetic particles in the universe has been know for over a century as cosmic rays (CRs) and for a comparable time as the agents producing non-thermal electromagnetic radiation from long wave radio to gamma-rays. In spite of accumulation of considerable data on spectral and other characteristics of these particles, the exact mechanisms of their production remain controversial. Although the possible scenarios of acceleration have been narrowed down, there are many uncertainties about the details of individual mechanisms. Nowadays the agents commonly used for acceleration of particles in astrophysical plasmas can be classi\ufb01ed in three categories, namely static electric \ufb01elds (parallel to magnetic \ufb01elds), shocks and turbulence. As we will try to show there are several lines of argument indicating that turbulence plays an important role in all these scenarios. In addition, because of the large values of ordinary and magnetic Reynolds numbers, most \ufb02ows in \f\u2013 2 \u2013 astrophysical plasmas are expected to give rise to turbulence. The generation and evolution (cascade and damping) of plasma turbulence in astrophysical sources is an important aspect of particle acceleration that will be dealt with in other papers of this proceedings. Similarly, electric \ufb01eld and shock accelerations will be discussed by other authors in this proceedings as well. In this paper we discuss particle acceleration by turbulence or plasma waves which is commonly referred to as Stochastic Acceleration (SA for short). Fermi (1949) was the \ufb01rst author to propose SA as a model for production of CRs, whereby charged particles of velocity v, scattering with a rate Dsc in random collisions with moving magnetized clouds of average speed u, gain energy at a rate Dsc(u/v)2 mainly because energy gaining head on collisions are more numerous than energy losing trailing ones. Nowadays this class of models are often called Second Order Fermi process. Soon after, several authors proposed plasma waves or magnetohydrodynamics (MHD) turbulence as the scattering agents (see e.g. Sturrock 1966; Kulsrud & Ferarrri 1971, and references therein).1 In a later paper Fermi (1954) proposed acceleration of particles scattering back and forth between two ends of a contracting magnetic bottle, where the particles gain energy at every scattering so that the acceleration rate is equal to Dsc(u/v), that is linearly with the velocity ratio, hence the name First Order Fermi. For particle velocities v \u226bu this is a much faster rate. It is also well known that a particle crossing a convergent \ufb02ow, e.g. a shock with velocity ush, gains momentum \u03b4p \u223cp(ush/v). For this reason acceleration by shocks is also referred to as \ufb01rst order Fermi acceleration. However, as described below, this a somewhat of a misnomer because the actual rate of acceleration is proportional to the square of the velocity ratio. 
Ever since the late 1970\u2019s, when several authors (Krymsky 1977; Axford 1978; Bell 1978; Blandford & Ostriker 1978) demonstrated that a simple version of this process can reproduce the observed CR spectrum, shock acceleration has been the most commonly invoked process. However, as we will discuss below, in the last couple of decades there has been a renewed interest in SA by turbulence, especially for electrons in radiation producing astrophysical sources from Solar flares to clusters of galaxies. In the next section we review the relative merits of different acceleration models, and in \u00a73 we give the general formalism used in the SA and other models. In \u00a74 we will describe application of the SA model to solar flares. A brief summary and conclusions are presented in \u00a75. 1For a brief history and more references to other works see Melrose (2009). 2. Acceleration Models and Turbulence We are interested in comparing the various ways of production of energetic particles, starting from a relatively \u201ccool\u201d background plasma usually having a thermal or Maxwellian distribution with density n and temperature T.2 Clearly this must be the first stage of any acceleration process, where the Coulomb collisions, with mean free path $l_{\rm Coul} = 9\times10^7\ {\rm cm}\ (T/10^7\,{\rm K})^2(10^{10}\,{\rm cm^{-3}}/n)$, may be an important, if not the dominant, energy loss and particle scattering process. In fact a thermal spectrum requires a high rate for these collisions involving both electrons and protons (and heavier ions).3 Thus, the first hurdle that any acceleration mechanism must overcome is this loss process. Particles energized at this stage may escape the acceleration region with an energy dependent escape time Tesc(E) favoring escape of the higher energy particles, thus resulting in a population of nonthermal particles, which are observed directly or through the radiation they produce. The escaping particles may be re-accelerated by other mechanisms, possibly in collisionless surroundings with size L < lCoul. On the other hand, in a closed system, i.e. when Tesc is larger than the dynamical timescale of the system, then, in general, it is difficult to produce a substantial nonthermal tail. As shown in Petrosian & East (2008), irrespective of the rate or energy dependence of the acceleration process, a substantial part, if not the bulk, of the energy input goes into heating of the plasma rather than producing a nonthermal electron tail.4 The most common acceleration mechanisms used for analysis of astrophysical sources are the following: 2.1. Electric Field Acceleration Static electric fields E parallel to magnetic fields can accelerate a particle with charge e and velocity v = c\u03b2 with the energy gain rate $\dot E = eEv_\parallel$. If we define the Dreicer field $E_D \equiv kT/(e\,l_{\rm Coul}) \propto n/T$ (\u223c10^{-5} V/cm for solar flare conditions), which results in an energy gain of \u223ckT per mean free path, then the energy change over a distance L is given by $\Delta E/(m_e c^2) = (E/E_D)\,(m_e c^2/kT)\,nL/(2.5\times10^{23}\ {\rm cm^{-2}})$. (1)
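As a quick, illustrative numerical evaluation of Equation (1) (a sketch under the stated solar-flare-like conditions, with kT converted to keV for convenience):

```python
# Evaluate Eq. (1): energy gained (in units of m_e c^2) over a column depth N = n*L
# for a field expressed in units of the Dreicer field E_D.
ME_C2_KEV = 511.0                        # electron rest energy [keV]

def delta_E_over_mec2(E_over_ED, N_cm2, T_K):
    kT_keV = 8.617e-8 * T_K              # Boltzmann constant in keV/K
    return E_over_ED * (ME_C2_KEV / kT_keV) * N_cm2 / 2.5e23

# Solar-flare-like coronal loop: T ~ 1e7 K, N ~ 1e20 cm^-2, field as large as E_D
print(delta_E_over_mec2(1.0, 1e20, 1e7))       # ~0.24, i.e. ~100 keV; weaker sub-Dreicer
                                               # fields give the tens of keV quoted below
print(2.5e23 * (8.617e-8 * 1e7) / ME_C2_KEV)   # ~4e20 cm^-2: column depth needed to reach
                                               # ~m_e c^2 for E = E_D, matching the text
```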
\f\u2013 4 \u2013 Thus, sub-Dreicer \ufb01elds can accelerate electrons to relativistic energies only if they extend over large column depths or N = nL > 4 \u00d7 1020 cm\u22122 (T/107 K). Since the rate of energy gain per unit length is independent of particle mass (because as de\ufb01ned ED \u221dm2 ec4), for acceleration of proton to relativistic regime (E \u223cmpc2) we need a column depth of \u223c 1024 cm\u22122 (T/107 K). For example, in solar \ufb02are coronal loops, with T \u223c107 K and column depth \u223c1020 cm\u22122, particles can be accelerated up to only 10\u2019s of keV, far below the required 10\u2019s of MeV electrons or > GeV protons. Column depths and temperatures in astrophysical sources (e.g. N \u223c1022 cm\u22122 and T \u223c108 K for intra cluster medium; N \u223c1021 cm\u22122 and T \u223c104 K in typical galactic HII regions, etc) are also not su\ufb03cient for acceleration of electrons or protons to relativistic energies. Super-Dreicer \ufb01elds can accelerate particles to higher energies but since now the acceleration rate is higher than Coulomb energy loss rate this can lead to runaway particles and an unstable bump-in-tail distribution which will give rise to turbulence. In addition, it is di\ufb03cult to sustain large scale electric \ufb01elds in a highly conducting ionized plasma unless the resistivity is anomalously high (Tsuneta 1985; Holman 1985). After the pioneering work by Speiser (1970) it was assumed that the electric \ufb01elds induced by reconnection are the agents of acceleration (see also Litvinenko 1996, 2003; La Rosa et al. 2006) but recent particle-in-cell (PIC) and MHD simulations of reconnection (Drake et al. 2006; see also Cassak et al. 2006; Zenitani & Hoshino 2005) present a more complicated picture and show that turbulence may be an important ingredient. Thus, electric \ufb01elds cannot be the sole agent of acceleration, but they may produce turbulence, which can accelerate particles and possibly enhance the reconnection rate, as suggested by Lazarian & Vishniac (1999). 2.2. Fermi Acceleration As mentioned above there are two types of Fermi acceleration, \ufb01rst order in a shock and second order in a turbulent plasma. As also indicated above the former has been invoked often in astrophysical situations primarily because it is believed to be a faster mechanism of acceleration and the environment surrounding a supernova shock seems well suited for production of CR protons. However, there are many shortcomings in the original elegant models developed in late 1970\u2019s for this mechanism. Since then a great deal of work has gone into the the development of this model and in addressing its shortcomings. These include injection of seed particles, losses (specially for electrons), and escape and nonlinear e\ufb00ects (see e.g. Drury 1983; Blandford & Eichler 1987; Jones & Ellison 1991; Malkov & Drury 2001; Diamond & Markov 2007; Beresnyak et al. 2009). More importantly, a shock by itself cannot accelerate particles and requires scattering agents that can cause the repeated passages across the shock front, especially for a parallel shock with magnetic \ufb01eld parallel to \ufb02ow \f\u2013 5 \u2013 velocity. The most likely agent is turbulence and the rate of acceleration is governed again by the scattering rate by turbulence, and the acceleration rate, \u02d9 E/E \u223c\u02d9 p/p \u221dDsc(ush/v)2 (see below), is no longer \ufb01rst order in the velocity ratio. 
Although there are indications that magnetic \ufb01eld and turbulence may be generated by the upstream accelerated particles (see e.g. Bell 1978), many details of the microphysics remain unsolved. Two stream (or another plasma) instability is a possible mechanism for generation of turbulence but there are indication that this may be suppressed in a turbulent medium (Yan & Lazarian 2002). Exact determination of the scattering rate requires knowledge of the intensity and spectrum of the turbulence which determine Dsc but are essentially unknown. Usually Bohm di\ufb00usion is assumed.5 Second order Fermi or SA process, on the other hand, occurs always at some level because turbulence in addition to scattering can also accelerate particles directly. As is well known, relativistic particles in weakly magnetized plasma (e.g. that in a supernova shock) are scattered at a faster rate than the rate of acceleration by turbulence, so that acceleration by a shock is deemed to be faster. This, however, is not always the case, specially for acceleration of electrons in radiating sources. As we will discuss in more detail below, Pryadko & Petrosian (1997; PP97) showed that at low energies and/or in strongly magnetized plasmas the acceleration or energy di\ufb00usion rate by turbulence exceeds the scattering rate and therefore exceeds the acceleration rate by a shock. Thus, under these circumstances the main objection of slowness of SA does not apply. For example, in the case of solar \ufb02ares Hamilton & Petrosian (1992) show that SA by a modest level of whistler waves can accelerate the background particles to high energies within the desired time (see also Miller and Reames 1996). We can conclude then that irrespective of which process of acceleration is at work, turbulence always has a major role. Moreover, as we will see in the next section, in practice, i.e. mathematically, there is little di\ufb00erence between \ufb01rst and second order Fermi acceleration (see e.g. Jones 1994). There are, of course, other acceleration processes similar to those discussed above on the microphysics level but phenomenologically di\ufb00erent. One such process is that proposed by Drake et al. (2006) occurring via the interactions of particles with the \u201dislands\u201d produced in their PIC simulations. Another is the process proposed by Fisk & Glockler (2010) for acceleration in the solar wind. These will be discussed in other sections of these proceedings. 5First order Fermi acceleration may also occur in the converging \ufb02ow in the reconnection region (de Gouvia del Pinto & Lazarian 2005; see also Lazarian\u2019s contribution here. \f\u2013 6 \u2013 3. BASIC EQUATIONS Interactions of particles with turbulence, which is a common ingredient of all acceleration processes, are dominated by many weak rather than few strong scattering events. In this case the Fokker-Planck formalism provides the best description of the particle kinetics which can have di\ufb00erent forms depending on circumstances. The basic equations here are well known and have been described in many papers in the past. In this section we brie\ufb02y review these equations and point out two important features of plasma wave-particle interactions not broadly known or acknowledged. These features have important e\ufb00ects on the relative importance of \ufb01rst and second order Fermi acceleration, and on the relative acceleration rates of electrons vs protons, 3He vs 4He and other ions. 3.1. 
Particle Kinetic Equations Most astrophysical plasmas are strongly magnetized, so that the gyro-radius of particles (with mass m), $r_g = 1.7\times10^3\ {\rm cm}\ \beta\gamma\,({\rm G}/B_\perp)(m/m_e)$, is much smaller than the scale of the spatial variation of the field. In this case particles are tied to the magnetic field lines and, instead of dealing with the temporal evolution of the distribution of energetic particles in six dimensions (3 space, 3 momentum), one deals with the gyro-phase averaged distribution, which depends only on three variables: the spatial coordinate s along the field lines, the momentum p, the pitch angle or its cosine \u00b5, and of course also time.6 Then, the evolution of the particle distribution, f(t, s, p, \u00b5), can be described by the Fokker-Planck equation as particles undergo stochastic scattering and acceleration by interaction with plasma turbulence (with diffusion coefficients $D_{pp}$, $D_{\mu\mu}$ and $D_{p\mu} = D_{\mu p}$) and suffer losses (with rate $\dot p_L$) due to other interactions with the plasma particles and fields. The particles may also gain energy in the presence of shocks or large scale electric fields (with the rate $\dot p_G$): $\partial f/\partial t + v\mu\,\partial f/\partial s = (1/p^2)\,\partial/\partial p\,\{p^2[D_{pp}\,\partial f/\partial p + D_{p\mu}\,\partial f/\partial\mu]\} + \partial/\partial\mu\,[D_{\mu\mu}\,\partial f/\partial\mu + D_{\mu p}\,\partial f/\partial p] - (1/p^2)\,\partial/\partial p\,(p^2\dot p f) + \dot S$. (2) Here $\dot p = \dot p_G - \dot p_L$ is the net momentum change rate and $\dot S$ is a source term, which could be the background thermal plasma or some injected spectrum of particles. The effect of the magnetic field convergence or divergence can be accounted for by adding $c\beta\,(d\ln B/ds)\,\partial/\partial\mu\,[(1-\mu^2)f/2]$ to the right hand side. And if there are large scale flows with velocity u and spatial gradient $\partial u/\partial s$ along the field lines, e.g. around a shock front, their effects can be accounted for by adding the term $(1/3)(\partial u/\partial s)(1/p^2)\,\partial/\partial p\,(p^3 f) - (\partial u/\partial s)f$ (3) to the right hand side. In general, this complete equation is rarely used to determine the particle distribution f(t, s, p, \u00b5). Instead the following approximations are used to make it more tractable. Pitch-angle isotropy: If the pitch angle diffusion rate is high, so that the scattering time $\tau_{\rm sc} \sim 1/D_{\mu\mu}$ is shorter than all other time scales ($\tau_{\rm diff} \sim p^2/D_{pp}$, $\tau_{\rm cross} = L/v$, $\tau_L = p/\dot p_L$, $\tau_{\rm ac} = p/\dot p_G$, etc., where L is the size of the interaction region), then the pitch angle distribution of the particles will be nearly isotropic. If we define $F(t,s,p) \equiv \frac{1}{2}\int_{-1}^{1} d\mu\, f(t,s,p,\mu)$ and $\dot Q(t,s,p) \equiv \frac{1}{2}\int_{-1}^{1} d\mu\, \dot S(t,s,p,\mu)$, (4) then the kinetic equation simplifies to the Diffusion-Convection Equation (see, e.g. Kirk et al. 1988; Dung & Petrosian 1994) $\partial F/\partial t = \partial/\partial s\,(\kappa_{ss}\,\partial F/\partial s) + (1/p^2)\,\partial/\partial p\,(p^4\kappa_{pp}\,\partial F/\partial p - p^2\langle\dot p\rangle F) + p\,(\partial\kappa_{sp}/\partial s)(\partial F/\partial p) - (1/p^2)(\partial F/\partial s)\,\partial/\partial p\,(p^3\kappa_{sp}) + \dot Q(s,t,p)$, (5) where $\langle...\rangle$ implies pitch angle averaged values and the three transport coefficients are related to the diffusion coefficients as $\kappa_{ss} = (v^2/8)\int_{-1}^{1} d\mu\,(1-\mu^2)^2/D_{\mu\mu}$, (6) $\kappa_{sp} = [v/(4p)]\int_{-1}^{1} d\mu\,(1-\mu^2)D_{\mu p}/D_{\mu\mu}$, (7) $\kappa_{pp} = [1/(2p^2)]\int_{-1}^{1} d\mu\,(D_{pp} - D_{\mu p}^2/D_{\mu\mu})$. (8) This approximation is generally valid for high energy particles interacting with Alfven waves in a plasma with relatively low magnetization, i.e. Alfven velocity $v_A = \sqrt{B^2/(4\pi\rho)} \ll v$, where B is the magnetic field strength and \u03c1 is the gas density,7 so that $R_1 \equiv D_{pp}/(p^2 D_{\mu\mu}) = \tau_{\rm sc}/\tau_{\rm diff} \sim (v_A/v)^2 \ll 1$. (9)
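As a minimal numerical sketch of Equations (6)-(8): given pitch-angle profiles for $D_{\mu\mu}$, $D_{\mu p}$ and $D_{pp}$ (the profiles below are arbitrary placeholders, not a real wave-particle interaction model), the transport coefficients follow from simple quadrature over \u00b5.

```python
import numpy as np

def transport_coefficients(mu, D_mumu, D_mup, D_pp, v, p):
    """Pitch-angle integrals of Eqs. (6)-(8) by trapezoidal quadrature.
    mu is a grid on (-1, 1); the D arrays are Fokker-Planck coefficients on that grid."""
    kappa_ss = (v**2 / 8.0) * np.trapz((1 - mu**2)**2 / D_mumu, mu)
    kappa_sp = (v / (4.0 * p)) * np.trapz((1 - mu**2) * D_mup / D_mumu, mu)
    kappa_pp = (1.0 / (2.0 * p**2)) * np.trapz(D_pp - D_mup**2 / D_mumu, mu)
    return kappa_ss, kappa_sp, kappa_pp

# placeholder coefficient profiles, just to exercise the quadrature
mu = np.linspace(-0.99, 0.99, 400)          # avoid mu = +/-1 where D_mumu -> 0
D_mumu = 1e-3 * (1 - mu**2) + 1e-5
D_mup  = 1e-5 * (1 - mu**2)
D_pp   = 1e-6 * np.ones_like(mu)
print(transport_coefficients(mu, D_mumu, D_mup, D_pp, v=3e10, p=1.0))
```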
\u2202/\u2202s = 0), or if one deals with a spatially unresolved source where one is interested in spatially integrated equations. In this case it is convenient to de\ufb01ne N(t, E)dE = R dV [4\u03c0p2F(t, s, p)dp] and replace the spatial di\ufb00usion term (or the advection term in Equation [10]) plus other terms involving \u2202F/\u2202s by an escape term with an escape time Tesc(E) de\ufb01ned by Z dV 4\u03c0p2 \u2202 \u2202s \u0012 \u03bass \u2202F \u2202s \u2212F 1 p2 \u2202 \u2202p(p3\u03basp) \u0013 = N(t, E) Tesc (E). (12) Then we obtain the well-known equation \u2202N \u2202t = \u2202 \u2202E \u0012 DEE \u2202 \u2202E N \u0013 \u2212\u2202 \u2202E \u0010 [A(E) \u2212\u02d9 EL]N \u0011 \u2212N Tesc + \u02d9 S, with Tesc = \u03c4cross +\u03c4cross 2 \u03c4sc , (13) 8Note also that Coulomb scatterings with pitch angle di\ufb00usion rate DCoul \u00b5\u00b5 \u221dn/(\u03b23\u03b32) can also contribute to the scattering rate, specially at low energies, and help to isotropize the pitch angle distribution. \f\u2013 9 \u2013 where \u02d9 EL = p \u02d9 pL/\u03b3 is the energy loss rate. This clearly is an approximation with the primary assumption being that the transport coe\ufb03cients vary slowly spatially, e.g. \u2202\u03basp/\u2202s \u226a \u27e8\u03basp\u27e9/L), R 4\u03c0p2dp\u03baiFdV \u223c\u27e8\u03bai\u27e9NdE, etc. Note that here \u02d9 S(t, E) and N(E, t)/Tesc represent the rates of injection and escape of particles in and out of the acceleration site, and that Tesc as de\ufb01ned can account for the spatial di\ufb00usion term in equation (5) when \u03c4cross /\u03c4sc \u226b1 and the advection term in equation (10) when of \u03c4cross /\u03c4sc \u226a1. This equation is fairly general and can handle di\ufb00erent acceleration scenarios. For example for the SA by turbulence the energy di\ufb00usion and the direct acceleration coe\ufb03cients are related as9 DEE = v2 \u00af Dpp and A(E) = (DEE/E)[(2\u03b32 \u22121)/(\u03b32 + \u03b3)], (14) where \u00af Dpp is equal to p2\u03bapp for the isotropic case (Equation 5) and is equal to \u27e8Dpp\u27e9for equation (10). As stressed above turbulence is present in all acceleration scenarios so these energy di\ufb00usion and the direct acceleration rates are the minimum rates. However if there are other di\ufb00usion or acceleration mechanisms we should add their contribution. For example, if the acceleration volume contains a converging \ufb02ow (as in a shock) with velocity u, then there will be additional direct acceleration rate Au(E) = p\u27e8\u2202u/\u2202s\u27e9.10 Or, at low energies and for a high density plasma, as mentioned above, one should add the e\ufb00ects of pitch angle and energy di\ufb00usion due to Coulomb collisions. Finally, for completeness we mention that for most astrophysical situations the primary contribution to the loss rate for energetic electrons comes from Coulomb collisions at low energies (which as stated at the outset are essential in establishing a thermal distribution) and synchrotron and inverse Compton at high energies, and for protons from elastic Coulomb collisions and inelastic strong interactions with background protons and other ions. In certain cases there may be catastrophic losses through which particles are taken out of the system. In summary then, it turns out that this most commonly used transport equation in astrophysical problems is a good approximation at all energies and all degrees of magnetization for spatially unresolved sources. 
9Sometimes the energy di\ufb00usion term on the right hand side of equation (13) is written as \u22022 \u2202E2 (DEEN) in which case we need to add dDEE/dE to the right hand side of the direct acceleration term A(E). 10In this case the term \u2212uF should be added inside the large parenthesis in Equation (12). \f\u2013 10 \u2013 3.2. Two Important Features There are two noteworthy aspects to the formalism described above. 1. The \ufb01rst feature is related to the fact that the relative rates of the momentum and pitch angle di\ufb00usion, and hence the ratio R1, vary with plasma conditions and particle energy. This has two important consequences. The \ufb01rst is that both rates increase with decreasing energy and/or increasing level of turbulence (see Equation [17] below). As a result SA of low energy thermal particles in magnetized plasma, the type usually encountered in astrophysical radiating sources, is not slow as the second order name would imply. In deed, in recent years this has been recognized and SA has found application in many sources involving acceleration of low energy background electrons. The second has to do with the change of the ratio R1. In Figure 1 (left, from PP97) we show contour maps of this ratio in the electron energy and degree of magnetization space represented by the parameter \u03b1 = \u03c9p/\u2126\u221d\u221an/B, the ratio of plasma to gyro frequency. On the middle panel we show variation of R1 (from Petrosian & Liu, 2004, PL04) with energy for several values of \u00b5 for both electrons and protons interacting with parallel propagating plasma waves. These \ufb01gures show the regions of the phase space where R1 > 1 and where the SA by turbulence is the dominant process. Let us consider a simple non relativistic hydrodynamic shock or a parallel shock with magnetic \ufb01eld parallel to the shock \ufb02ow velocity.11 The SA rate is \u223c\u00af Dpp/p2 while the shock acceleration rate is proportional to fractional energy gain per crossing \u03b4p/p \u223cush/v divided by average crossing time \u03b4t \u223c\u03bass/vush \u223c(v/ush)D\u22121 \u00b5\u00b5 (Krymsky et al. 1979; Lagage & Cesarsky 1983; Drury 1983) so that the shock acceleration rate Ash \u223cD\u00b5\u00b5(ush/v)2 is no longer a \ufb01rst order mechanism, and the ratio ASA/Ash \u223cR1(v/ush)2 > R1. This means that when R1 > 1, which is the case at low energies, the SA is the more dominant of the two mechanisms. Thus, the following picture seems to emerge. When a particle crosses the shock to the downstream turbulent region its interactions with plasma waves increase its energy substantially before it has a chance to cross the shock. Only when its energy has increased to a su\ufb03ciently large value to make the ratio R1 < 1, then it can undergo repeated passage across the shock,12 Only 11For perpendicular shock one gets similar result with added e\ufb00ect of the ratio \u03ba\u2225/\u03ba\u22a5of the di\ufb00usion parallel and perpendicular to the \ufb01eld lines (see e.g. Giacalone 2005a & 2005b and references therein). 12This, of course requires turbulence in the upstream region as well whose origin is not well understood. Possible generation by some instability due to accelerated particles has been suggested and some details have been worked out by Lee (2005), but this problem is not fully resolved as observations of shocks in the solar wind do not always show presence of any turbulence in the upstream region. 
\f\u2013 11 \u2013 when its energy has increased to a su\ufb03ciently large value to make a R1 < 1, then it can undergo repeated passage across the shock, hence beginning a second stage of acceleration by the shock. Note also that even at high energies and interactions with low frequency fast modes with phase velocity equal to Alfven velocity when R1 \u223c(vA/v)2 is less than one, the ratio of the SA to shock acceleration rates ASA/Ash \u223c(vA/ush)2 \u223c1/(\u03b2pM2), (15) which could be greater than one for low beta plasmas (\u03b2p = 2(uSound/vA)2 < 1) and for shocks with low Mach number M = ush/uSound. Thus, we can think of the acceleration as a hybrid mechanism with turbulence providing the initial rise in energy of background plasma till they become energetic enough to be accelerated also by the shock during which SA continues and may not be negligible. This sort of behavior can be seen in the recent PIC simulation by Sironi & Spitkovsky (2009) where test particles appear to gain energy gradually in the downstream region till they cross the shock and get a jump in energy (see their Figure 8). This hybrid scenario, in a way, also solves the long standing \u201cinjection problem\u201d in shock acceleration requiring injection of high energy (i.e pre accelerated) particles, especially for perpendicular shocks. Fig. 1.\u2014 Left: Contours of the ratio R1in the electron energy and parameter \u03b1 = 2.3(100 G/B) p (n/1010cm\u22123) for \u00b5 = 0. The red line shows the R1 = 1 contour for \u00b5 = 0.3. Regions below R1 = 1 curves are when acceleration by SA becomes dominant (from PP97). Middle: Variation with energy of the ratio R1 for di\ufb00erent pitch angles of electrons and protons Note that the acceleration rate can exceed scattering up to \u223c100 keV for most pitch angles (From PL04). Right: Variations of timescales associated with rates of pitch angle di\ufb00usion (green), the simple momentum di\ufb00usion (black), and the momentum di\ufb00usion for the isotropic case with coe\ufb03cient \u03bapp (red). The latter is considerably longer than the others at energy ranges when resonance with one mode is dominant. This e\ufb00ect is much more pronounced for protons than electrons. Note also that the shapes of D\u00b5\u00b5 and Dpp/p2 associated curves are similar for di\ufb00erent pitch angles making use of pitch angled averaged quantities a reasonable approximation. 2. The second interesting feature has to do with the di\ufb00erence between the energy di\ufb00usion and acceleration rates in the two limiting cases (Equations 5 and 10) derived above. For a resonant interaction with plasma waves in general the three Fokker-Planck di\ufb00usion \f\u2013 12 \u2013 coe\ufb03cients obey the following simple relations (see e.g. PL04 for parallel and Pryadko & Petrosian 1999, for perpendicular propagating waves): (Dpp/p2) : (D\u00b5p/p) : D\u00b5\u00b5 = [x2 j] : [xj(1 \u2212\u00b5xj)] : [(1 \u2212\u00b5xj)2] with xj = (vph,j/v)2 (16) where vph = \u03c9/k is the phase velocity of the plasma mode (with frequency \u03c9 and wave vector k) in resonance with the particle of momentum p and pitch angle cosine \u00b5. This implies that for interaction with a single wave the acceleration rate as given in Equation (8), would be zero.13 However, in general interactions with many waves contribute to each coe\ufb03cient so that this rate is never zero, but if the interaction with one mode is dominant then the rate becomes much smaller than normal di\ufb00usion rate \u27e8Dpp/p2\u27e9. 
Figure 1 (right) shows the energy dependence at various values of \u00b5 of the acceleration (and scattering) times based on the two forms of the acceleration rate (black vs red curves). As evident there is considerable di\ufb00erence between the two rates specially at low energies, and the di\ufb00erences are more pronounced for protons compared to electrons. As discussed below, this is important for the relative SA rates of di\ufb00erent species in particular for electrons vs protons. We will also show that a similar process a\ufb00ects the relative acceleration of 3He and 4He in solar \ufb02ares. 4. Stochastic Acceleration in Solar Flares Over the past several decades there has been considerable discussion of \ufb01rst vs second order acceleration (see e.g. Drury 1983) and the role of turbulence in the latter (see e.g. Hall & Sturrock 1967; Kulsrud & Ferarrri 1971). However the success of shock acceleration in producing the observed CR spectrum has relegated SA by turbulence to be a less important process even though the importance of the role played by turbulence in scattering of the energetic particles is fully appreciated. As shown above, there are many similarities between the two processes, and in some cases SA by turbulence may be the dominant process. This fact has been recognized in more recent times and there has been renewed activity in application of the SA model to several astrophysical sources. Most prominent among these is the Solar Flare which is the most developed and will be discussed in more detail below. But SA has been applied to nonthermal emission from accretion disks around black holes: e.g. Sgr A* black hole in the center of the milky way (Liu et al, 2004, 2006a and 2006b), other active galactic nuclei (Stawartz & Petrosian 2008), stellar size black holes (Li & Miller 1997), gamma-ray bursts (Lazarian et al. 2003), to supernovae shocks (Scott & Chevalier 1975; Cowsik & Sarkar 1984; Fan et al. 2009; Virtanen & Vainio 2005), to giants radio 13This can be ascertained by plugging in for the di\ufb00usion coe\ufb03cients in Equation (8) the expressions in Equation (16). \f\u2013 13 \u2013 galaxy lobes (Lacombe 1977; Achterberg 1979; Eilek 1979), and to intra cluster medium of clusters of galaxies (Petrosian 2001, Brunetti & Lazarian 2007). 4.1. Basic Scenario for Solar Flares The complete development of a solar \ufb02are involves many phases. After a complex pre\ufb02are build up of magnetic \ufb01elds, the \ufb01rst phase is the reconnection and the process of the energy release. The \ufb01nal consequences of this released energy are the observed radiations from radio to gamma-rays, Solar Energetic Particles (SEPs), and Coronal Mass Ejections (CMEs). The basic scenario for these processes, as depicted by the cartoon in Figure 2 (left), can be summarized as follows: Even though it is generally agreed that the \ufb02are energy comes from the annihilation of magnetic \ufb01elds via reconnection, the exact mechanisms of the release and dissipation of this energy remains controversial. Dissipation can occur via Plasma Heating, Particle Acceleration or Plasma Turbulence. As stated above turbulence is a necessary ingredient for acceleration and, as we will outline below, there is considerable observational evidence favoring SA by turbulence. Thus we believe that most of the magnetic energy is converted into turbulence near or above the top of a coronal loop, which we refer to as the acceleration site or the loop top (LT) region. 
The turbulence undergoes nonlinear wave-wave interactions causing a dissipationless cascade to smaller scales. The wave-particle interaction results in damping of the turbulence, heating of the plasma and acceleration of particles. The accelerated particles are somewhat trapped at the LT because of their short mean free path due to scattering by turbulence, which enhances their radiation intensity there. Eventually these particles escape the turbulent LT region. Some escape along open field lines and may undergo further scattering and acceleration by a CME shock during their transport to the Earth, where they are detected as SEPs (the escaping electrons may also produce type-III and other radio radiation). Most of the particles travel down the legs of the loop and produce the observed flare radiation: microwaves via synchrotron, hard X-rays (HXR) via bremsstrahlung produced by electrons, and gamma-rays via nuclear line excitations and decay of pions (primarily \pi^0) produced by protons, along the loop but primarily at its footpoints (FPs). However, most of the energy of the accelerated particles is dissipated in the chromosphere and below via inelastic Coulomb collisions. This deposited energy causes heating and evaporation of the plasma up into coronal loops, which then produces the bulk of the flare radiation in the form of thermal soft X-rays and optical photons. The evaporation changes the density and temperature in the corona, which can affect the reconnection, energy release and acceleration processes. 4.2. Some Relevant Observations of Flares Observations of solar flares are reviewed by Raymond et al., and some theoretical aspects, namely those on global aspects of energy release and acceleration, are reviewed by Cargill et al. in these proceedings. Here we discuss the confrontation between the acceleration models and observations. A successful model must account for all observations. However, some observations are more critical than others. Here we focus on the following three separate observed characteristics, explanation of which we believe constitutes a minimum requirement for models. 1. Radiative signatures of electrons, primarily from RHESSI observations. 2. Relative acceleration of electrons and protons. 3. Isotopic abundance enhancements in SEPs, especially that of 3He, and the variation of the abundance ratio and spectra of 3He and 4He. We have made extensive comparisons of the SA model with these observations and find numerous signatures of electrons and protons that support the model. Below we present a brief description of the above observations and how the SA model can account for them. 4.2.1. Radiative Signatures of Electrons Radiation by accelerated electrons produces a wealth of observations, and we clearly cannot address them all here. Instead we focus on a few critical observations, mainly from RHESSI, that are related to the acceleration process. • One observation which we believe provides the most compelling and direct evidence for the presence of turbulence in the LT (acceleration) site is the observation by Yohkoh (Masuda et al. 1994) showing a distinct impulsive HXR emission from the LT in addition to the usual FP sources. This apparently is present in almost all Yohkoh flares (Petrosian et al. 2002), and analysis of RHESSI flares has confirmed this picture (Liu et al. 2003; Krucker & Lin 2008) for essentially all limb flares.
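Before the trapping argument developed in the next paragraph, it helps to have some representative loop-top timescales in hand. The sketch below uses the standard Coulomb loss and pitch-angle scattering rates (as given, e.g., in Petrosian & Chen 2010); the loop-top size, density and Coulomb logarithm are assumed, illustrative values.

```python
# Crossing time L/v, Coulomb energy-loss time E/Edot_Coul, and the Coulomb
# scattering time implied by D_mumu^Coul for deka-keV electrons at the loop top.
# The loss and scattering times are comparable, which is the core of the argument
# that collisions alone cannot trap electrons without also draining their energy.
import math

m_e_c2_keV = 511.0
c = 3.0e10                 # cm/s
r0 = 2.818e-13             # classical electron radius [cm]
keV = 1.602e-9             # erg per keV
L = 1.0e9                  # assumed loop-top size [cm]
n = 1.0e11                 # assumed loop-top density [cm^-3]
lnLambda = 20.0

def times(E_keV):
    gamma = 1.0 + E_keV / m_e_c2_keV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    tau_cross = L / (beta * c)
    Edot = 4.0 * math.pi * r0**2 * (m_e_c2_keV * keV) * c * n * lnLambda / beta  # erg/s
    tau_loss = E_keV * keV / Edot
    tau_scat_coul = (gamma + 1.0) / 12.0 * tau_loss   # pitch-angle average of D_mumu^Coul
    return tau_cross, tau_loss, tau_scat_coul

for E in (25.0, 50.0, 100.0):
    tc, tl, ts = times(E)
    print(f"E={E:6.1f} keV  tau_cross={tc*1e3:6.1f} ms  "
          f"tau_loss={tl:6.2f} s  tau_scat^Coul={ts:6.2f} s")
```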
The fact that we see LT and FP emission but little or none from the legs of the loop requires lingering of electrons in the LT region for times longer than the crossing time \tau_{\rm cross} = L/v. This requires an enhanced scattering near the LT region. Petrosian & Donaghy (1999) show that Coulomb scattering cannot be the agent for this trapping, because then the electrons would also lose most of their energy in the LT region on the same timescale and never reach the FPs. The most likely scattering agent is turbulence. This turbulence can also accelerate the electrons. There may be other acceleration at work too, but as described above, at low energies and for solar flare conditions (low-\beta plasma) SA by turbulence is the most effective process. • In rare cases, when the FP sources are weak or occulted, one can see a double LT source (Figure 2 middle, from Liu et al. 2008; see also Sui & Holman 2003), as expected in the model depicted on the left. This simple picture also predicts a gradual rise of the LT source accompanied by a continuous increase in the separation of the FPs as the reconnection proceeds and larger closed loops are formed, a feature that has been seen at other wavelengths but is usually difficult to see in HXRs in weak flares because of the low signal-to-noise ratio, and in most strong flares because they tend to have complicated loop structures. The Nov. 3, 2003 X-class limb flare consisting of a single loop provided a good opportunity to see this feature in HXRs. The right panel of Figure 2 (from Liu et al. 2004) clearly shows this behavior. [Figure 2, left panel: schematic with labels for the reconnection and energy outflows, the turbulence/acceleration region, the coronal (loop-top) X-ray source, escaping particles, and the thick-target footpoints.] Fig. 2.— Left: A schematic representation of the reconnecting field forming closed loops and coronal open field lines. The red foam represents PWT. Middle: Image of the flare of April 30, 2002, with occulted FPs showing two distinct coronal sources as expected from the model on the left. The curves representing the magnetic lines (added by hand) show the occulted FPs below the limb (red line) (from Liu et al. 2008). Right: Temporal evolution of LT and FP HXR sources of the Nov. 3, 2003 flare. The symbols indicate the source centroids and the colors show the time with a 20 sec interval, starting from black (09:46:20 UT) and ending at red (10:01:00 UT), with contours for the last time. The curves connecting schematically the FPs and the LT sources for different times show the expected evolution for the model at the left (from Liu et al. 2004). • Another important observation by RHESSI is the relative spectra of the LT and FP sources. The LT source is often dominated by a very hot thermal-type emission with a relatively soft tail, while the FP sources consist of harder power laws with little or no thermal part, as shown by the example in Figure 3 (left).
These are exactly the kind of spectra that come out from models of SA by turbulence shown in the right panel of Figure 3.15 Figure 3 (left) also shows a forward \ufb01t of the observed spectra to those obtained from SA model with the speci\ufb01ed acceleration parameters. 15It should, however, be noted that most generic acceleration models accelerating particles from a hot plasma and with scattering provided by turbulence will produce a hot quasi-thermal plus a nonthermal tail (Petrosian & East 2008). \f\u2013 16 \u2013 \u2022 Flares during the impulsive phase often show a soft-hard-soft temporal evolution and sometimes a slower than expected (assuming only losses) temperature decline in the thermal decay phase (McTiernan et al. 1993). The results in Figure 3 (right) agree with these evolutionary aspects as well. As can be seen the spectra electrons accelerated by turbulence get harder with increasing value of the wave-particle interaction rate parameter \u03c4 \u22121 p = (\u03c0/2)\u2126fturb(q \u22121)(ckmin/\u2126e)q\u22121, (17) where fturb = (\u03b4B/B)2 is the ratio of the turbulence to magnetic \ufb01eld energy density with wave energy spectral index of q for wave vectors k > kmin. Thus, as the level of turbulence or fturb increases and decreases during the impulsive phase, we go from a thermal to softhard-soft nonthermal and back to a thermal phase. \u2022 In several RHESSI limb \ufb02ares Jiang et al. (2006) \ufb01nd that during the decay phase the LT source continues to be con\ufb01ned (not extend to the FPs), and that the observed energy decay rate is much lower than the Spitzer (1962) conduction rate. These observations require suppression of the conduction and a continuous input of energy during the decay phase. A low level of lingering turbulence can be the agent for both. Fig. 3.\u2014 Left: A \ufb01t to the peak time total (black), FPs (green) and LT (red) spectra of a 9/20/2002 \ufb02are observed by RHESSI. The dashed and dotted lines are bremsstrahlung spectra by electrons whose spectrum is calculated using Equations (13) with the indicated acceleration parameters, showing the presence of a quasi-thermal (LT) and a nonthermal component. The solid line gives the sum of the two. The blue dashes indicate the level of the background radiation (Liu et al. 2003). Right: The dependence on \u03c4 \u22121 p \u221dfturb of the accelerated electron spectrum E2N(E) at the LT (dotted) and the e\ufb00ective thick target spectrum (E2/ \u02d9 EL) R \u221e E dE\u2032N(E\u2032)/Tesc (E\u2032) at the FPs (solid). Higher levels of turbulence produce harder spectra and more acceleration than heating (from PL04). \f\u2013 17 \u2013 4.2.2. Relative Acceleration Rates of Electrons and Protons Flares are generally recognized based on radiative and other (heating and evaporation) signatures of the accelerated electrons. As one would expect protons will also be subjected to same acceleration mechanisms. The radiative signatures of protons are (i) narrow gamma-ray lines in the 1-7 MeV range arising from de-excitation of nuclei of ions excited by accelerated protons (or viceversa, which produce broader lines), and (ii) > 70 MeV continuum emission from decay of pions produced in p \u2212p interaction (mainly from decay of \u03c00\u2019s). Gamma-rays are generally observed in large and gradual \ufb02ares, but this is partly due to relatively lower sensitivity of past gamma-ray detectors compared to HXR detectors. 
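Before moving on, note that the turbulence parameter introduced in Eq. (17) is easy to evaluate for scale. In the sketch below the magnetic field, turbulence level, spectral index and minimum wave number are assumed values (not taken from a particular fit in the text), and \Omega is taken to be the non-relativistic electron gyrofrequency.

```python
# Eq. (17): tau_p^-1 = (pi/2) * Omega * f_turb * (q-1) * (c*k_min/Omega_e)^(q-1),
# with f_turb = (deltaB/B)^2. All numerical inputs below are illustrative assumptions.
import math

B = 100.0                        # magnetic field [G]
f_turb = 0.1                     # (deltaB/B)^2
q = 5.0 / 3.0                    # turbulence spectral index (Kolmogorov)
k_min = 2.0 * math.pi / 1.0e9    # minimum wave number [cm^-1], largest scale ~ loop size

c = 3.0e10                       # cm/s
Omega_e = 1.76e7 * B             # electron gyrofrequency [rad/s]

tau_p_inv = (math.pi / 2.0) * Omega_e * f_turb * (q - 1.0) * (c * k_min / Omega_e) ** (q - 1.0)
print(f"tau_p^-1 ~ {tau_p_inv:.0f} s^-1")
# This comes out at a few tens of s^-1, i.e. the same order of magnitude as the
# values of tau_p^-1 (70-190 s^-1) quoted in the figure captions for the fitted spectra.
```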
However, Fermi with its superior sensitivity is beginning to observe gamma-rays from modest M-class \ufb02ares (see Ackermann et al. 2012). The SA model has been the working hypothesis for acceleration of protons as well. (See pioneering works by Ramaty (1979) and collaborators; e.g. Murphy et al. 1987). There have been considerable discussions of relative energies of populations of accelerated electrons and protons but most recent analysis of HXR and gamma-ray emission of a large sample of \ufb02ares by Shih et al. (2009) show a good correlation but with relatively broad dispersion (see Figures 3 of Raymond et al. in this proceedings. As this \ufb01gure shows the mean value of the ratio of energy in accelerated electrons (obtained from HXR \ufb02uxes) to that of protons (deduced from gamma-ray \ufb02uxes) is larger than 1 while this ratio in galactic CRs presumably accelerated in supernova shocks is much smaller than one. In Figure 4 (left) we show the same distribution from the above mentioned \ufb01gure (in blue) along with the distribution of the same ratio we obtained from observations of SEP electrons and protons (for the same energy ranges, taken from data compiled by Cliver & Ling 2007, in red). We will return to the di\ufb00erences between the two histograms below. But for now we focus on the fact that the acceleration mechanism operating in \ufb02ares seems to put more energy in electrons over protons compared with the mechanism responsible for acceleration of SEPs and galactic CRs. In shock acceleration models (with Alfvenic turbulence as the source of scattering) one would expect a more e\ufb03cient acceleration of protons compared to electrons. This may indicate that a di\ufb00erent mechanisms is at work in solar \ufb02ares. As shown below this is another evidence that SA (rather than a shock) is the dominant acceleration mechanism in \ufb02ares. In PL04 we address this problem by looking at the details of the SA by parallel propagating waves of a thermal population of electrons and protons. We \ufb01nd in general that for typical \ufb02are conditions this mechanism favors acceleration of electrons than protons. This is because electrons and protons undergo resonant interactions with di\ufb00erent plasma modes of di\ufb00erent wave vectors or frequencies. In general protons have fewer resonances than electrons \f\u2013 18 \u2013 Fig. 4.\u2014 Left: Normalized distribution of the electron to proton energy \ufb02ux ratios for \ufb02ares derived from the data in Shih et al, (2009, blue) and SEPs from data in Cliver & Ling (2007, red) showing preference for acceleration of electrons in \ufb02ares. Middle: Scattering (dashed) and acceleration (solid; given by Equation [8]) time scales in unit of \u03c4p for protons (blue) and electrons (red) showing a large barrier or long acceleration time for protons in the 0.1 to 10 MeV range (dashed region) due to the dominance of a single resonance mode. This barrier disappears at low energies once \u03c4sc becomes longer than momentum di\ufb00usion time p2/Dpp, and at high energies where there is more than one dominant resonant mode (from PL04). 
Right: Spectra of accelerated electrons (red) and protons (blue) at the acceleration site (dotted) and the e\ufb00ective thick target FP spectra (E2/ \u02d9 EL) R \u221e E dE\u2032N(E\u2032)/Tesc (E\u2032) (dashed) for given values of \u03c4p and \u03b1 \u221d\u221an/B and for temperature kT = 1.5 keV which produces more nonthermal electrons compared to protons in their respective observed ranges shown by the dashed areas (from PL04). so that it is more likely that they will have one dominant resonant mode. In this case, as described in the second of Two Important Features in \u00a73.2, and shown in Figure 1 (right), the rate of acceleration given by equation (8) becomes very small hindering the acceleration of protons. Figure 4 (middle) shows presence of a large barrier against acceleration of proton, manifested by a large acceleration timescale, resulting in acceleration of fewer protons in the energy range from few MeV to GeV range needed for production of gamma-rays (lines and continuum). As mentioned above, the acceleration rate by a shock also depends on the rate of wave-particle interactions. However, in this case it depends on the spatial di\ufb00usion coe\ufb03cient \u03bass which in turn depends only on D\u00b5\u00b5 and not the combination of the di\ufb00usion coe\ufb03cient in equation (8) which a\ufb00ects the SA rate. The right panel of Figure 4 shows the accelerated electron and proton spectra (multiplied by square of energy) and the e\ufb00ective thick-target spectra for the speci\ufb01ed values of the basic acceleration parameters (namely density, magnetic \ufb01eld, temperature and level and spectrum of turbulence represented by the parameter \u03c4p) which produce more nonthermal electrons than protons in their respective observes ranges shown by the hashed areas. However, as shown in PL04, the ratio of electron to proton acceleration varies considerably with the basic acceleration parameters such as \u03c4p, \u03b1 \u221d\u221an/B and temperature of the injected particles. In fact, as can be seen from comparison of spectra shown in the left and middle pane of Figure 5 with that in Figure 4 left a small change in \u03b1 can produce a \f\u2013 19 \u2013 large change in the ratio of the energies of the accelerated electrons to protons explaining the broad distributions seen in the left panel of Figure 4. Similarly, as evident fro spectra shown in the right panel of Figure 5 higher temperature of background particles also favor the acceleration of protons. One consequence of these is that the proton acceleration will be more e\ufb03cient in larger (and most likely lower B value) loops and at late phases, when evaporation increases the temperature and density n (hence the value of parameter \u03b1).16 This can explain the di\ufb00erence in centroids of HXR and gamma-ray emission seen by \u02da as described by Raymond et al. in these proceedings. It should be emphasized, however, that here we have concentrated on the ratios of accelerated electrons and protons in the energy ranges that produce the observed HXRs and gamma-rays (electrons > 10\u2019s of keV, protons > 10 MeV). But as can be seen from above \ufb01gures, in general, there are considerable number of accelerated protons at lower energies which do not produce detectable radiation. We will discuss the role of these particles below. In summary, the SA model can account for various di\ufb00erences seen in acceleration of electrons and protons. Fig. 
5.\u2014 Same as the left panel of Figure 4 but for di\ufb00erent values of \u03c4p, \u03b1 \u221d\u221an/B and kT showing large variations in the relative accelerations of electrons vs protons. Left: \u03c4 \u22121 p = 70 s\u22121, \u03b1 = 0.98, kT = 1.5 keV. Middle: \u03c4 \u22121 p = 90 s\u22121, \u03b1 = 1.13, kT = 1.5 keV. Right: \u03c4 \u22121 p = 70 s\u22121, \u03b1 = 0.98, kT = 3.0 keV. As evident higher densities, higher temperatures and lower magnetic \ufb01elds favor acceleration of protons vs electrons and viceversa. Note also that the quasi-thermal proton component of less than one MeV, which do not produce gamma-rays, can escape and possibly re-accelerated by a CME shock, and be observed as SEPs. This can explain the shift to the right of the SEP (red) distribution shown in the left panel of Figure 4. (From PL04.) 16Note that a similar di\ufb00erence was also predicted by Miller & Roberts (1995) based on other grounds. \f\u2013 20 \u2013 4.2.3. SEP Spectra and Abundances It is commonly believed that the observed relative abundances of ions in SEPs favor the SA model (e.g. Mason et al. 1986; Mazur et al. 1995). More recent observations and modelings have con\ufb01rmed this picture (see Mason et al. 2000, 2002; Reames et al. 1994, 1997; Ng & Reames 1994; Miller 2002). One of the most vexing problem of SEPs has been the enhancement of 3He. Observations show a wide range of 3He to 4He ratios; ranging from photospheric values in gradual-strong \ufb02ares to values several thousand times larger in impulsive-weak \ufb02ares. It should be emphasized that there are not two distinct classes (impulsive and gradual) with a well de\ufb01ned bimodal distribution. Rather, as indicated by observations (Ho et al. 2005), there is a broad continuum of events as shown in the left panel of Figure 6, going from weak, short duration (impulsive) events with strong enrichments at one end to long (gradual), strong and normal abundances events at the other extreme end. It was recognized early that the unusual charge to mass ratio of 3He could be the cause here. However, these early works did not provide a satisfactory quantitative explanation.17 With a more complete treatment of 3He and 4He acceleration, Liu, Petrosian & Mason 2004 and 2006 (LPM04, LPM06) have demonstrated that SA can indeed explain the extreme enhancement of 3He and can also reproduce the observed 3He and 4He spectra for high enrichment cases. The reason for success of our approach is in a way similar to our treatment of election versus proton acceleration, where inclusion of resonance interactions with multiple wave modes gives rise to the di\ufb00erent rates. In case of 3He and 4He we also \ufb01nd that, once the e\ufb00ects of ionized He (\u03b1 particles) are included in the description of the dispersion relation, the low energy 3He ions have more resonances than 4He ions, as a result of which they are more readily accelerated than 4He. In LPM04 and LPM06 it was shown that with this model we can obtain an excellent \ufb01t to the observed spectra of several weak shorter duration events for both 4He and 3He (which show the characteristic convex spectral shapes) for reasonable plasma and acceleration parameters. An example of an excellent \ufb01t (with \u03c4 \u22121 p = 190, \u03b1 = 0.5) is shown in the right panel of Figure 6 with the solid lines. 
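As a numerical aside on the charge-to-mass argument above, the short listing below (a hedged illustration only; the full resonance analysis is in LPM04 and LPM06) gives the charge-to-mass ratios that set the gyrofrequencies, in units of the proton gyrofrequency, of the ions in question.

```python
# Charge-to-mass ratios Q/A (approximately Omega_i/Omega_p) for the ions discussed above.
ions = {"H+ (proton)": (1, 1.007), "3He2+": (2, 3.016), "4He2+ (alpha)": (2, 4.003)}
for name, (charge, mass_amu) in ions.items():
    print(f"{name:15s}  Q/A = {charge / mass_amu:.3f}")
# 3He2+ (Q/A ~ 0.66) sits between the proton (1.0) and 4He2+ (0.5) values, so once
# the alpha particles are included in the dispersion relation it picks up resonances
# (e.g. near the 4He cyclotron branch) that 4He itself cannot reach.
```

Returning to the fit shown in the right panel of Figure 6: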
On this plot we also show two other sets of spectra for two di\ufb00erent values of the SA rate parameter \u03c4 \u22121 p , which is proportional to the level of turbulence (see Equation 17) and perhaps to the overall strength of the \ufb02are. As can be seen in all cases most of the 3He are accelerated into the observed range, owing to the high e\ufb03ciency of their acceleration, while this is true for 4He only for large values of \u03c4 \u22121 p , i.e. for high levels of turbulence. For small levels there is a barrier for 4He, similar to that for protons in Figure 4, so that most of the 4He appear 17For a brief review of these earlier works see Petrosian (2008). \f\u2013 21 \u2013 as a low energy bump (a quasi-thermal component), with only a small fraction reaching the observed range. However, the number of 4He ions in this range increases rapidly with increasing value of \u03c4 \u22121 p . This also explains the wide range and the trend of the \ufb02uence ratios. As shown in the right panel of this \ufb01gure the model predicted ratio decreases with increasing levels of turbulence and hence increasing 4He \ufb02uence. This trend is independent of the other SA model parameters (like the parameter \u03b1 as can be seen in this \ufb01gure). It also turns out that this model can reproduce the observed distributions of \ufb02uences of both ions (see Petrosian et al. 2009). Fig. 6.\u2014 Left: Observed variation of Ratio of \ufb02uences of 3He and 4He vs the \ufb02uence of 4He showing a wide range of 3He enhancement decreasing with increasing event \ufb02uence. This also shows a broader distribution of 4He than 3He \ufb02uences (from Ho et al. 2005). Middle: Model \ufb01t to 3He and 4He spectra of Sep. 30, 1999 event observed by ACE showing an excellent \ufb01t (solid lines) for \u03b1 = 1 and the speci\ufb01ed values of the rate parameter \u03c4 \u22121 p or the level of turbulence (\u03c4p0 = 0.0055). Also shown are two sets of model spectra with di\ufb00erent levels of turbulence (or \u03c4 \u22121 p ). Note that in all three cases almost all of the 3He are accelerated into a nonthermal component while at lower levels of turbulence most of 4He form a low energy bump in the unobserved range with a smaller high energy tail. But with increasing level of turbulence more 4He ions are accelerated into the observed range (from Petrosian et al. 2009). Right: Model calculation of the variation of the accelerated 3He to 4He \ufb02uence ratio (at E = 1 MeV/nucleon) with level of turbulence or \u03c4 \u22121 p for three values of \u03b1 \u221d\u221an/B (note that \u03b1 = 1 for the black line). Note also that the trend and the range of the ratio mimics the observation shown in the left panel (from LPM06). It should, however, be emphasized that even though we obtain the observed low ratio of \ufb02uences for stronger events, the spectra of 4He obtained for these events (e.g. the long dashed line in Figure 6, right) do not agree with the observations (we discuss a possible remedy for this in the next section). Nevertheless, this is a signi\ufb01cant breakthrough in understanding of SEPs. There is also an increasing enhancement with increasing mass of the ion. Possible explanations of these and other aspects of the enrichments can be found in a recent review by Petrosian (2008; e.g. Figure10). Clearly more work is required for a complete description of all observed characteristics of SEPs, some of which may require another mechanism of acceleration as discussed next \f\u2013 22 \u2013 4.3. 
The Role of CME Shocks We have shown that the SA model can reproduce many of the observed features but there are several aspect that need re\ufb01nements or introduction other processes. For example, we have seen that even though this model can explain the predominance of the accelerated electrons at the \ufb02are site and the broad range of the accelerated electron to proton \ufb02ux ratios it cannot account for the di\ufb00erence between the relative rates of accelerations of electrons and protons as deduced by the radiations they produced at the \ufb02are site and that observed in SEPs which favor proton acceleration. We have also indicated we can account for the varied observed spectra and the broad range of the isotopic enrichments in particular that of the 3He in weaker, more impulsive \ufb02ares, while the 4He model spectra for high \ufb02uence-long duration events seems quite di\ufb00erent than that observed in such events. The model spectra are softer and unlike the observed (broken) power laws. Thus, the spectra of the accelerated particles coming out of the \ufb02are site must be modi\ufb01ed by a secondary process to agree with observations. As is well known, gradual strong \ufb02ares are associated with CMEs which has led to the idea that the shock produced by the CME can be responsible for acceleration of SEPs. However, as it usually the case with shock acceleration, the question of seed particles is uncertain here as well. It is unlikely that the cold background particles in high corona are the seeds. Tylka & Lee (2006) in their phenomenological study of acceleration by a shock were able to produce the observed spectra of the SEP assuming a \u201csuprathermal\u201d seed population, extending to higher energies for perpendicular shocks. It then seems natural to assume that the seed particles are \ufb02are accelerated ions which are then re-accelerated by the shock. The 4He spectra of high \ufb02uence events shown in Figure 6 (right) have this kind of characteristics making them good candidates for re-acceleration. This leads us to the following scenario. The \ufb02are site acceleration is the \ufb01rst and primary stage of acceleration and is common to all events. A second phase acceleration can occur in the CME shocks which could modify the spectrum of SEPs escaping the \ufb02are site. The possibility that there may be two acceleration mechanisms at work is not a new idea. The new aspect of this scenario is that we have a hybrid acceleration model, where the seeds of the second stage (re)acceleration by the CME shock are the \ufb02are site particles accelerated by turbulence.18 As shown above, the spectrum of particles escaping the \ufb02are site consists of two components: a low energy quasi-thermal component (which is below the observable energy range of 0.1 to 10 MeV/nucleon) and a higher energy nonthermal component with high 3He 18In fact there is evidence that energetic (> 0.1 MeV/nucleon) ions are \u201cpresent upstream of all interplanetary shocks\u201d (Desai et al. 2003). \f\u2013 23 \u2013 enrichment. This scenario has the attractive feature that even though the nonthermal tails may be highly enriched (as in impulsive \ufb02ares), the total (quasi-thermal plus nonthermal) number of seed particles injected into the CME shock can have essentially normal abundances and harder (power-law) spectra. Thus, for events near the gradual end both components are re-accelerated leading to a near normal abundances. 
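Schematically, the hybrid scenario amounts to re-running the same kinetic equation with the flare-site output as the seed population and with a direct (shock) acceleration term added, as specified in the next paragraph. In that notation (a sketch of the structure only, not a quotation of Eq. 13):

```latex
A_{\rm tot}(E) = A_{\rm SA}(E) + A_{\rm sh}(E), \qquad
A_{\rm sh}(E) = A_{0}E^{2}, \qquad
A_{\rm sh} \sim A_{\rm SA} \ \text{at } 0.1~\mathrm{MeV/nucleon},
```

with the source term \dot{Q}(E) set to the spectrum of particles escaping the flare site.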
In Figure 7 we compare the SEP observations of the gradual 18 Jan. 2000 event with small isotopic enhancement (points) with the re-accelerated spectra (solid lines) obtained using the \ufb02are accelerated spectra (dashed lines) as the source term \u02d9 Q(E) in Equation (13) that are re-acceleration, with an addition of direct acceleration term Ash(E) = A0E2, presumably due to a CME shock, with of Ash \u223cASA at 0.1 MeV/nucleon. The resultant spectra are harder and closer to normal abundance ratio. We hasten to add that these are preliminary explorations and the agreement, though not perfect, is very good considering that we have used a simple power law acceleration rate. A better agreement can be achieved with a more realistic form for the acceleration rate. A similar scenario can also explain the di\ufb00erences between the distributions of electron to proton energy \ufb02ux ratio deduced from observations at the \ufb02are site and from SEPs shown in Figure 4 (left). As discussed in \u00a74.2.2, in general SA at the \ufb02are site is more e\ufb03cient in acceleration of electrons than protons that have energies capable of producing the observed gamma-rays. However, as also emphasized in \u00a74.2.2, just as is the case for 4He, \ufb02are accelerated protons have a substantial low energy component which can be re-accelerated at a CME shock to yield a smaller electron to proton ratio in SEPs as compared to \ufb02ares. 4.4. Testing the Model In a new and possibly far reaching work (Petrosian & Chen 2010), we have initiated determination of some of the important parameters of the kinetic equation (13) (left) directly from the observed data instead of the usual forward \ufb01tting method, like that shown in Figure 3, where one assumes values and energy dependences for these parameters, calculates the particle distribution and its radiative spectrum, and then \ufb01ts to the data (see e.g. Park et al. 1997). In our new work, using the recently developed regularized inversion technique by Piana et al. (2003 and 2007), we are able to determine the energy dependence of the escape and scattering times due to turbulence as shown in Figure 7 (right). An important aspect of this result is that the escape time increases with energy (and as a result the scattering time decreases) relatively rapidly in disagreement with the expected behavior for SA by parallel propagating plasma waves with a Kolmogorov type spectrum (see PP97 or PL04). The observed behavior requires a steeper than Kolmogorov spectrum so that we are dealing \f\u2013 24 \u2013 with wave vector values in the steep damping range. This discrepancy could also be an indication that the simple SA model used in above mentioned papers requires modi\ufb01cation. For example, inclusion of the e\ufb00ects of the convergence of the magnetic \ufb01elds in the LT region, as can be seen in Figure 2 (left), can produce an escape time similar to the timescale for Coulomb collisions which increases with energy. Or our simpli\ufb01ed description of the escape time as de\ufb01ned in Equation (13) may require modi\ufb01cation. Another possibility is that the out\ufb02ows from the reconnection region may produce standing shocks that can modify the acceleration rate relative to the scattering rate. These are interesting possibilities that eie intend be explored in future works. Fig. 
7.\u2014 Left: Comparison with observed 3He and 4He spectra of event on 18 Jan 2000 with our proposed model, where the \ufb02are site accelerated spectra (dashed lines) for an intermediate value of \u03c4 \u22121 p (see Figure 6) are used for re acceleration by a combined shock and SA processes. Middle: Variation with energy of the escape and scattering times due to turbulence obtained directly from RHESSI data using inversion technique for the Nov. 3, 2003 \ufb02are. In contrast to the model calculated Tesc in PL04, the observed time scale increases with energy (from Petrosian & Chen 2010). 5. Summary and" + }, + { + "url": "http://arxiv.org/abs/1002.2673v1", + "title": "Derivation of Stochastic Acceleration Model Characteristics for Solar Flares From RHESSI Hard X-Ray Observations", + "abstract": "The model of stochastic acceleration of particles by turbulence has been\nsuccessful in explaining many observed features of solar flares. Here we\ndemonstrate a new method to obtain the accelerated electron spectrum and\nimportant acceleration model parameters from the high resolution hard X-ray\nobservations provided by the Reuven Ramaty High Energy Solar Spectroscopic\nImager (RHESSI). In our model, electrons accelerated at or very near the loop\ntop produce thin target bremsstrahlung emission there and then escape downward\nproducing thick target emission at the loop footpoints. Based on the electron\nflux spectral images obtained by the regularized inversion of the RHESSI count\nvisibilities, we derive several important parameters for the acceleration\nmodel. We apply this procedure to the 2003 November 03 solar flare, which shows\na loop top source up to 100--150 keV in hard X-ray with a relatively flat\nspectrum in addition to two footpoint sources. The results imply presence of\nstrong scattering and a high density of turbulence energy with a steep spectrum\nin the acceleration region.", + "authors": "Vahe Petrosian, Qingrong Chen", + "published": "2010-02-13", + "updated": "2010-02-13", + "primary_cat": "astro-ph.SR", + "cats": [ + "astro-ph.SR" + ], + "main_content": "INTRODUCTION It is well established that the impulsive phase hard X-ray (HXR) emission of solar \ufb02ares is produced by bremsstrahlung of nonthermal electrons spiraling down the \ufb02are loop while losing energy primarily via elastic Coulomb collisions (Brown 1971; Hudson 1972; Petrosian 1973). Thus, HXR observations provide the most direct information on the spectrum of the radiating electrons and perhaps on the mechanism responsible for their acceleration. The common practice to extract this information has been to use the parametric forward \ufb01tting of HXR spectra to emission by an assumed spectrum, usually a power-law with breaks and cuto\ufb00s (or plus a thermal component), of the radiating or accelerated electrons (e.g. Holman et al. 2003). A more direct connection was established between the observations and the acceleration process \ufb01rst by Hamilton & Petrosian (1992), \ufb01tting to high spectral resolution but narrow band observations (Lin & Schwartz 1987), and later by Park et al. (1997), \ufb01tting to broad band observations (e.g. Marschh\u00a8 auser et al. 1994; Dingus et al. 1994). This was done in the framework of stochastic acceleration (SA) by plasma waves or turbulence. However, it is preferable to obtain the X-ray radiating electron spectrum nonparametrically by some inversion techniques \ufb01rst attempted by Johns & Lin (1992). Recently, Piana et al. (2003) and Kontar et al. 
(2004) applied regularized inversion techniques to obtain the radiating electron \ufb02ux spectra from the spatially integrated photon spectra observed by RHESSI (Lin et al. 2002). This is an important advance but it gives the spectrum of the e\ufb00ective radiating electrons summed over the whole \ufb02are loop, but not the spectrum of the accelerated electrons. This di\ufb00erence arises because high spatial resolution observations, \ufb01rst from Yohkoh (Masuda et al. 1 Also Department of Applied Physics, Stanford University. 1994; Petrosian et al. 2002) and now from RHESSI (e.g. Liu et al. 2003), have shown that, essentially for all \ufb02ares, in addition to the emission from the loop footpoints (FPs) (e.g. Hoyng et al. 1981), there is substantial HXR emission from a region near the loop top (LT). Thus, the total radiating electron spectrum is a complex combination of the accelerated electrons at the LT and those present in the FPs after having been modi\ufb01ed by transport e\ufb00ects. It is therefore clear that separate inversion of the LT and FP photon spectra to electron spectra would provide more direct information on the acceleration mechanism. More recently, Piana et al. (2007) have applied the regularized inversion technique to the RHESSI data in the Fourier domain (Hurford et al. 2002) to obtain electron \ufb02ux spectral images. The goal of this letter is to demonstrate that with the resulting spatially resolved electron \ufb02ux spectra at the LT and FPs one can begin to constrain the acceleration model parameters directly. In the next section we present a brief review of the relation between the derived electron \ufb02ux images and the characteristics of the SA model and in \u00a73 we apply this relation to a \ufb02are observed by RHESSI. A brief summary and our conclusion are presented in \u00a74. 2. ACCELERATION AND RADIATION The observations of distinct LT and FP HXR emissions, with little or no emission from the legs of the loop, point to the LT as the acceleration site and require enhanced scattering of electrons in the LT. Petrosian & Donaghy (1999) showed that the most likely scattering agent is turbulence which can also accelerate particles stochastically. In fact SA of the background thermal plasma has been the leading mechanism for acceleration of electrons (e.g. Hamilton & Petrosian 1992; Miller et al. 1996; Park et al. 1997; Petrosian & Liu 2004; Grigis & Benz 2006; Bykov & Fleishman 2009) \f2 Petrosian & Chen and ions (e.g. Ramaty 1979; Mason et al. 1986; Mazur et al. 1995; Liu et al. 2004, 2006; Petrosian et al. 2009), and is the most developed model in terms of comparing with observations. 2.1. Particle Kinetic Equation In this model one assumes that turbulence is produced at or near the LT region (with background electron density nLT, volume V , and size L). In presence of a su\ufb03ciently high density of turbulence the scattering can result in a mean scattering length or time (\u03c4scat) smaller than L or the crossing time (\u03c4cross = L/v), leading to a nearly isotropic pitch angle distribution (Petrosian & Liu 2004). 
The general Fokker-Planck equation for the density spectrum N(E) of the accelerated electrons, averaged over the turbulent acceleration region, simplifies to \frac{\partial N}{\partial t} = \frac{\partial^2}{\partial E^2}\left[D_{EE}N\right] - \frac{\partial}{\partial E}\left[\left(A(E) - \dot{E}_L(E)\right)N\right] - \frac{N}{T_{\rm esc}(E)} + \dot{Q}(E), (1) where D_{EE}(E) and A(E) are the diffusion rate and direct acceleration rate by turbulence (for stochastic acceleration, A(E) = D_{EE}\zeta(E)/E + dD_{EE}/dE, where \zeta(E) = (2 - \gamma^{-2})/(1 + \gamma^{-1}), \gamma = 1 + E/m_ec^2 = 1/\sqrt{1-\beta^2} is the Lorentz factor, and v = c\beta is the electron velocity), respectively, \dot{E}_L is the electron energy loss rate, and \dot{Q}(E) and N(E)/T_{\rm esc}(E) describe the rate of injection of (thermal) particles and escape of the accelerated particles from the acceleration region. For electrons of energies below ~1 MeV, which are of interest here, Coulomb collisions dominate the energy loss rate (at higher energies, synchrotron loss must be included in \dot{E}_L), \dot{E}_L = \dot{E}_{\rm Coul} = 4\pi r_0^2 m_e c^3 n_{\rm LT} \ln\Lambda/\beta, (2) where \ln\Lambda is the Coulomb logarithm, taken to be 20 for solar flare conditions. Following Petrosian & Liu (2004), we approximate the escape time as T_{\rm esc}(E) \simeq \tau_{\rm cross}(1 + \tau_{\rm cross}/\tau_{\rm scat}), which smoothly connects the two limiting cases of \tau_{\rm cross}/\tau_{\rm scat} \gg 1 and \ll 1. The mean scattering time is related to the pitch angle diffusion rates (Dung & Petrosian 1994; Pryadko & Petrosian 1997) due to both Coulomb collisions (D^{\rm Coul}_{\mu\mu} and \tau^{\rm Coul}_{\rm scat}) and turbulence (D^{\rm turb}_{\mu\mu} and \tau^{\rm turb}_{\rm scat}) as \tau_{\rm scat}(E) = \frac{1}{8}\int_{-1}^{1} \frac{(1-\mu^2)^2}{D^{\rm Coul}_{\mu\mu}(\mu,E) + D^{\rm turb}_{\mu\mu}(\mu,E)}\,d\mu. (3) Similarly we can define the scattering times \tau^{\rm Coul}_{\rm scat} and \tau^{\rm turb}_{\rm scat} for each process alone. For Coulomb collisions, D^{\rm Coul}_{\mu\mu} = \frac{2(1-\mu^2)}{\gamma+1}\,\frac{\dot{E}_{\rm Coul}}{E}. For turbulence, D^{\rm turb}_{\mu\mu}, like D_{EE}, depends on the spectrum of turbulence and on the background plasma density, composition, temperature, and magnetic field (see Schlickeiser 1989; Dung & Petrosian 1994; Pryadko & Petrosian 1997, 1998, 1999; Petrosian & Liu 2004). Since these coefficients determine the spectrum of the accelerated electrons, one can then constrain some aspects of the acceleration mechanism if an accurate spectrum of the electrons can be derived from observations. 2.2. LT and FP Spectra The accelerated electrons in the (LT) acceleration region with a flux spectrum F_{\rm LT}(E) = vN(E) produce thin target bremsstrahlung emissivity (photons s^{-1} keV^{-1}) J_{\rm LT}(\epsilon) = n_{\rm LT}V \int_{\epsilon}^{\infty} F_{\rm LT}(E)\,\sigma(\epsilon, E)\,dE, (4) where \sigma(\epsilon, E) is the angle-averaged bremsstrahlung cross section (Koch & Motz 1959). The escaping electrons with flux F_0(E) = N(E)L/T_{\rm esc} produce thick target bremsstrahlung emissivity (coming mostly from the FPs) (see Petrosian 1973; Park et al. 1997), J_{\rm FP}(\epsilon) = nV \int_{\epsilon}^{\infty} F_{\rm FP}(E)\,\sigma(\epsilon, E)\,dE, (5) where n is the density and F_{\rm FP} is the effective radiating electron flux spectrum at the FPs, F_{\rm FP}(E) = vN_{\rm FP} = \frac{v(E)}{\dot{E}_L(n)} \int_{E}^{\infty} \frac{N(E')}{T_{\rm esc}(E')}\,dE'. (6) Since \dot{E}_L \propto n, the FP photon spectrum is independent of density. In what follows we evaluate equations (5) and (6) using the LT density n_{\rm LT}. 2.3.
Acceleration Model Parameters Regularized inversion of RHESSI count visibilities gives the electron visibilities (Piana et al. 2007), which can then be used to construct images of electron \ufb02ux (multiplied by column density). From these images, we extract the spatially resolved spectra, FLT(E) at the LT and FFP(E) at the FPs. Thus we can obtain the accelerated electron spectrum N(E) at the thin target LT. Also from di\ufb00erentiation of equation (6) we derive the escape time as Tesc = \u2212N(E)/ d dE(FFP \u02d9 EL/v), and by converting the denominator to a logarithm derivative we get Tesc(E) = \u03c4L(E)(FLT/FFP) \u03b4FP(E) + 2/(\u03b3 + \u03b32) \u2261\u03c4L(E)\u03be(E), (7) where the FP index \u03b4FP(E) = \u2212d ln FFP d ln E , 1/(\u03b3 + \u03b32) = \u2212d ln v(E) d ln E , and \u03c4L(E) = E/ \u02d9 EL is the Coulomb loss time at the LT. The function \u03be(E) is an observable quantity representing the ratio Tesc/\u03c4L. In the above derivation, we have used the relativistic form of electron velocity v(E). Given Tesc(E), from its relation to \u03c4cross and \u03c4scat, we obtain the mean scattering time as \u03c4scat \u2243\u03c4 2 cross/(Tesc \u2212 \u03c4cross), which is valid for Tesc > \u03c4cross. Disentanglement of \u03c4 turb scat from \u03c4scat is complicated (eq. [3]) at energies when turbulence and Coulomb collisions contribute equally to \u03c4scat. However, if turbulence dominates the pitch angle di\ufb00usion, then to the \ufb01rst order we can write \u03c4 turb scat \u2243\u03c4scat(1 + \u03c4scat/\u03c4 Coul scat ), and obtain some average value of Dturb \u00b5\u00b5 . Furthermore, given N(E) we can in principle determine the other Fokker-Planck coe\ufb03cients, namely A(E) and DEE (see eq. [1]). Therefore we can reach a consistent picture of the acceleration process due to turbulence and begin to make inroads into the spectrum and the nature of turbulence itself. \fElectron Acceleration in Solar Flares 3 Figure 1. Electron \ufb02ux spectral images (with 8 keV bin width above 34 keV and 2 keV bin width at lower energies) up to 250 keV in the 2003 November 03 \ufb02are during the nonthermal peak as reconstructed from two sets of the regularized electron visibilities by the MEM NJIT algorithm (Schmahl et al. 2007). The images show one LT and two FP sources above 34 keV and a loop structure at lower energies. Three circles are used to extract the LT and FP electron \ufb02ux spectra above 34 keV (see Figure 2). Figure 2. Top: Electron power spectra NLTE2F (E) for the LT (square), the two FPs summed (diamond), and all three sources (LT + FPs, cross) in the 2003 November 03 \ufb02are. The LT spectrum can be \ufb01tted by a power-law, and the summed FP and total spectra by a broken power-law. Also note that the southern FP spectrum (downward triangular) is \ufb02atter than the northern FP spectrum (upward triangular), mostly above \u223c90 keV by \u223c0.3 powers of energy, consistent with their asymmetric locations with respect to the LT. Bottom: Escape time (\ufb01lled circle) and turbulence scattering time (\ufb01lled triangular) in the (LT) acceleration region. The escape time can be well \ufb01tted by either a power-law or a broken power-law (dash dot) increasing with energy, and the turbulence scattering time by a power-law rapidly decreasing with energy. Also shown are the crossing, Coulomb scattering, and the mean scattering (open triangular) times. The reduced chi-squares for all the \ufb01ttings are below or around 1. 3. 
APPLICATION: THE 2003 NOVEMBER 03 FLARE As a \ufb01rst demonstration, we apply our new procedure to the 2003 November 03 solar \ufb02are (X3.9 class) during the nonthermal peak, in which we \ufb01nd a hard LT source (extending above 100 keV in HXR) distinct from the thermal loop in addition to two FP sources4. In Figure 1 we show the electron \ufb02ux images up to 250 keV, which also show a loop at low energies and one LT and two FPs at higher energies. In Figure 2 top panel we show the electron spectra NLTE2F(E), where NLT = nLTL is the LT column density. The LT \ufb02ux spectrum can be \ufb01tted by a power-law with an index \u03b4LT = 3.0. The summed FP \ufb02ux spectrum can be better \ufb01tted by a broken powerlaw with the indexes \u03b41 = 2.1 and \u03b42 = 2.8 below and above the break energy Eb = 91 \u00b1 3 keV. It is clear that the total radiating electron spectrum di\ufb00ers signi\ufb01cantly from the (LT) accelerated electron spectrum. Given the above LT and FP electron \ufb02ux spectra we derive the energy dependence of the escape time (eq. [7]). The LT density can be estimated as nLT \u2243 p EM/L3 \u2243 0.5 \u00d7 1011 cm\u22123, where the LT size L \u2243109 cm is obtained from the LT angular size, and the emission mea4 Q. Chen & V. Petrosian (2010a, in preparation) present HXR observations of this \ufb02are and argue that the high energy LT source should not be an artifact of the pulse pileup e\ufb00ect. sure EM \u22430.2 \u00d7 1049 cm\u22123 is obtained from spectral \ufb01tting of the LT thermal emission. As in Figure 2 bottom panel, the escape time increases slowly with energy and can be \ufb01tted by either a power-law, Tesc(E) = 0.3 s \u0012 E 100 keV \u0013\u03ba , \u03ba = 0.83 \u00b1 0.10, (8) or a broken power-law with a break at Eb = 118 \u00b1 37 keV, and the indexes \u03ba1 = 0.62 \u00b1 0.23 and \u03ba2 = 1.09 \u00b1 0.25. The fact that the escape time should be longer than the crossing time yields an upper limit on NLT, which is satis\ufb01ed by the above LT density and size. We then calculate the mean scattering time in the LT region. Except at the lowest energy, the Coulomb contribution is small so that the scattering time thus calculated can be attributed to turbulence. The scattering time due to turbulence (see \u00a72.3) can be \ufb01tted by a power-law above \u223c40 keV, \u03c4 turb scat = 0.016 s \u0012 E 100 keV \u0013\u2212\u03bb , \u03bb = 1.90 \u00b1 0.14. (9) 4. SUMMARY AND DISCUSSION In this paper we describe a new method to directly obtain the model parameters for stochastic acceleration \f4 Petrosian & Chen of particles by turbulence in solar \ufb02ares from regularized inversion of the high resolution RHESSI HXR data (Piana et al. 2007). We have argued that particle acceleration takes place at or near the LT region. The accelerated electrons produce thin target emission at the LT and then escape downward to the dense FP region undergoing Coulomb collisions and producing thick target emission. In this model the LT and FP electron spectra are connected by the escape process from the LT region (eq. [6]), thus allowing us to determine the energy dependence of the escape time. Our method has the advantage that one can now constrain the model parameters uniquely rather than just satisfying the consistency between the model and the data as commonly done by forward \ufb01tting routines. This method can be applied to \ufb02ares with simultaneous HXR emission from the LT and FP sources. 
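A minimal numerical sketch of that procedure is given below, assuming power-law loop-top and footpoint spectra as placeholders (the actual analysis uses the regularized electron visibility spectra); only Eqs. (2), (3) and (7) and the quoted loop-top density and size enter.

```python
# Sketch of Section 2.3: from the ratio of LT to FP electron fluxes and the FP
# spectral index, recover T_esc(E) via Eq. (7), then the mean scattering time via
# tau_scat ~ tau_cross^2/(T_esc - tau_cross). Placeholder inputs, not a fit.
import math

m_e_c2_keV = 511.0
c = 3.0e10
r0 = 2.818e-13
keV = 1.602e-9
lnLambda = 20.0
n_LT = 0.5e11          # loop-top density [cm^-3], as estimated in Section 3
L = 1.0e9              # loop-top size [cm], as estimated in Section 3

def beta_gamma(E_keV):
    gamma = 1.0 + E_keV / m_e_c2_keV
    return math.sqrt(1.0 - 1.0 / gamma**2), gamma

def tau_L(E_keV):
    """Coulomb loss time E/Edot_Coul at the loop top (Eq. 2)."""
    beta, _ = beta_gamma(E_keV)
    Edot = 4.0 * math.pi * r0**2 * (m_e_c2_keV * keV) * c * n_LT * lnLambda / beta
    return E_keV * keV / Edot

def T_esc(E_keV, flux_ratio, delta_FP):
    """Eq. (7): T_esc = tau_L * (F_LT/F_FP) / [delta_FP + 2/(gamma + gamma^2)]."""
    _, gamma = beta_gamma(E_keV)
    return tau_L(E_keV) * flux_ratio / (delta_FP + 2.0 / (gamma + gamma**2))

def tau_scat(E_keV, flux_ratio, delta_FP):
    beta, _ = beta_gamma(E_keV)
    tau_cross = L / (beta * c)
    Tesc = T_esc(E_keV, flux_ratio, delta_FP)
    return tau_cross**2 / (Tesc - tau_cross) if Tesc > tau_cross else float("inf")

for E in (50.0, 100.0, 200.0):
    # flux_ratio = F_LT(E)/F_FP(E) and delta_FP(E) would come from the electron images
    print(f"E={E:5.0f} keV  T_esc={T_esc(E, 2.0, 2.5):6.3f} s  "
          f"tau_scat={tau_scat(E, 2.0, 2.5):7.4f} s")
```

Incidentally, combining the fitted index \lambda \simeq 1.9 of Eq. (9) with the \tau^{\rm turb}_{\rm scat} \propto E^{2-q} scaling invoked below gives, as a rough non-relativistic estimate, q \simeq 3.9, consistent with the steeper-than-Kolmogorov spectrum inferred there.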
We have applied our method to the 2003 November 03 \ufb02are, in which we can obtain the electron \ufb02ux images for both the LT and FPs up to 250 keV. The LT accelerated electron \ufb02ux spectrum can be \ufb01tted by a power-law and the e\ufb00ective radiating \ufb02ux spectrum at the FPs is better \ufb01tted by a broken power-law. From these spectra we derive the energy variation of the escape time and the scattering time. As seen in Figure 2, the turbulence scattering time is relatively short and decreases with energy. A short scattering time may arise from a high energy density of turbulence (Eturb), with the exact relationship depending also on the magnetic \ufb01eld (B), and the spectral index (q) and minimum wave number (kmin) of turbulence. A high level of turbulence also implies e\ufb03cient acceleration which generally means a \ufb02at spectrum for the accelerated electrons, which is the case for the current \ufb02are. The energy dependences of \u03c4 turb scat and DEE are also a function of these characteristics of turbulence; at high energies they are determined primarily by the spectral index of turbulence (see Dung & Petrosian 1994; Pryadko & Petrosian 1997, 1998, 1999; Liu et al. 2006). For the usually assumed Kolmogorov (q = 5/3) or Iroshnikov-Kraichnan (q = 3/2) turbulence spectra, one expects the scattering time to increase with energy as E2\u2212q, which translates into an escape time varying roughly as Tesc \u221d1/ \u221a E at high (but non-relativistic) energies. The energy dependences of Tesc and \u03c4 turb scat obtained here require a steeper turbulence spectrum (q > 3) at high wave numbers. Such a steep spectrum can be present beyond the inertial range where damping is important (e.g. Jiang et al. 2009). The electron energies and the wave-particle resonance condition determine the wave vector of the accelerating plasma waves. This relation depends primarily on the plasma parameter \u03b1 \u221d\u221an/B (e.g. Petrosian & Liu 2004). Thus, given the magnetic \ufb01eld and plasma density we can determine the wave vectors for transition from the inertial to the damping ranges of turbulence. It should, however, be emphasized that the results obtained here may not be representative of typical \ufb02ares. More commonly \ufb02ares have much softer LT emission, which would give an escape time decreasing (and scattering time increasing) with energy, consistent with a low level and a \ufb02at spectrum of turbulence. The exact relation between the derived quantities (N(E), Tesc, and \u03c4turb scat ) and the turbulence characteristics (Eturb, B, q, etc.) is complicated and depends on the angle of propagation of the plasma waves with respect to magnetic \ufb01eld and other plasma conditions. In future, we will apply these procedures to more \ufb02ares (Q. Chen & V. Petrosian, 2010b, in preparation) and deal with these relations explicitly. We thank the referee for helpful comments. We thank Anna Maria Massone and Gordon Hurford for providing the visibility inversion code and valuable discussions about data analysis, and Siming Liu and Wei Liu for various discussions. RHESSI is a NASA small explorer mission. This work is supported by NSF grant ATM0648750 and NASA grant NNX10AC06G." 
+ }, + { + "url": "http://arxiv.org/abs/0909.5051v1", + "title": "Gamma-Ray Bursts as Cosmological Tools", + "abstract": "In recent years there has been considerable activity in using gamma-ray\nbursts as cosmological probes for determining global cosmological parameters\ncomplementing results from type Ia supernovae and other methods. This requires\na characteristics of the source to be a standard candle. We show that contrary\nto earlier indications the accumulated data speak against this possibility.\nAnother method would be to use correlation between a distance dependent and a\ndistance independent variable to measure distance and determine cosmological\nparameters as is done using Cepheid variables and to some extent Type Ia\nsupernovae. Many papers have dealt with the use of so called Amati relation,\nfirst predicted by Lloyd, Petrosian and Mallozzi, or the Ghirlanda relation for\nthis purpose. We have argued that these procedure involve many unjustified\nassumptions which if not true could invalidate the results. In particular, we\npoint out that many evolutionary effects can affect the final outcome. In\nparticular, we demonstrate that the existing data from Swift and other earlier\nsatellites show that the gamma-ray burst may have undergone luminosity\nevolution. Similar evolution may be present for other variables such as the\npeak photon energy of the total radiated energy. Another out come of our\nanalysis is determination of the luminosity function and the comoving rate\nevolution of gamma-ray bursts which does not seem to agree with the cosmic star\nformation rate. We caution however, that the above result are preliminary and\nincludes primarily the effect of detection threshold. Other selection effects,\nperhaps less important than this, are also known to be present and must be\naccounted for. We intend to address these issues in future publications.", + "authors": "Vahe Petrosian, Aurelien Bouvier, Felix Ryde", + "published": "2009-09-28", + "updated": "2009-09-28", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION The change in our understanding of gamma-ray bursts (GRBs) in less than a decade has been unprecedented. We have gone from groping for ways to determine their distances (from solar system to cosmological scales) to attempts to use them as cosmological probes. Observations by instruments on board a series of satellites starting with BeppoSAX and continuing with HETE, INTEGRAL and Swift, have been the primary source of this change. The higher spatial resolution of these instruments has allowed the measurement of redshifts of many well-localized GRBs, which has in turn led to several attempts to discover some emission characteristics which appears to be a \u201cstandard candle\u201d (SC for short), or shows a well de\ufb01ned correlation (with a small dispersion) with another distance independent measurable characteristic. One can use such relations to determine the distances to GRBs in a manner analogous to the use of the Cepheid variables. Example of this are the lag-luminosity and variability-luminosity relations (Norris et al 2000, Norris 2002, Fenimore & RamirezRuiz 2000, Reichart et al 2001) which were exploited for determining some cosmological aspects of these sources (Lloyd, Fryer & Ramirez-Ruiz 2002, Kocevski & Liang 2006) using the methods developed by Efron & Petrosian (1992, 1994, 1999). 
More recently there has been a \ufb02urry of activity dealing with the observed relation between the peak energy Ep of the \u03bdF\u03bd spectrum and the total (isotropic) gamma-ray energy output Eiso (Ep \u221dE\u03b7 iso, \u03b7 \u223c0.6) predicted by Lloyd et al. (2000, LPM00) and established to be the case by Amati et al. (2002) (see also Lamb et al. 2004, Attiea et al 2004). Similarly Ghirlanda et al. (2004a) has shown a correlation with smaller dispersion between Ep and the beaming corrected energy E\u03b3 (Ep \u221dE\u03b7\u2032 \u03b3 , \u03b7\u2032 \u223c0.7) where: E\u03b3 = Eiso \u00d7 (1 \u2212cos(\u03b8jet)) 2 (1) and \u03b8jet is the half width of the jet. These are followed by many attempt to use these relations for determining cosmological parameters (Dai et al. 2004,Ghirlanda et al. 2004b, 2005 [Gea05], Friedman & Bloom 2005, [FB05]). There are, however, many uncertainties associated with the claimed relations and even more with the suggested cosmological tests. The purpose of this paper is to investigate the utility of GRBs as cosmological tools either as SCs or via some correlation. In the next section we review the past and current status of the \ufb01rst possibility and in \u00a73 we discuss the energy-spectrum correlation and whether it can be used for cosmological model parameter determination. Finally in \u00a74, we address the question of cosmological luminosity and rate density evolution of GRB based on the existing sample with known redshifts. 2. Standard Candle? The simplest method of determining cosmological parameters is through SCs. Type Ia supernovae (SNIa) are a good example of this. But currently their observations are limited to relatively nearby universe (redshift z < 2). Galaxies and active galactic nuclei (or AGNs), on the other hand, can be seen to much higher redshifts (z > 6) but are not good SCs. GRBs \f\u2013 3 \u2013 are observed to similar redshifts and can be detected to even higher redshifts by current instruments. So that if there were SCs they can complement the SNIa results. In general GRBs show considerable dispersion in their intrinsic characteristics. The \ufb01rst indication that GRBs might be SCs came from Frail et al. (2001) observation showing that for a sample of 17 GRBs the dispersion of the distribution of E\u03b3 is signi\ufb01cantly smaller than that of Eiso. The determination of E\u03b3 requires a well de\ufb01ned light curve with a distinct steepening. The jet opening \u03b8jet depends primarily on the time of the steepening and the bulk Lorentz factor, but its exact value is model dependent and depends also weakly on Eiso and the density of the background medium \u03b8jet = 0.101 radian \u00d7 \u0012 tbreak 1 day \u00133/8 \u0010 \u03b7 0.2 \u00111/8 \u0010 n 10 cm\u22123 \u00111/8 \u00121 + z 2 \u0013\u22123/8 \u0012 Eiso 1053 ergs \u0013\u22121/8 , (2) In Figure 1 we show the distribution of Eiso and E\u03b3 for 25 pre-Swift GRBs (mostly compiled in FB05). As evident, there is little di\ufb00erence between the two distributions (except for their mean values) and neither characteristics is anywhere close to being a SC. We have calculated the jet angle \u03b8jet (from equation 2) for 25 pre-Swift GRBs with relatively well de\ufb01ned tbreak. Assuming a gamma-ray e\ufb03ciency \u03b7 = 0.2 and a value of circum burst density estimated from broadband modeling of the lightcurve when available (otherwise we use the default value of n = 10 cm\u22123). 
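Equations (1)-(2) are straightforward to evaluate; the sketch below does so for an illustrative burst (the input numbers are assumptions, not one of the 25 pre-Swift GRBs).

```python
# Jet half-opening angle from Eq. (2) and beaming-corrected energy from Eq. (1).
import math

def theta_jet(t_break_day, z, E_iso_erg, eta=0.2, n_cm3=10.0):
    """Eq. (2): jet half-opening angle [radians]."""
    return (0.101 * t_break_day ** (3.0 / 8.0) * (eta / 0.2) ** (1.0 / 8.0)
            * (n_cm3 / 10.0) ** (1.0 / 8.0) * ((1.0 + z) / 2.0) ** (-3.0 / 8.0)
            * (E_iso_erg / 1.0e53) ** (-1.0 / 8.0))

def E_gamma(E_iso_erg, theta_rad):
    """Eq. (1): beaming-corrected energy."""
    return E_iso_erg * (1.0 - math.cos(theta_rad)) / 2.0

E_iso = 1.0e53                         # erg (assumed)
th = theta_jet(t_break_day=1.5, z=1.0, E_iso_erg=E_iso)
print(f"theta_jet = {math.degrees(th):.1f} deg, E_gamma = {E_gamma(E_iso, th):.2e} erg")
# A beaming correction of order 1/300 is why the E_gamma distribution in Figure 1
# is shifted by roughly two orders of magnitude relative to E_iso.
```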
Unfortunately, fewer than expected Swift GRBs have optical light curves and their X-ray light curves show considerable structure (several breaks and flaring activity), with several GRBs showing no sign of jet-break or beaming (Nousek et al. 2006). This has brought the whole idea of jet breaks and calculating Eγ into question. The upshot of this is that the Swift Eγ, like Eiso, also has a broad distribution extending over two decades. Thus, any cosmological use of GRBs must include the effects of the breadth of the distributions (footnote 1). Footnote 1: It should also be noted that there is an observational bias in favor of detecting smaller jet angles (i.e. earlier jet-breaks), so that the population as a whole (including those with very late jet-breaks) will have an even broader distribution. Note also that a SC Eγ means that the Ghirlanda relation would have an index η′ ∼ 0. Fig. 1.— Distribution of Eiso and Eγ for 25 pre-Swift GRBs with evidence for a jet-break and beaming. The Eγ distribution is shifted by about two orders of magnitude compared to the Eiso distribution due to the beaming factor correction. However, the dispersions of the two distributions are very similar (σ_Eiso = 0.68, σ_Eγ = 0.52) and broad, indicating that GRBs cannot be assumed to be SCs. 3. Correlations When addressing the correlation between any two variables, one should distinguish between a one-to-one relation and a statistical correlation. In general the correlation between two variables (say, Ep and Eiso) can be described by a bi-variate distribution ψ(Ep, Eiso). If this is a separable function, ψ(Ep, Eiso) = φ(Eiso)ζ(Ep), then the two variables are said to be uncorrelated. A correlation is present if some characteristic (say the mean value) of one variable depends on the other: e.g. ⟨Ep⟩ = g(Eiso). Only in the absence of dispersion will there be a one-to-one relation; ζ(Ep) = δ(Ep − g(Eiso)). In general, the determination of the exact nature of the correlation is complicated by the fact that the extant data suffer from many observational selection biases and truncations. An obvious bias is that most samples are limited to GRBs with peak fluxes above some threshold. There are also biases in the determination of Ep^obs (see e.g. Lloyd & Petrosian 1999; LP99). The methods devised by Efron & Petrosian (1992, 1994) are particularly suitable for determination of correlations in such complexly truncated data. The first indication of a correlation between the energetics and spectrum of GRBs came from Mallozzi et al. (1995), who reported a correlation between the observed peak flux fp and Ep^obs. A more comprehensive analysis by LPM00, using the above mentioned methods, showed that a similar correlation also exists between the observed total energy fluence Ftot and Ep^obs. Both these quantities depend on the redshift z ≡ Z − 1: F_tot = E_iso/(4π d_m^2 Z), and E_p^obs = E_p/Z, (3) where d_m is the metric distance; for a flat universe d_m(Z) = (c/H_0) ∫_1^Z dZ′ [Ω(Z′)]^{-1/2}, with Ω(Z) = ρ(Z)/ρ_0, (4) describing the evolution of the total energy density ρ(z) of all substance (visible and dark matter, radiation, dark energy or the cosmological constant). LPM00 also showed that the correlation expected from these interrelationships is not sufficient to account for the observed correlation, and that there must be an intrinsic correlation between Ep and Eiso.
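For reference, equations (3) and (4) can be evaluated numerically. The sketch below is not code from the paper; it assumes a flat matter-plus-cosmological-constant model with Ω_M = 0.3, Ω_Λ = 0.7 and H0 = 70 km s^-1 Mpc^-1 (the values adopted later in §4.2.1), and the function names are ours.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 2.998e5       # speed of light [km/s]
MPC_CM = 3.086e24      # one Mpc in cm

def Omega(Z, Om=0.3, OL=0.7):
    # Z = 1 + z; matter plus cosmological constant (an assumed form of eq. 4)
    return Om * Z**3 + OL

def d_m(z, H0=70.0, Om=0.3, OL=0.7):
    """Metric (comoving) distance for a flat universe, eq. (4), in cm."""
    integral, _ = quad(lambda Zp: Omega(Zp, Om, OL) ** -0.5, 1.0, 1.0 + z)
    return (C_KM_S / H0) * integral * MPC_CM

def E_iso_from_fluence(F_tot_erg_cm2, z, **cosmo):
    """Invert eq. (3): E_iso = 4*pi*d_m^2 * Z * F_tot."""
    Z = 1.0 + z
    return 4.0 * np.pi * d_m(z, **cosmo) ** 2 * Z * F_tot_erg_cm2
```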
Without knowledge of redshifts LPM00 predicted the relation Eiso ∝ Ep^0.5, which is very similar to the so-called Amati relation obtained for GRBs with known redshifts. However, it should be emphasized that the LPM00 result implies a statistical correlation and not a one-to-one relation needed for using GRBs as a reliable distance indicator. Nakar & Piran (2004, 2005) and Band & Preece (2005) have shown convincingly that the claimed tight one-to-one relations cannot be valid for all GRBs. We believe that the small dispersion seen in GRBs with known redshifts is due to selection effects arising in the localization and redshift determination processes: e.g., these GRBs may represent the upper envelope of the distribution. A recent analysis by Ghirlanda et al. (2005) using pseudo-redshifts shows a much broader dispersion (as in LPM00). The claimed tighter Ghirlanda relation could be due to an additional correlation between the jet opening angle θjet and Eiso, Ep, or both. However, as mentioned above, the picture of jet breaks and the measurements of θjet and Eγ are in a confusing state in view of the Swift observations. We have reanalyzed the existing data and determined the parameters of the Amati and Ghirlanda relations. In Figure 2 we show Eiso (and Eγ, excluding some outliers) vs Ep for all GRBs with known redshifts (and θjet). We compute the best power law fit for both of these correlations and describe the dispersion around it by the standard deviation. For the Ep−Eiso correlation we find Ep ∝ Eiso^η, η ∼ 0.328 ± 0.036 and σiso = 0.286. For the Ep−Eγ correlation we find Ep ∝ Eγ^η′, η′ ∼ 0.555 ± 0.089 and σγ = 0.209. We find that the additional data has reduced the significance of the correlations or has increased the dispersions (compare our values of σiso ∼ .... and σγ ∼ .... obtained by Amati et al. and Ghirlanda et al.). This is contrary to what one would expect for a sample with a true correlation. From this we conclude that, as predicted by LPM00, there is a strong correlation between Ep and Eiso (or Eγ), but for the GRB population as a whole both variables have a broad distribution and most GRBs do not obey the tight relations claimed earlier. 3.1. Correlations and Cosmology Attempts to use observations of extragalactic sources for cosmological studies have shown us that extreme care is required. All observational biases must be accounted for and theoretical ideas tested self-consistently, avoiding circular arguments. This is especially true for GRBs at this stage of our ignorance about the basic processes involved in their creation, energizing, particle acceleration and radiation production. Here we outline some of the difficulties and how one may address and possibly overcome them. Let us assume that there exists a one-to-one but unknown relation between Eiso and Ep, Eiso = E_0 f(Ep/E_p0), and that we have a measure of Ftot and z. Here E_0 and E_p0 are normalization constants, and for convenience we have defined f(x), which is the inverse of the function g introduced above. From equations (3) and (4) we can write ∫_1^Z dZ′ [Ω(Z′)]^{-1/2} = [ f(E_p^obs Z / E_p0) / (Z F_tot / F_0) ]^{1/2}, with F_0 = E_0 / [4π (c/H_0)^2]. (5) For general equations of state P = w_i ρ, Ω(Z) = Σ_i Ω_i Z^{3(1+w_i)}. The aim of any cosmological test is to determine the values of the different Ω_i and their evolutions (e.g.
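The power-law fits and dispersions quoted here amount to a linear regression in log-log space. A minimal sketch of such a fit is given below; the input arrays are synthetic placeholders (the actual burst tables are not reproduced here), and the exact fitting procedure used by the authors is not specified, so this is only illustrative.

```python
import numpy as np

def fit_powerlaw(E_iso, E_p):
    """Least-squares fit of log10(Ep) = eta * log10(Eiso) + const.
    Returns (eta, normalization, dispersion of the residuals in dex)."""
    x, y = np.log10(E_iso), np.log10(E_p)
    eta, const = np.polyfit(x, y, 1)
    sigma = np.std(y - (eta * x + const))
    return eta, 10.0 ** const, sigma

# Illustrative call on placeholder arrays standing in for the real sample:
E_iso = 10.0 ** np.random.uniform(51, 54, 43)                                  # erg, hypothetical
E_p = 300.0 * (E_iso / 1e52) ** 0.33 * 10 ** np.random.normal(0, 0.3, 43)      # keV, hypothetical
print(fit_powerlaw(E_iso, E_p))
```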
changes in wi) . If we make the somewhat questionable assumption of complete absence of cosmological evolutions of Eiso, Ep and the function f(x), then this equation involves two unknown functions \u2126(z) and f(x). In principle, if the forms of these functions are known, then one can rely on some kind of minimum \u03c72 method to determine the parameters of both functions, assuming that there is su\ufb03cient data to overcome the degeneracies inherent in dealing with large number of parameters. By now the parametrization of \u2126(Z) has become standard. However, the form and parameter values of f(x) is based on poorly understood data and theory, and currently requires an assumed cosmological model. Using the form (e.g. the power law used by Gea05) derived based on an assumed cosmological model to carry out such a test is strictly speaking circular. (It is even more circular to \ufb01x the value of parameters, in this case the index \u03b7, obtained in one cosmological model to test others as done by Dai et al. 2004). Even though di\ufb00erent models yield results with small di\ufb00erences, this does not justify the use of circular logic. The di\ufb00erences sought in the \ufb01nal test using equation 5 will be of the same order. The situation is even more di\ufb03cult because as stressed above the correlation is not a simple oneto-one relation but is a statistical one. Finally, the most important unknown which plagues all cosmological tests using discrete sources is the possibility of the existence of an a priory unknown evolution in one or all of the relevant characteristics. For example, the intrinsic luminosity Liso might su\ufb00er large evolution which we refer to as luminosity evolution. The value of Ep can also be subject to selection e\ufb00ects, or the correlation function f(x) may evolve \f\u2013 7 \u2013 Fig. 2.\u2014 Ep \u2212Eiso and Ep \u2212E\u03b3 correlations. The 43 gray circles are all the bursts from our sample that had good enough spectral observations to \ufb01nd the energy peak of the \u03bdF\u03bd spectrum, and the 21 black diamonds are a subset of those bursts with a jet break found in their optical lightcurve. Solid lines are the best \ufb01t we \ufb01nd for the two correlations (Ep \u221dE\u03b7 iso, \u03b7 \u223c0.328 \u00b1 0.036 and Ep \u221dE\u03b7\u2032 \u03b3 , \u03b7\u2032 \u223c0.555 \u00b1 0.089) and dashed lines are the best \ufb01t found by Amati et al. 2003 with 20 bursts (Ep \u221dE0.35 iso ) and Ghirlanda et al. 2004 with 16 bursts (Ep \u221dE0.70 \u03b3 ). \f\u2013 8 \u2013 with redshift, i.e. \u03b7 = \u03b7(z). For such a general case we are dealing with 4 unknown ??? of the two above. Moreover the rate function of GRBs most likely is not a constant and can in\ufb02uence the result s with a broad distribution. We address some of these questions now. 4. GRB Evolutions For a better understanding of GRBs themselves and the possibility of their use for cosmological tests we need to know whether characteristics such as Eiso, \u03b8jet, Ep, the correlation function f(x) and the occurrence rate \u02d9 \u03c1GRB (number of GRBs per unit co-moving volume and time) change with time or Z. 
For example to use the Ep \u2212Eiso correlation for cosmological purposes, one need to \ufb01rst establish the existence of the correlation and determine its form locally (low redshift).One then has to rely on a theory or non-circular observations to show that either this relation does not evolve or if it does how it evolves.The existing GRB data is not su\ufb03cient for such a test. In factthere seems to be some evidence that there is evolution. Lie??? has shown by subdividing the data into 4 z-bins, he obtained di\ufb00erent index \u03b7 which change signi\ufb01cantly, rendering previous use of this relation for cosmological test invalid. This emphasizes the need for a solid understanding of the evolution of all GRB characteristics. Two of the most important characteristics are the energy generation Eiso and the rate of GRBs. These are also two characteristics which can be determined more readily and ??? with higher uncertainty. In what follows we address these two questions. We will use all GRBs with known Z irrespective of whether we know the jet angle \u03b8jet because this gives us a larger sample and because in view of new Swift observations (Nousek et al. 2006) the determination of the latter does not seem to be straightforward. Also since it is often easier to determine the peak \ufb02ux fp rather than the \ufb02uence threshold, in what follows we will use the peak bolometric luminosity Lp = 4\u03c0d2 mZ2fp instead of Eiso = R L(t)dt. 4.1. Evolution with Pseudoredshifts Before considering GRBs with known redshifts we brie\ufb02y mention that there has been two indications of strong evolutionary trends from use of pseudo redshifts based on the socalled luminosity-variability and lag-luminosity correlations (Lloyd et al. 2002, Kocevski & Liang 2006) using the methods developed by Efron & Petrosian (1992, 1994). These works show existence of a relatively strong luminosity evolution L(z) = L0Z\u03b1 (\u03b1 = 1.4 \u00b1 0.5, 1.7 \u00b1 0.3) from which one can determine a GRB formation rate which also varies with redshifts and can be compared with other cosmological rates such as the star formation rate. \f\u2013 9 \u2013 4.2. Evolution with Measured Redshifts 4.2.1. Description of the Data We have compiled the most complete list of GRBs with known redshift. Since the launch of the Swift satellite, this list has become signi\ufb01cantly larger. We include only bursts with good redshift determination meaning that GRBs with only upper or lower limits on their redshift are not in our sample. On total, our sample contains 86 bursts, triggered by 4 di\ufb00erent instruments: BATSE on board CGRO (7 bursts), BeppoSAX (14), HETE2 (13), and Swift (52). For each burst we collected \ufb02uence and peak \ufb02ux in the energy bandpass of the triggering instrument, as well as the duration of the burst. When available we have also collected spectral information namely the parameters that de\ufb01ne the Band function; the energy peak (Eobs p ) as well as the low (\u03b1) and high (\u03b2) energy indexes of the \u03bdF\u03bd spectrum. When a good spectral analysis was not available, we took as default values the mean of the BATSE distributions based on large number of bursts: < \u03b1 >= \u22121.0, < \u03b2 >= \u22122.3 and < Eobs p >= 250 keV. For non-Swift bursts, all this information was mostly extracted from FB05 data set to which we added results from recent spectral analysis released in GCN. 
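The K-correction into the rest-frame [20, 2000] keV band and the average luminosity Liso = Eiso/T90 can be sketched as follows. This assumes the Band spectral form with the default BATSE parameters quoted above; it is only an illustration of the Bloom et al. (2001) style correction cited in the text, not the authors' pipeline, and the function names are ours.

```python
import numpy as np
from scipy.integrate import quad

def band(E, alpha=-1.0, beta=-2.3, Ep=250.0):
    """Band photon spectrum N(E) (arbitrary normalization); E, Ep in keV,
    with Ep the observed-frame peak of the nuF_nu spectrum."""
    E0 = Ep / (2.0 + alpha)
    Eb = (alpha - beta) * E0
    if E < Eb:
        return E ** alpha * np.exp(-E / E0)
    return Eb ** (alpha - beta) * np.exp(beta - alpha) * E ** beta

def k_correction(z, e1, e2, alpha=-1.0, beta=-2.3, Ep=250.0,
                 E1_rest=20.0, E2_rest=2000.0):
    """Ratio of the energy emitted in the rest-frame [E1_rest, E2_rest] band to the
    energy measured in the observed bandpass [e1, e2] keV (a sketch of the
    Bloom et al. 2001 k-factor); multiply the eq.-(3) estimate by this factor
    when the fluence is measured only in the trigger-instrument band."""
    f = lambda E: E * band(E, alpha, beta, Ep)
    num, _ = quad(f, E1_rest / (1.0 + z), E2_rest / (1.0 + z))
    den, _ = quad(f, e1, e2)
    return num / den

def L_iso(E_iso_erg, T90_s):
    """Average isotropic-equivalent luminosity, Liso = Eiso / T90."""
    return E_iso_erg / T90_s
```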
For Swift GRBs, redshift, duration, \ufb02uence and peak \ufb02ux were compiled from the Swift Information webpage (http://swift.gsfc.nasa.gov/docs/swift/archive) and spectral information have been retrieved from GCN releases and we have also looked at spectral analysis ourselves for some of them. We assumed the following cosmological model: \u2126M = 0.3, \u2126\u03bb = 0.7, H0 = 70 km s\u22121 Mpc\u22121 in order to determine the intrinsic properties (e.g. Eiso see eq. [3]). Eiso is here calculated for a rest-frame bandpass [20,2000] keV. Note that K-correction due to the shift of the photons into the instrument bandpass has been properly taken into account for the Eiso calculation (see Bloom et al. 2001). From this, we calculate the average isotropic-equivalent luminosity as: Liso = Eiso T90 when T90 is the duration of the burst that includes 90% of the total counts. Because di\ufb00erent instruments have been used to collect this information, the sample is very heterogeneous and su\ufb00ers from various selection and truncation e\ufb00ects that vary from burst to burst. The most simple of these truncation e\ufb00ects is due to the limiting sensitivity of the instruments. A GRB trigger will occur when the peak \ufb02ux of the burst exceeds the average background variation by a few sigmas (depending on the setting of the instrument). In an attempt to carefully take into account this e\ufb00ect into our study, we used the analysis carried out by Lamb et al. 2005 for pre-Swift instruments. In this analysis, they computed the sensitivity for each instrument depending on the spectral parameters of the bursts. Therefore, for each speci\ufb01c burst of our data set, from its spectral parameters it is possible to determine the limiting photon \ufb02ux of the instrument (for speci\ufb01c GRB with its \f\u2013 10 \u2013 Fig. 3.\u2014 Isotropic average luminosity versus redshift for all bursts in our sample (86). Different symbols represent bursts observed by di\ufb00erent instruments: BATSE(7), Beppo-SAX (14), HETE-2 (13), Swift (52). For all non-Swift burst, a vertical line is plotted representing the range of isotropic luminosity in which the burst would still have been observable by the instrument keeping all its others parameters \ufb01xed. Using the work of Lamb & al. 2005, the limiting luminosity is taken to be only dependent on the energy peak Ep of the bursts. For Swift bursts, a conservative threshold \ufb02ux of 0.8 ergs s\u22121 cm\u22122 has been chosen. This limit is shown as a dashed line. \f\u2013 11 \u2013 speci\ufb01c spectral parameters). From these, we can easily compute the limiting peak \ufb02ux fp,lim for our burst (assuming the Band function for our spectrum). Finally, we can determine the detection threshold of the observed energy \ufb02uence Fobs. This lower limit Fobs,lim is obtained via the simple relationship (Lee & Petrosian 1996): Fobs Fobs,lim = fp fp,lim Using the same reasoning we can obtain the limiting values for the intrinsic quantities meaning the intrinsic values that a given burst needs to have in order to be detected: Eiso Eiso,lim = Liso Liso,lim = fp fp,lim Those limiting average luminosities for each bursts of our sample are represented in Figure 3. This analysis was not carried out for BAT instrument on board Swift therefore we used a conservative threshold of 0.8 erg s\u22121 cm\u22122 for all of Swift bursts. 4.2.2. 
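The threshold scalings above translate directly into code; a minimal sketch (with our own function name) is:

```python
def limiting_values(f_p, f_p_lim, F_obs, L_iso, E_iso):
    """Scale the observed quantities by the ratio of the instrument's limiting
    peak flux to the burst's peak flux, following
    F_obs/F_obs,lim = E_iso/E_iso,lim = L_iso/L_iso,lim = f_p/f_p,lim
    (Lee & Petrosian 1996).  Returns (F_obs_lim, L_iso_lim, E_iso_lim)."""
    scale = f_p_lim / f_p
    return F_obs * scale, L_iso * scale, E_iso * scale

# Example: a burst detected at twice its limiting peak flux has limiting
# values equal to half of its observed ones.
print(limiting_values(f_p=2.0, f_p_lim=1.0, F_obs=1e-5, L_iso=1e51, E_iso=1e52))
```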
Analysis and results We now describe our determination of the luminosity and density rate evolution of the parent population of our GRB sample. Our analysis is based on the work done by Efron & Petrosian (1992, 1994); we refer the reader to these two papers for details and here simply describe the most important steps of the analysis and what it allows us to infer from our data sample. This method has been developed in order to take into account the effects of data truncation and selection bias on a heterogeneous sample from different instruments with different sensitivities, as described above and shown in Figure 3. The method corrects for this bias by applying proper rankings to different subsets of our sample. The first step is to compute the degree of correlation between the isotropic luminosity and redshift. For that we use a specialized version of Kendall's τ statistic. The parameter τ represents the degree of correlation found for the entire sample with proper accounting for the data truncation: τ = 0 means no correlation is found between the two parameters being inspected (luminosity and redshift in our case), and any other specific value τ0 implies the presence of a correlation with a significance of τ0 σ. With this statistical method in place, we can calculate the parametrization that best describes the luminosity evolution. To establish a functional form of the luminosity-redshift correlation, we assume a power law luminosity evolution L(z) = L0 Z^α. We then remove this dependency from the observed luminosity, L′ → L_observed/(1 + z)^α, and calculate Kendall's τ statistic as a function of α. Figure 4 shows the variation of τ with α. Once the parametric form of the luminosity evolution has been determined, nonparametric maximum likelihood techniques can be used to determine the cumulative distributions of luminosity and redshift (see Efron & Petrosian 1994), say Φ(L) and σ(z), the latter giving the relative number of bursts below a certain redshift z. From this last function, we can easily draw the comoving rate density ṅ(z), which is the number of GRBs per unit comoving volume and unit time: ṅ(Z) = [dσ(Z)/dZ] × Z / (dV/dZ), (6) where the factor Z takes the time dilatation into account. Note that this method does not provide any constraint on the normalization of the quantities mentioned above; the normalization is therefore set arbitrarily in all our figures representing these functions. We find 3.68σ evidence for luminosity evolution (see the τ value in Figure 4 at α = 0). From this figure we can also infer that α = 2.21, the value obtained when τ = 0, gives the best description of the luminosity evolution for the assumed form, with a one sigma range of [1.75, 2.74]. The constraint on the α parameter is of course not very tight, since the size of our sample is still limited. As current satellites accumulate more data, we will be able to increase our data set and further constrain this parameter in the future. Fig. 4.— Variation of the τ parameter with the power law index α of the luminosity evolution. τ = 0 means no correlation, which gives the best value of α = 2.21 for the assumed power law form, with a one sigma range of 1.75 to 2.74. Fig.
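A minimal sketch of the Efron-Petrosian τ test with the luminosity de-evolution L′ = L/(1+z)^α is given below. For simplicity it assumes a single effective peak-flux threshold for the whole sample, whereas the analysis described above uses burst-by-burst limits; the cosmological parameters are those of §4.2.1 and the function names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C_KM_S, H0, MPC_CM = 2.998e5, 70.0, 3.086e24

def d_L(z):
    """Luminosity distance [cm], flat OmegaM=0.3, OmegaL=0.7."""
    I, _ = quad(lambda zp: (0.3 * (1 + zp) ** 3 + 0.7) ** -0.5, 0.0, z)
    return (1.0 + z) * (C_KM_S / H0) * I * MPC_CM

def z_max(Lp, f_lim, alpha):
    """Largest z at which a burst with de-evolved luminosity Lp (true luminosity
    Lp*(1+z)^alpha) stays above the single assumed peak-flux threshold f_lim."""
    g = lambda zz: Lp * (1.0 + zz) ** alpha / (4.0 * np.pi * d_L(zz) ** 2) - f_lim
    if g(20.0) > 0.0:          # visible over the whole range considered
        return 20.0
    return brentq(g, 1e-4, 20.0)

def tau_statistic(z, L, f_lim, alpha=0.0):
    """Efron-Petrosian tau for the truncated (z, L) sample after removing an
    assumed luminosity evolution L' = L/(1+z)^alpha.  tau ~ 0 means no residual
    correlation; the best-fit alpha is where tau crosses zero (cf. Figure 4)."""
    z, L = np.asarray(z, float), np.asarray(L, float)
    Lp = L / (1.0 + z) ** alpha
    num, var = 0.0, 0.0
    for i in range(len(z)):
        zi = max(z_max(Lp[i], f_lim, alpha), z[i])   # keep i in its own set
        J = np.where((Lp >= Lp[i]) & (z <= zi))[0]   # associated (comparable) set
        n = len(J)
        if n <= 1:
            continue
        rank = np.sum(z[J] < z[i]) + 1               # rank of z_i within the set
        num += rank - 0.5 * (n + 1)
        var += (n * n - 1.0) / 12.0
    return num / np.sqrt(var)

# Scan alpha and look for the zero crossing, as in Figure 4 (needs real data):
# taus = [tau_statistic(z_sample, L_sample, f_lim, a) for a in np.linspace(0, 4, 21)]
```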
5.\u2014 The cumulative luminosity distribution \u03a6(L) (left panel) and the cumulative redshift distribution \u03c3(z) (right panel). Fit to the cumulative density rate is represented with a point line. The cumulative functions are both shown in Figure 5 and the estimated comoving density rate is shown by the jagged curve in Figure 6. Most of the high frequency variation is not real and is due to taking the derivative of a noisy curve (\u03c3(Z)). The dashed line was obtained by \ufb01tting the cumulative density distribution by the following parametrized function: \u03c3(Z) \u221d (Z/Z0)p1 (1 + Z/Z0)p1\u2212p2 (7) with the following values for the parameters: Z0 = 1.8, p1 = 7.1, and p2 = 0.95 These results are still very preliminary as more data become available, accuracy of the density function will increase constraining further the evolution rate of long bursts. By tackling this problem for the \ufb01rst time we hope to set the ground for further analysis in the future. The behavior of the comoving density rate for our sample of long bursts is quite peculiar with a signi\ufb01cant rate increase happening at low redshifts. This e\ufb00ect might be due to some selection e\ufb00ects that we have not included in our analysis. For instance, it might be a consequence of the fact that instruments detect more easily low-redshifts host galaxies and therefore create a bias toward low redshifts GRBs. Another interesting feature is the steady increase we obtain in the GRB rate at high redshifts (z > 3). Figure 7 compares the estimated comoving rate evolution with di\ufb00erent models of Star Formation Rates (SFRs I, II, III). For comparison with Star Formation history, we used three di\ufb00erent models taken from the literature: \f\u2013 14 \u2013 Fig. 6.\u2014 The comoving rate density \u02d9 n(z). The dashed line was obtained by \ufb01tting the cumulative density distribution by the parametrized smooth function of equation 7. We also show comparison of the density rate (from Figure 6) with three di\ufb00erent SFR scenarios taken from literature. No SFR scenario seems to match the density rate deduced from our analysis. \f\u2013 15 \u2013 Fig. 7.\u2014 Comparison of the comoving density rate evolution of the total sample with that of several sub-sample where we impose three di\ufb00erent luminosity thresholds: Liso > 1049, > 1050, and > 8 \u00d7 1050 ergs.s\u22121. We also looked at a low luminosity population where we imposed of maximal luminosity of 8 \u00d7 1050 ergs.s\u22121. Each sub-sample is subject to the same analysis and has provided di\ufb00erent luminosity evolution as evident from the di\ufb00erent values of \u03b1. As expected, the rate at low redshift decreases with increasing values of threshold. \f\u2013 16 \u2013 Steidel et al. 1999: \u02d9 n = 0.16h70 e3.4z e3.4z + 22M\u2299yr\u22121Mpc\u22123 (8) Porciani & Madau 2000: \u02d9 n = 0.22h70 e3.05z\u22120.4 e2.93z + 15M\u2299yr\u22121Mpc\u22123 (9) Cole et al. 2001: \u02d9 n = (a + bz)h70 1 + (z/c)d M\u2299yr\u22121Mpc\u22123 (10) with (a, b, c, d) = (0.0166, 0.1848, 1.9474, 2.6316) As evident, no SFR scenario seems to match the density rate evolution deduced from our analysis, specially at low redshift. How much of this di\ufb00erence is real and how much is due to other selection e\ufb00ects that we have not quanti\ufb01ed is unclear. Because of increasing di\ufb03culty of identifying the host galaxy with increasing redshift one would expect some bias against detection of high redshift bursts. 
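Equations (6) and (7) can be combined to produce the smooth rate curve of Figure 6. The sketch below uses the fitted parameters Z0 = 1.8, p1 = 7.1, p2 = 0.95 quoted above and an assumed flat Ω_M = 0.3, Ω_Λ = 0.7 cosmology; as noted in the text, the overall normalization is arbitrary.

```python
import numpy as np
from scipy.integrate import quad

E_z = lambda z: np.sqrt(0.3 * (1 + z) ** 3 + 0.7)

def d_c(z):
    """Comoving distance in units of c/H0."""
    I, _ = quad(lambda zp: 1.0 / E_z(zp), 0.0, z)
    return I

def dV_dZ(Z):
    """Comoving volume element per unit Z = 1+z, in (c/H0)^3 units."""
    z = Z - 1.0
    return 4.0 * np.pi * d_c(z) ** 2 / E_z(z)

def sigma_fit(Z, Z0=1.8, p1=7.1, p2=0.95):
    """Parametrized cumulative redshift distribution, eq. (7) (arbitrary norm)."""
    return (Z / Z0) ** p1 / (1.0 + Z / Z0) ** (p1 - p2)

def n_dot(Z, dZ=1e-3):
    """Comoving rate density, eq. (6): (dsigma/dZ) * Z / (dV/dZ)."""
    dsig = (sigma_fit(Z + dZ) - sigma_fit(Z - dZ)) / (2.0 * dZ)
    return dsig * Z / dV_dZ(Z)

# Example: evaluate the (arbitrarily normalized) rate curve over 0 < z < 6:
rate = [n_dot(1.0 + z) for z in np.linspace(0.1, 6.0, 30)]
```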
But the largest densities for SFR seems to be in the intermediate redshift range. An other possibility is that there may exist subclasses of GRBs such as low or high luminosity classes. In order to test this eventuality we have de\ufb01ned several subsets of our total GRB sample carried out the above analysis for each subsamples, determining a new luminosity evolution (a new \u03b1) and then proceeding to obtain \u02d9 n(z) from the smooth function \ufb01tting \u03c3(z). We impose di\ufb00erent luminosity thresholds for the di\ufb00erent subsamples. Three di\ufb00erent threshold have been chosen: Liso > 1049 ergs.s\u22121, Liso > 1050 ergs.s\u22121, and Liso > 8\u00d71050 ergs.s\u22121. We also looked at a low luminosity population where we imposed of maximal luminosity of 1050 ergs.s\u22121. Figures 6 and 7 compare the new rates with that of the total sample. As expected high luminosity samples contribute less to the rate at low redshifts. But the general trend and the di\ufb00erences with SFR are essentially still present. However the method and framework we presented would be a very valuable tool when enough data has been accumulated. 5. Summary and" + }, + { + "url": "http://arxiv.org/abs/0810.1753v1", + "title": "Relative Spectra and Distributions of Fluences of 3He and 4He in Solar Energetic Particles", + "abstract": "Solar Energetic Particles (SEPs) show a rich variety of spectra and relative\nabundances of many ionic species and their isotopes. A long standing puzzle has\nbeen the extreme enrichments of 3He ions. The most extreme enrichments are\nobserved in low fluence, the so-called impulsive, events which are believed to\nbe produced at the flare site in the solar corona with little scattering and\nacceleration during transport to the Earth. In two earlier papers (Liu et al.\n2004 and 2006) we showed how such extreme enrichments can result in the model\ndeveloped by Petrosian and Liu (2004), where ions are accelerated\nstochastically by plasma waves or turbulence. In this paper we address the\nrelative distributions of the fluences of 3He and 4He ions presented by Ho et\nal. (2005) which show that while the distribution of 4He fluence like many\nother extensive characteristics of solar flare, is fairly broad, the 3He\nfluence is limited to a narrow range. Moreover, the ratio of the fluences shows\na strong correlation with the 4He fluence. One of the predictions of our model\nwas presence of steep variation of the fluence ratio with the level of\nturbulence or the rate of acceleration. We show here that this feature of the\nmodel can reproduce the observed distribution of the fluences with very few\nfree parameters. The primary reason for the success of the model in both fronts\nis because fully ionized 3He ion, with its unique charge to mass ratio, can\nresonantly interact with more plasma modes and accelerate more readily than\n4He. Essentially in most flares, all background 3He ions are accelerated to few\nMeV/nucleon range, while this happens for 4He ions only in very strong events.\nA much smaller fraction of 4He ions reach such energies in weaker events.", + "authors": "Vahe Petrosian, Yan Wei Jiang, Siming Liu, George C. Ho, Glenn, M. Mason", + "published": "2008-10-09", + "updated": "2008-10-09", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "INTRODUCTION Solar \ufb02ares are excellent particle accelerators. 
Some of these particles on open \ufb01eld lines are observed as solar energetic particles (SEPs) at one AU or produce type III and other radio radiation. Those on closed \ufb01eld lines can be observed by the radiation they produce as they interact with solar plasma and \ufb01elds. Electrons produce nonthermal bremsstrahlung and synchrotron photons in the hard X-ray and microwave range, while protons (and other ions) excite nuclear lines in the 1 to 7 MeV range or may produce higher energy gamma-rays via \u03c00 production and its decay. It appears that stochastic acceleration (SA) of particles by plasma waves or turbulence plays an important role in production of high energy particles and consequent plasma heating in solar \ufb02ares (e.g., Ramaty 1979; M\u00a8 obius et al. 1980, 1982; Hamilton & Petrosian 1992; Miller et al. 1997; Petrosian & Liu 2004, hereafter PL04). This theory was applied to the acceleration of nonthermal electrons (Miller & Ramaty 1987; Hamilton & Petrosian 1992). It appears that it can produce many of the observed radiative signatures such as broad band spectral features (Park, Petrosian & Schwartz 1997; PL04) and the commonly observed hard X-ray emission from the tops of \ufb02aring loops (Masuda et al. 1994; Petrosian & Donaghy 1999). It is also commonly believed that the observed relative abundances of ions in SEPs favor a SA model (e.g. Mason et al. 1986 and Mazur et al. 1992). More recent observations have con\ufb01rmed this picture (see Mason et al. 2000, 2002, Reames et al. 1994 and 1997, and Miller 2003). One of the most vexing problem of SEPs has been the enhancement of 3He in the so-called impulsive or 3He -rich events, which sometimes can be 3 \u22124 orders of magnitude above the photospheric value1. There have been many attempts to explain this enhancement. Most of the proposed models, except the Ramaty and Kozlovsky (1974) model based on spalation (which has many problems), rely on resonant wave-particle interactions and the unique charge-to-mass ratio of 3He (see e.g. 1In addition there is charge-to-mass ratio dependent enhancement relative to the photospheric values of heavy ions in SEPs , and in few \ufb02ares gamma-ray line emissions also points to anomalous abundance pattern of the accelerated ions (Share & Murphy 1998; Hua, Ramaty & Lingenfelter 1989). We will not be dealing with these anomalies in this paper. \f\u2013 3 \u2013 Ibragimov & Kocharov 1977; Fisk 1978; Temerin & Roth 1992; Miller & Vi\u02dc nas 1993; Zhang 1995; Paesold, Kallenbach & Benz 2003). Most of these model assume presence of some particular kind of waves which preferentially heats 3He ions to a higher temperature than 4He ions, which then become seeds for subsequent acceleration by some (usually) unspeci\ufb01ed mechanism (for more detailed discussion see Petrosian 2008). None of these earlier works did a compare model spectra with observations. Fig. 1.\u2014 Left: Variation of the ratio of 3He to 4He \ufb02uences with the \ufb02uence of 4He showing a continuum of enrichments and a strong anti correlation. Middle: 3He vs 4He \ufb02uences showing a much larger range for the latter while the former seems to be limited to a small range. Note that the 3He \ufb02uences do not concentrate at the lower end which would be the case if observational threshold was a\ufb00ecting their distribution. Right: The distribution of \ufb02uences of 3He and 4He . 
Note that the high end of the 4He distribution may be truncated because of the threshold of the \ufb02uence ratio (missing point in the lower left triangle of the middle panel) [From Ho05]. The \ufb02uences are in units of particles/(cm2 sr MeV/nucleon). In two more recent papers Liu, Petrosian & Mason 2004 and 2006 (LPM04, LPM06) have demonstrated that a SA model by parallel propagating waves can explain both the extreme enhancement of 3He and can reproduce the observed 3He and 4He spectra. In LPM06 it was shown that the relative \ufb02uences of these ions, and to a lesser extent their spectral indexes, depend on several model parameters so that in a large sample of events one would expect some dispersion in the distributions of \ufb02uences and spectra. Ho et al. (2005; Ho05) analyzed a large sample of events and provide distributions of 3He and 4He \ufb02uences and the correlations between them. Our aim here is to explore the possibility of explaining these observations by the above mentioned dependence of the \ufb02uences on the model parameters. In particular we would like to explain the observations reproduced in Figure 1 which shows a strong anti correlation of 3He /4He ratio with 4He \ufb02uence (left panel), but shows essentially no correlation between the two \ufb02uences (middle panel). More strikingly, the 3He \ufb02uence distribution appears to be relatively narrow and follows a log-normal distribution, while 4He distribution is much broader and may have a power law distribution in the middle of the range, where the observational selection e\ufb00ects are unimportant. Often the SEPs are \f\u2013 4 \u2013 divided into two classes; impulsive-high enrichment and gradual-normal abundance classes. However, as evident from the left panel of the above \ufb01gure there is a continuum of enrichment extending over many orders of magnitude.2 In the next section we describe some of the model characteristics that can explain these observations and in \u00a73 we compare the model predictions with the observations, speci\ufb01cally the distributions of the \ufb02uences. A brief summary and conclusion is given in \u00a74. 2. MODEL CHARACTERISTICS The model used in LPM04 and LPM06 which successfully described the enrichment and spectra in several \ufb02ares has several free parameters. As usual we have the plasma parameters density n, temperature T and magnetic \ufb01eld B0. It turns out that the \ufb01nal results are insensitive to the temperature as long as it is higher than 2 \u00d7 106K (see Fig. 4 below), which is the case for \ufb02aring coronal loops. It also turns out that only a combination of density and magnetic \ufb01eld (\u221an/B0) comes into play. We express this as the ratio of plasma to gyro-frequency of electrons, \u03b1 = \u03c9pe/\u2126e which is related to the Alfv\u00b4 en velocity in unit of speed of light; \u03b2A = \u03b41/2/\u03b1, where \u03b4 = me/mp is the ratio of the electron to proton masses. So in reality we have only one e\ufb00ective free plasma parameter \u03b1 or \u03b2A. On the other hand, several parameters are required to describe the spectrum of the turbulence. 
Following the above papers we assume broken power laws for the two relevant modes, the proton cyclotron (PC) and He cyclotron (HeC), with an inertial range kmin < k < kmax, and similar power law indexes q and qh in and beyond the inertial range, respectively.3 The only di\ufb00erence between the two branches is that the wave numbers kmax and kmin for the PC mode are two times higher than those for the HeC mode. Finally there is the most important parameter related to the total energy density of turbulence, Etot, which determines both the rate of acceleration and, when integrated over the volume of the source region, determines the intensity or the strength of the event. This parameter is the characteristic time scale \u03c4p or its inverse the rate de\ufb01ned as (see, e.g. Pryadko & Petrosian 1997) \u03c4 \u22121 p = \u03c0 2 \u2126e \u0014 4E0 B2 0/8\u03c0 \u0015 with E0 = (q \u22121)Etot (kminc/\u2126e)1\u2212q , (1) 2The 4He distribution shows a weak sign of bi modality but this is not statistically signi\ufb01cant. In this paper we will ignore this feature. 3In LPM06 we also have an index ql describing the power law below the inertial range which is of minor consequence. For all practical purposes we can assume a sharp cuto\ufb00below kmin which means ql \u2192\u221e. \f\u2013 5 \u2013 for each mode. The factor of 4 arises from having two branches (PC and HeC) and two propagation directions of the waves (see LPM06 for details). As shown in LPM04 and LPM06 papers the main di\ufb00erence between the acceleration process of 3He and 4He is in the di\ufb00erence between their acceleration rate or timescales (\u03c4a). The other relevant timescales, namely the loss (\u03c4loss) and escape (Tesc) times are essentially identical for the two ions (e.g. see left panel of Fig. 7 of LPM06). The acceleration timescales are di\ufb00erent mainly at low energies (typically below one MeV/nucleon), where the acceleration time of 4He is a longer (by one to two orders of magnitude). As a result at these low energies the 4He acceleration time may be comparable or longer than the loss time which makes it di\ufb03cult to accelerate 4He ions. Most of 4He ions are piled up below some energy (roughly where \u03c4a = \u03c4loss) and only a few of them accelerate into the observable range (e.g. see right panel of Fig. 7 of LPM06). However, because the acceleration times scale as \u03c4p while the loss time does not, for higher level of turbulence (larger E0), the acceleration time may fall below the loss time so that 4He ions can be then accelerated more readily (see Fig. 3 below). On the other hand, essentially independent of values of any of the above parameters, the 3He acceleration time at all energies, in particular at low energies, is always far below its loss time so that in all cases (except for very high densities or very low values of \u03c4 \u22121 p ) 3He ions are accelerated easily to high energies. The relative values of the escape and acceleration times (for both ions) determine their high energy spectral cuto\ufb00s. Figure 2 shows variation with energy of acceleration times of 3He (thick lines) and 4He (thin lines) and their dependence on parameters kmin, \u03b1 and q. The remaining parameters qh and kmax only a\ufb00ect the slope of the low energy end of 4He which does not a\ufb00ect the spectra noticeably. 
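For reference, the plasma parameters α = ωpe/Ωe and βA = δ^{1/2}/α, and the acceleration rate of equation (1), can be sketched as below (CGS units). This is an illustration only: the wave-spectrum normalization E0 is treated here as a given input rather than derived from Etot (its relation to Etot is given symbolically in eq. 1), and the function names are ours.

```python
import numpy as np

# CGS constants
e_cgs, m_e, m_p, c = 4.803e-10, 9.109e-28, 1.673e-24, 2.998e10

def plasma_parameters(n_cm3, B0_gauss):
    """alpha = omega_pe / Omega_e and beta_A = sqrt(m_e/m_p) / alpha
    (Alfven speed in units of c)."""
    omega_pe = np.sqrt(4.0 * np.pi * n_cm3 * e_cgs ** 2 / m_e)
    Omega_e = e_cgs * B0_gauss / (m_e * c)
    alpha = omega_pe / Omega_e
    beta_A = np.sqrt(m_e / m_p) / alpha
    return alpha, beta_A

def tau_p_inverse(B0_gauss, E0_erg_cm3):
    """Acceleration rate of eq. (1): (pi/2) * Omega_e * [4*E0 / (B0^2/8pi)],
    with E0 the wave-spectrum normalization (an energy density here)."""
    Omega_e = e_cgs * B0_gauss / (m_e * c)
    return 0.5 * np.pi * Omega_e * 4.0 * E0_erg_cm3 / (B0_gauss ** 2 / (8.0 * np.pi))

# Example with hypothetical coronal values (illustration only):
print(plasma_parameters(n_cm3=1e10, B0_gauss=100.0))
```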
It is evident that the general behavior of the acceleration time scales described above (consisting of a low and a high energy monotonically increasing branches with a declining transition in between) is present in all models. These features change only quantitatively and often by small amounts. As expected lowering kmin decreases the acceleration times at the high energy branch (left panel). This is because the lower kmin waves interact resonantly with higher energy ions. On the other hand, a lower value of \u03b1 (or larger Alfv\u00b4 en velocity or magnetization) decreases the times at the low energy branch (middle panel). Steeper spectra in the inertial range, produce a higher rate of acceleration (larger E0; see eq. [1]) and decrease the overall acceleration time scales (right panel) Note that in this and subsequent \ufb01gures, kmax is in units of \u2126p/c so that kmax = 2\u03b1\u03b4\u22121/2 in the labels means an actual kmax = 2\u2126p/vA = p 2\u03b2p/rg,p, where \u03b2p = 2(vth,p/vA)2 is the plasma beta, and vth,p = kBT/mp and rg,p = vth,p/\u2126p are the proton thermal velocity and gyro radius. The scale of kmax is clearly beyond the MHD regime (where the wave frequency \u03c9 = vAk \u226a\u2126p), but is below the proton gyro radius for the chosen parameters \f\u2013 6 \u2013 Fig. 2.\u2014 Dependence of the acceleration time of 4He (thin, blue) and 3He (thick, red) on kmin (left), \u03b1 (middle) and q (right). The lines are labeled with the corresponding numbers of each parameter. In each case the solid lines are for the \ufb01ducial model with \u03b1 = 0.5, kmax = 2\u03b1\u03b4\u22121/2 = 10kmin, q = 2 and qh = 4. ( p 2\u03b2p \u223c0.03). Using these acceleration rates we calculate spectra of the two ions (as in LPM04 and LPM06) for a range of parameters. Figure 3 shows three sets of spectra where we vary kmin, \u03b1 and \u03c4 \u22121 p . In each panel the sold lines are for the \ufb01ducial model (\u03b1 = 0.5, kmax = 2\u03b1\u03b4\u22121/2 = 10kmin, q = 2 and qh = 4) chosen to \ufb01t the spectra observed by ACE/ULEIS for 30 Sep. 1999 event. The spectral variations here re\ufb02ect the above described variations of the acceleration timescales. Lower kmin (or larger inertial range) yields a larger tail for both ions (left panel). Variation of \u03b1 has a similar and smaller e\ufb00ect on 3He spectra but it a\ufb00ects the 4He spectra dramatically; for \u03b1 \u223c1 essentially there is no 4He acceleration but the \u03b1 \u223c1/4 model accelerates a large number of 4He ions beyond 0.1 MeV/nucleon and into the observable range (middle panel). This e\ufb00ect is even more pronounced for increasing values of \u03c4 \u22121 p , where a factor of few increase in the general rate of acceleration (or the level of turbulence) causes a large increase of the \ufb02uence of 4He (right panel), because, as stated above, its acceleration time becomes shorter than its loss time even al low energies. All these spectra show the same general characteristic features. While most 3He ions are accelerated to high energies for essentially all model parameters appropriate for solar coronal conditions and reasonable level of turbulence, 4He ions show a characteristic lower energy bump with a nonthermal hard tail. In general, the lower energy bump is below the observation range except for low \u03b1 and high values of \u03c4 \u22121 p . 
Since a high level of turbulence is expected for brighter and stronger events, this means that we get smaller 3He /4He \ufb02ux or \ufb02uence ratios for brighter events. Note that the spectra in such cases may not agree with observations but this is not troublesome, because as is well established, the stronger events (the so-called gradual events) are associated with CMEs and shocks which most likely will modify the above spectra which are those of ions escaping the corona. Thus, the higher energy bumps \f\u2013 7 \u2013 in the spectra shown here should be considered as seeds for such further acceleration during the transport from the lower corona to the Earth, which becomes more likely, and is expected to change the above spectra more signi\ufb01cantly, for more energetic events. Thus if we give up the idea that there are two distinct classes of SEPs (impulsive and highly enriched and gradual and normal abundance) but that there is a continuum of events, which observations in Figure 1 show, then the above scenario implies that the main acceleration occurs in the solar corona. Subsequent interactions in CME shocks mainly modify the seed population escaping the turbulent coronal site. Fig. 3.\u2014 Dependence of the accelerated spectra of 4He and 3He on kmin (left), \u03b1 (middle) and \u03c4 \u22121 p (right; \u03c4 \u22121 p in units of \u03c4 \u22121,0 p = 0.0055 s\u22121 ). The lines are labeled with the corresponding numbers of each parameter. In each case the solid lines are for the \ufb01ducial model with \u03b1 = 0.5, kmax = 2\u03b1\u03b4\u22121/2 = 10kmin, q = 2 and qh = 4 that is chosen to \ufb01t the data point shown for the 30 Sep. 1999 event observed by ACE. Note that for a better indication what energy particles dominate the spectra in a log-log plot we plot particle energy time \ufb02uence. From the spectra we can calculate the ratio of 3He to 4He \ufb02uences for di\ufb00erent models which could be then compared with the observed ratios shown in Figure 1. Inspection of observed spectra indicate that a representative ion energy would be 1 MeV/nucleon. In Figure 4 we show the variation of this ratio with temperature (left panel) and \u03c4 \u22121 p (middle and right panels) for several values of other important parameters. As evident this ratio is most sensitive to the value of \u03c4 \u22121 p which represents the general rate of acceleration or the level of turbulence. The ratio can change from the highest observed value (\u223c30) to near photospheric value (\u223c2 \u00d7 10\u22124) for only a factor of 30 change in \u03c4 \u22121 p . It is natural to expect higher level of turbulence generation (i.e. a larger value of \u03c4 \u22121 p ) in stronger events. Therefore, this predicted correlation is in agreement with the general trend of observation shown in Figure 1 (left panel), if the strength of an event is measured by the observed \ufb02uence of 4He ions and most other ions like carbon, nitrogen and oxygen.4 This seems reasonable 4It should be noted that while the observations are for \ufb02uences integrated from 0.2 to 2.0 MeV/nucleon \f\u2013 8 \u2013 and calls for more quantitative comparison with observations and model prediction. In the next section we present one such comparison. Fig. 4.\u2014 Variation of the accelerated 3He to 4He \ufb02uence ratio (at E = 1 MeV/nucleon) with background plasma temperature T (left) and \u03c4 \u22121 p (middle and right) for several values of other speci\ufb01ed model parameters. 
The lines are labeled with the corresponding numbers of each parameter. In each case the open circle stands for the model that \ufb01ts spectra of the 30 Sep. 1999 event, and the solid lines are for the \ufb01ducial model with \u03b1 = 0.5, kmax = 2\u03b1\u03b4\u22121/2 = 10kmin, q = 2 and qh = 4. Note the weak dependence on the temperature for T > 2 \u00d7 106 K and a strong dependence on \u03c4 \u22121 p for all model parameters with saturates at chromospheric values of the ratio. The horizontal dot-dash line shows the highest ratio observed so far (see Fig. 1, left). 3. Distributions of Fluences We have seen that the general observed behavior of the the ratio of the \ufb02uences de\ufb01ned as R = F3/F4 is similar to the model predictions. In this section we try to put this result on a \ufb01rmer quantitative footing by considering the observed distributions of the \ufb02uences of both ions as shown in Figure 1 (right panel). Except for the minor truncation at high values of F4, the \ufb02uence of 4He , the observed distribution of F3, the \ufb02uences of 3He , seem to be almost bias free and not a\ufb00ected signi\ufb01cantly by the observational selection e\ufb00ects. For example, there are well de\ufb01ned and steep decline both at the high and low \ufb02uences away from the peak 3He value of F0 \u223c103.7 particles cm2sr(MeV/nucleon). This is not what one would expect if the data su\ufb00ered truncation due to a low observation threshold. In such a case one would observe a distribution increasing up to the threshold followed by a rapid cuto\ufb00 below it. Our model results described above also seem to predict the observed behavior. As stressed in previous section, the 3He spectra and \ufb02uxes appear to be fairly independent of model parameters because essentially under all conditions most 3He ions are accelerated and our theoretical ratios are calculated at 1 MeV/ nucleon which is near the geometric or algebraic mean of the range. \f\u2013 9 \u2013 form a characteristic concave spectrum. Thus we believe that it is safe to assume that the observed 3He distribution is a true representations of the intrinsic distribution (as produced on the Sun). This distribution can be \ufb01tted very nicely with a log-normal expression.5 If we de\ufb01ne the logs of the \ufb02uences and their ratio as LF3 \u2261ln(F3/F0), LF4 \u2261ln(F4/F0) LR \u2261lnR, (2) then from \ufb01tting the observed distribution of 3He by a log-normal form we get: \u03c83(LF3) = \u03c60 exp \u0012LF3 \u03c33 \u00132 with \u03c33 \u223c0.22, (3) which is shown on the right panel of Figure 5. Using this distribution we now derive the distribution of 4He \ufb02uences, \u03c84(LF4). For this we use the model predicted relationship between the two \ufb02uences as shown in Figure 4 above. We will use the two panels of this \ufb01gure showing the dependence of the log of the \ufb02uence ratio LR on \u03c4 \u22121 p . It turns out that most of these curves can be \ufb01tted by a simple function: ln(R/R0) = LR \u2212lnR0 = A ln(\u03c4 \u22121 p /\u03c4 \u22121 p0 ). (4) The left panel of Figure 5 shows \ufb01ts to the curves in the right panel of Figure 4 with the indicated values of the the \ufb01tting parameters A, R0 and \u03c4 \u22121 p0 (which is not the same as the \u03c4 \u22121 p,0 = 0.0055s\u22121 in Fig. 3). We shall use this relation to transfer the 3He \ufb02uences and distributions to those of 4He . 
For a given value of τ_p^{-1} the number of events with 4He log-fluences between LF4 and LF4 + d(LF4) (i.e. ψ4(LF4) d(LF4)) is equal to ψ3(LF3) d(LF3), the number of events with 3He log-fluence between LF3 = LF4 + LR(τ_p^{-1}) and LF3 + d(LF3), where d(LF3) = d(LF4) and LR(τ_p^{-1}) = ln R0 + A / ln(τ_p^{-1}/τ_p0^{-1}). Thus we have ψ4(LF4) = ψ3(LF4 + LR[τ_p^{-1}]) = φ0 exp{ −[(LF4 + LR(τ_p^{-1}))/σ3]^2 }. (5) However, we expect not a single value for τ_p^{-1}, which as stated above is a proxy for the strength of the event, but a broad distribution of events with different strengths, say f(τ_p^{-1}). Footnote 5: The truncation shown by the shaded area in the middle panel of Figure 1 introduces a slight bias against detection of low fluences. We estimate that, because there are fewer events at the high 4He fluence end, this means a 10 to 20% underestimation of the distribution of the three lowest values of the 3He histogram (right panel, Fig. 1). We will ignore this small correction, whose main effect is to increase the value of σ3 by a small amount. Fig. 5.— Left: A simple analytic fit (curves) to the model relations (points) between the fluence ratio and the acceleration rate or event strength represented by τ_p^{-1}, for the three different values of kmin of the right panel of Figure 4, with the indicated fitting parameters. Right: The fitted log-normal distribution of the 3He fluences and the predicted 4He distributions of three models compared with observations. The solid line, which gives the best fit, is for n = 2, kmin = 0.1 kmax; the dashed line is for n = 2, kmin = 0.2 kmax; and the dash-dot line is for n = 1.5, kmin = 0.2 kmax. Since, as argued above, the 3He fluence distribution ψ3(LF3) is independent of τ_p^{-1}, for a population of events we have ψ4(LF4) = ∫_0^∞ φ0 exp{ −[(LF4 + LR(τ_p^{-1}))/σ3]^2 } f(τ_p^{-1}) dτ_p^{-1}. (6) Every term in the above equations is determined by observations and our models except the distribution f(τ_p^{-1}), which is a reflection of the distribution of the level of turbulence and, when multiplied by the volume of the turbulent acceleration region (which does not affect the 3He/4He ratio), is related to the overall strength of the event. Observations of solar flares show that most extensive characteristics which are a good measure of the flare strength or magnitude, such as X-ray, optical or radio fluxes, appear to obey a steep power law distribution, usually expressed as a cumulative distribution Φ(>Fi) ∝ Fi^{-n} (or differential distribution φ(Fi) ∝ Fi^{-n-1}) with typically n ∼ 1.5 (see, e.g. Dennis 1985 and references therein). Such a distribution seems to roughly agree with the prediction of the so-called avalanche model proposed by Lu & Hamilton (1991). Now assuming that τ_p^{-1} also obeys such a power law distribution (i.e. f(τ_p^{-1}) ∝ (τ_p^{-1})^{-(n+1)}), we can write the distribution of 4He as ψ4(LF4) = ∫_0^∞ φ0 exp{ −[(LF4 + ln R0 + A/x)/σ3]^2 } e^{-nx} dx, with x ≡ ln(τ_p^{-1}/τ_p0^{-1}). (7) Using the above relations we have calculated the 4He fluence distribution.
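The model curves of Figure 5 (right) follow from equations (3)-(7). A minimal numerical sketch is given below; note that the minus sign needed for a normalizable log-normal form is written explicitly, and the parameter values in the example call are placeholders rather than the fitted ones (the preferred model in the text uses n = 2).

```python
import numpy as np
from scipy.integrate import quad

SIGMA3 = 0.22                      # width of the 3He log-fluence distribution (eq. 3)

def psi3(LF, phi0=1.0, sigma=SIGMA3):
    """Log-normal 3He log-fluence distribution, eq. (3)."""
    return phi0 * np.exp(-(LF / sigma) ** 2)

def LR(x, lnR0, A):
    """Fluence-ratio relation used in eqs. (4)-(5): LR = ln(R0) + A / x,
    with x = ln(tau_p^-1 / tau_p0^-1)."""
    return lnR0 + A / x

def psi4(LF4, lnR0, A, n, phi0=1.0, sigma=SIGMA3):
    """Predicted 4He log-fluence distribution, eq. (7): the 3He distribution
    shifted by LR(x) and averaged over a power-law distribution of event
    strengths, f(tau_p^-1) ~ (tau_p^-1)^-(n+1), i.e. weight e^(-n*x) dx."""
    integrand = lambda x: psi3(LF4 + LR(x, lnR0, A), phi0, sigma) * np.exp(-n * x)
    val, _ = quad(integrand, 1e-6, np.inf)
    return val

# Illustrative evaluation on a grid of LF4 = ln(F4/F0) with placeholder parameters:
grid = np.linspace(-8.0, 2.0, 11)
curve = [psi4(v, lnR0=-8.5, A=2.0, n=2.0) for v in grid]
```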
The results for three models are compared with the observations on the right panel of Figure 5. Given the other model parameters (kmin, \u03b1 etc.) we have only one free parameter namely the index n for \f\u2013 11 \u2013 this \ufb01t. The solid line obtained for the top curve of the left panel (kmin = 0.2kmax, \u03b1 = 0.5), and for n = 2 provides a good \ufb01t to the observed distribution of 4He \ufb02uences. In order to demonstrate the sensitivity of the results to the parameters we also show two other model predictions based on slightly di\ufb00erent parameter values. These results provide additional quantitative evidence (beside those given in LPM04 and LPM06) on the validity of the SA of SEPs by turbulence, and indicate that with this kind of analysis one can begin to constrain model parameters. 4. SUMMARY AND" + } + ], + "Seungryong Kim": [ + { + "url": "http://arxiv.org/abs/1904.02969v1", + "title": "Semantic Attribute Matching Networks", + "abstract": "We present semantic attribute matching networks (SAM-Net) for jointly\nestablishing correspondences and transferring attributes across semantically\nsimilar images, which intelligently weaves the advantages of the two tasks\nwhile overcoming their limitations. SAM-Net accomplishes this through an\niterative process of establishing reliable correspondences by reducing the\nattribute discrepancy between the images and synthesizing attribute transferred\nimages using the learned correspondences. To learn the networks using weak\nsupervisions in the form of image pairs, we present a semantic attribute\nmatching loss based on the matching similarity between an attribute transferred\nsource feature and a warped target feature. With SAM-Net, the state-of-the-art\nperformance is attained on several benchmarks for semantic matching and\nattribute transfer.", + "authors": "Seungryong Kim, Dongbo Min, Somi Jeong, Sunok Kim, Sangryul Jeon, Kwanghoon Sohn", + "published": "2019-04-05", + "updated": "2019-04-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Establishing correspondences and transferring attributes across semantically similar images can facilitate a variety of computer vision applications [35, 34, 25]. In these tasks, the images resemble each other in contents but differ in visual attributes, such as color, texture, and style, e.g., the images with different faces as exempli\ufb01ed in Fig. 1. Numerous techniques have been proposed for the semantic correspondence [15, 24, 42, 19, 43, 23] and attribute transfer [11, 6, 28, 21, 38, 16, 20, 16, 34, 12], but these two tasks have been studied independently although they can be mutually complementary. To establish reliable semantic correspondences, stateof-the-art methods have leveraged deep convolutional neural networks (CNNs) in extracting descriptors [7, 53, 24] and regularizing correspondence \ufb01elds [15, 42, 19, 43, 23]. This research was supported by R&D program for Advanced Integrated-intelligence for Identi\ufb01cation (AIID) through the National Research Foundation of KOREA (NRF) funded by Ministry of Science and ICT (NRF-2018M3E3A1057289). \u2217Corresponding author Source Image Target\u00a0 Image Matching\u00a0 Fields Stylized\u00a0 Source SAM\u2010Net Attribute\u00a0Transfer\u00a0Networks Semantic\u00a0Matching\u00a0Networks \u2026 \u2026 Figure 1. 
Illustration of SAM-Net: for semantically similar images having both photometric and geometric variations, SAM-Net recurrently estimates semantic correspondences and synthesizes attribute transferred images in a joint and boosting manner. Compared to conventional handcrafted methods [35, 22, 5, 54, 48], they have achieved a highly reliable performance. To overcome the problem of limited ground-truth supervisions, some methods [42, 19, 43, 23] have tried to learn deep networks using only weak supervision in the form of image pairs based on the intuition that the matching cost between the source and target features over a set of transformations should be minimized at the correct transformation. These methods presume that the attribute variations between source and target images are negligible in the deep feature space. However, in practice the deep features often show limited performance in handling different attributes that exist in the source and target images, often degrading the matching accuracy dramatically. To transfer the attributes between source and target images, following the seminal work of Gatys et al. [10], numerous methods have been proposed to separate and recombine the contents and attributes using deep CNNs [11, 6, 28, 21, 38, 16, 20, 16, 34, 12]. Unlike the parametric methods [11, 21, 38, 16] that match the global statistics of deep features while ignoring the spatial layout of contents, the non-parametric methods [6, 28, 34, 12] directly \ufb01nd neural patches in the target image similar to the source patch and synthesize them to reconstruct the stylized image. These non-parametric methods generally estimate nearest 1 arXiv:1904.02969v1 [cs.CV] 5 Apr 2019 \fneighbor patches between source and target images with weak implicit regularization methods [6, 28, 34, 12] using a simple local aggregation followed by a winner-takesall (WTA). However, photorealistic attribute transfer needs highly regularized and semantically meaningful correspondences, and thus existing methods [6, 28, 12] frequently fail when the images have background clutters and different attributes while representing similar global feature statistics. A method called deep image analogy [34] has tried to estimate more semantically meaningful dense corrrespondences for photorealistic attribute transfer, but it still has limited localization ability with PatchMatch [3]. In this paper, we present semantic attribute matching networks (SAM-Net) for overcoming the aforementioned limitations of current semantic matching and attribute transfer techniques. The key idea is to weave the advantages of semantic matching and attribute transfer networks in a boosting manner. Our networks accomplish this through an iterative process of establishing more reliable semantic correspondences by reducing the attribute discrepancy between semantically similar images and synthesizing an attribute transferred image with the learned semantic correspondences. Moreover, our networks are learned from weak supervision in the form of image pairs using the proposed semantic attribute matching loss. Experimental results show that SAM-Net outperforms the latest methods for semantic matching and attribute transfer on several benchmarks, including TSS dataset [48], PF-PASCAL dataset [14], and CUB-200-2011 dataset [51]. 2. Related Work Semantic correspondence. 
Most conventional methods for semantic correspondence that use handcrafted features and regularization methods [35, 22, 5, 54, 48] have provided limited performance due to a low discriminative power. Recent approaches have used deep CNNs for extracting their features [7, 53, 24, 39] and regularizing correspondence \ufb01elds [15, 41, 42]. Rocco et al. [41, 42] proposed deep architecture for estimating a geometric matching model, but these methods estimate only globally-varying geometric \ufb01elds. To deal with locally-varying geometric deformations, some methods such as UCN [7] and CAT-FCSS [25] were proposed based on STNs [18]. Recently, PARN [19], NC-Net [43], and RTNs [23] were proposed to estimate locally-varying transformation \ufb01elds using a coarse-to-\ufb01ne scheme [19], neighbourhood consensus [43], and an iteration technique [23]. These methods [19, 43, 23] presume that the attribute variations between source and target images are negligible in the deep feature space. However, in practice the deep features often show limited performance in handling different attributes. Aberman et al. [1] presented a method to deal with the attribute variations between the images using a variant of instance normalization [16]. However, the method does not have an explicit learnable module to reduce the attribute discrepancy, thus yielding limited performance. Attribute transfer. There have been a lot of works on the transfer of visual attributes, e.g., color, texture, and style, from one image to another, and most approaches are tailored to their speci\ufb01c objectives [40, 47, 8, 2, 52, 9]. Since our method represents and synthesizes deep features to transfer the attribute between semantically similar images, the neural style transfer [11, 6, 21, 20] is highly related to ours. In general, these approaches can be classi\ufb01ed into parametric and non-parametric methods. In parametric methods, inspired by the seminal work of Gatys et al. [10], numerous methods have been presented, such as the work of Johnson et al. [21], AdaIN [16], and WCT [31]. Since these methods are globally formulated, they have shown limited performance for photorealistic stylization tasks [32, 38]. To alleviate these limitations, Luan et al. proposed a deep photo style transfer [38] that computes and uses the semantic labels. Li et al. proposed Photo-WCT [32] to eliminate the artifacts using additional smoothing step. However, these methods still have been formulated without considering semantically meaningful correspondence \ufb01elds. Among non-parametric methods, the seminal work of Li et al. [28] \ufb01rst searches local neural patches, which are similar to the patch of content image, in the target style image to preserve the local structure prior of content image, and then uses them to synthesize the stylized image. Chen et al. [6] sped up this process using the feed-forward networks to decode the synthesize features. Inspired by this, various approaches have been proposed to synthesize locally blended features ef\ufb01ciently [29, 49, 37, 30, 50]. However, the aforementioned methods are tailored to the artistic style transfer, and thus they focused on \ufb01nding the patches to reconstruct more plausible images, rather than \ufb01nding semantically meaningful dense correspondences. They generally estimate the nearest neighbor patches using weak implicit regularization methods such as WTA. Recently, Gu et al. 
[12] introduced a deep feature reshuffle technique to connect both parametric and non-parametric methods, but they search the nearest neighbor using an expectation-maximization (EM) scheme that also produces limited localization accuracy. More related to our work is a method called deep image analogy [34] that searches semantic correspondences using deep PatchMatch [3] in a coarse-to-fine manner. However, PatchMatch inherently has limited regularization power, as shown in [27, 36, 33]. In addition, the method still needs a greedy optimization for feature deconvolution that induces computational bottlenecks, and it considers only translational fields, limiting its ability to handle more complicated deformations.
Figure 2. Intuition of SAM-Net: (a) methods for semantic matching [41, 42, 23, 19], (b) methods for attribute transfer [11, 21, 28], and (c) SAM-Net, which recurrently weaves the advantages of both existing semantic matching and attribute transfer techniques.
3. Problem Statement
Let us denote semantically similar source and target images as I^s and I^t, respectively. The objective of our method is to jointly establish a correspondence field f_i = [u_i, v_i]^T between the two images, defined for each pixel i = [i_x, i_y]^T, and to synthesize an attribute-transferred image I^{s←t} by transferring the attribute of the target image I^t to the content of the source image I^s. CNN-based methods for semantic correspondence [41, 25, 42, 19, 43, 23] involve first extracting deep features [45, 25], denoted by F^s_i and F^t_i, from I^s_i and I^t_i within local receptive fields, and then estimating the correspondence field f_i of the source image using deep regularization models [41, 42, 23], as shown in Fig. 2(a). To learn the networks using only image pairs, some methods [42, 23] formulate the loss function based on the intuition that the matching cost between the source feature F^s_i and the target feature F^t_{i+f_i} over a set of transformations should be minimized. For instance, they formulate the matching loss as
L_M = \sum_i \| F^s_i - F^t_{i+f_i} \|^2_F ,    (1)
where \| \cdot \|^2_F denotes the squared Frobenius norm. To deal with more complex deformations such as affine transformations [27, 23], F^t(T_i) or F^t_{i+f_i}(A_i) can be used instead of F^t_{i+f_i}, with a 2 × 3 matrix T_i = [A_i, f_i]. Although semantically similar images share similar contents but have different attributes, these methods [41, 42, 19, 43, 23] simply assume that the attribute variations between source and target images are negligible in the deep feature space. Without an explicit module to reduce the attribute gap, they therefore cannot guarantee an accurate matching cost. To minimize the attribute discrepancy between source and target images, attribute or style transfer methods [11, 6, 21, 20] separate and recombine the content and attribute.
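Before turning to how these attribute-transfer methods are formulated, it may help to make the baseline matching loss of Eq. (1) concrete. The sketch below is only an illustration, not the authors' implementation: the tensor layout, the bilinear warp, and averaging instead of summing over pixels are assumptions.

```python
# Sketch of the baseline matching loss in Eq. (1): warp the target feature map
# by a per-pixel flow f and penalize the squared difference to the source
# features. Layouts and the bilinear warp are illustrative assumptions.
import torch
import torch.nn.functional as F

def warp_by_flow(feat_t, flow):
    """feat_t: (B, C, H, W) target features; flow: (B, 2, H, W) in pixels, channels (dx, dy)."""
    B, C, H, W = feat_t.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat_t.device)        # (2, H, W), channel 0 = x
    coords = base.unsqueeze(0) + flow                                    # sampling points i + f_i
    gx = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0                        # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                                 # (B, H, W, 2) for grid_sample
    return F.grid_sample(feat_t, grid, mode="bilinear", align_corners=True)

def matching_loss(feat_s, feat_t, flow):
    """Eq. (1): squared feature difference after warping; the paper writes a sum over
    pixels, a mean is used here since the scaling is immaterial for optimization."""
    warped_t = warp_by_flow(feat_t, flow)
    return ((feat_s - warped_t) ** 2).sum(dim=1).mean()
```

In practice the flow would come from the matching network and the features from a pre-trained backbone; the point of the sketch is only the structure of the loss.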
Unlike the parametric methods [11, 38], the non-parametric methods [6, 28, 34, 12] directly \ufb01nd neural patches in the target image similar to the source patch and synthesize them to reconstruct the stylized feature F s\u2190t and image Is\u2190t, as shown in Fig. 2(b). Formally, they formulate two loss functions including the content loss de\ufb01ned as LC = X i \u2225F s\u2190t i \u2212F s i \u22252 F , (2) and the non-parametric attribute transfer loss de\ufb01ned as LA = X i X j\u2208Ni \u2225F s\u2190t j \u2212F t j+fi\u22252 F , (3) where i + fi is the center point of the patch in It that is most similar to a patch centered at i in Is. Generally, fi is determined using the matching scores of normalized crosscorrelation [6, 28] aggregated on Ni over all local patches followed by the labeling optimization such that fi = argmax m X j\u2208Ni (F s j \u00b7 F t j+m)/\u2225F s j \u2225\u2225F t j+m\u2225, (4) where the operator \u00b7 denotes inner product. However, the hand-designed discrete labeling techniques such as WTA [6, 28], PatchMatch [34], and EM [12] used to optimize (4) rely on weak implicit smoothness constraints, often producing poor matching results. In addition, they only consider the translational \ufb01elds, i.e., fi, thus limiting handling more complicated deformations caused by scale, rotation and skew that may exist among object instances. 4. Method 4.1. Overview We present the networks to recurrently estimate semantic correspondences and synthesize the stylized images in a boosting manner, as shown in Fig. 2(c). In the networks, correspondences are robustly established by matching the stylized source and target images, in contrast to existing methods [42, 23] that directly match source and target images that have the attribute discrepancy. At the same time, blended neural patches using the correspondences are used to reconstruct the attribute transferred image in a semanticaware and geometrically aligned manner. Our networks are split into three parts as shown in Fig. 3: feature extraction networks to extract source and target features F s and F t, semantic matching networks to establish 3 \fCorrelation Blending \u2026 \u2026 \u2026 Convolution Max-pooling Up-sampling \u2026 \u2026 droplink droplink skip skip Attribute Transfer Networks Semantic Matching Networks Feature Extraction Networks , s t l F \u00ac , s t l I \u00ac s I t I , t l F skip skip , s t l B \u00ac l C 1 l T 1 ( ; ) l l l G T T C W + = F ( ; ) s F I W F , ( ; ) s t l D B W \u00ac F ( ; ) l G C W F ( ; ) t F I W F Figure 3. Network con\ufb01guration of SAM-Net, consisting of feature extraction networks, semantic matching networks, and attribute transfer networks in a recurrent structure. Initially, F s\u2190t,0 = F s and F t,0 = [I2\u00d72, 02\u00d71]. They output T l i and Is\u2190t,l at each l-th iteration. (a) (b) (c) (d) (e) (f) (g) (h) Figure 4. Convergence of SAM-Net: (a) source image, (b) target image, iterative evolution of attribute transferred images (c), (e), and (g) and warped images using dense corresondences (d), (f), and (h) after iteration 1, 2, and 3. In the recurrent formulation of SAM-Net, the predicted transformation \ufb01elds and attribute transferred images become progressively more accurate through iterative estimation. correspondence \ufb01elds T, and attribute transfer networks to synthesize the attribute transferred image Is\u2190t. Since our networks are formulated in a recurrent manner, they output T l and Is\u2190t,l at each l-th iteration, as exempli\ufb01ed in Fig. 
4. 4.2. Network Architecture Feature extraction networks. Our model accomplishes the semantic matching and attribute transfer using deep features [45, 25]. To extract the features for source F s and target F t, the source and target images (Is and It) are \ufb01rst passed through shared feature extraction networks with parameters WF such that Fi = F(Ii; WF ), respectively. In the recurrent formulation, an attribute transferred feature F s\u2190t,l from target to source images and a warped target feature F t,l, i.e., F t warped using the transformation \ufb01elds T l i , are reconstructed at each l-th iteration. Semantic matching networks. Our semantic matching networks consist of the matching cost computation and inference modules motivated by conventional RANSAC-like methods [17]. We \ufb01rst compute the correlation volume with respect to translational motion only [41, 42, 43, 23] and then pass it to subsequent convolutional layers to determine dense af\ufb01ne transformation \ufb01elds Ti. Unlike existing methods [41, 42, 23], our method computes the matching similarity between not only source and target features but also synthesized source and target features to minimize errors from the attribute discrepancy between source and target features such that: Cl i(p) =(1 \u2212\u03bbl)(F s i \u00b7 F t,l p )/\u2225F s i \u2225\u2225F t,l p \u2225 + \u03bbl(F s\u2190t,l i \u00b7 F t,l p )/\u2225F s\u2190t,l i \u2225\u2225F t,l p \u2225, (5) where p \u2208Pi for local search window Pi centered at i. \u03bbl controls the trade-off between content and attribute when computing the similarity, which is similar to [34]. Note that when \u03bbl = 0, we only consider the source feature F s without considering the stylized feature F s\u2190t. These similarities undergo L2 normalization to reduce errors [42]. Based on this, the matching inference networks with parameters WG iteratively estimate the residual between the previous and current transformation \ufb01elds [23] as T l i \u2212T l\u22121 i = F(Cl i; WG). (6) The current transformation \ufb01elds are then estimated in a re4 \fj i s F (a) t F l i i f \uf02b l j i f \uf02b l j j f \uf02b i j (b) t F i l i i f \uf02b l j i g \uf02b j l j j f \uf02b (c) Figure 5. Visualization of neural patch blending: for source feature F s in (a), unlike existing methods [34, 28, 12] that blend features of source F s and target F t using only traslationional \ufb01elds fi as in (b), our method blends the features with the learned af\ufb01ne transformation \ufb01elds T l i = [Al i, f l i] as in (c). current manner [23] as follows: T l i = [I2\u00d72, 02\u00d71] + X n\u2208\u03c6(l) F(Cn i ; WG), (7) where \u03c6(l) = {1, .., l \u22121}. Unlike [41, 42] that estimate a global af\ufb01ne or thin-plate spline transformation \ufb01eld, our networks are formulated as the encoder-decoder networks as in [44] to estimate locally-varying transformation \ufb01elds. Attribute transfer networks. To transfer the attribute of target feature F t into the content of source feature F s at l-th iteration, our attribute transfer networks \ufb01rst blend the source and target features as Bs\u2190t,l using estimated transformation \ufb01eld T l i and then reconstruct the stylized source image Is\u2190t,l using the decoder networks with parameters WD such that Is\u2190t,l = F(Bs\u2190t,l; WD). Speci\ufb01cally, our neural patch blending between F s and F t with the current transformation \ufb01eld T l i = [Al i, f l i] is formulated as shown in Fig. 
5 such that Bs\u2190t,l i = (1 \u2212\u03bbl)F s i + \u03bbl X j\u2208Ni \u03b1l jF t i+gl j/ X j\u2208Ni \u03b1l j, (8) where gl j = (Al j \u2212I2\u00d72)(i \u2212j) + f l j. \u03b1l i is a con\ufb01dence of each pixel i that has T l i computed similar to [26] such that \u03b1l i = exp(Cl i(i))/ X p\u2208Pi exp(Cl i(p)). (9) Our neural patch blending module differs from the existing methods [34, 28, 12] in the use of learned transformation \ufb01elds and consideration of more complex deformations such as af\ufb01ne transformations. In addition, unlike exisiting style transfer methods [28, 12], our networks employ the con\ufb01dence to transfer the attribute of matchable points only tailored to our objective, as exempli\ufb01ed in Fig. 6. In addition, our decoder networks are formulated as a symmetric structure to feature extraction networks. Since the single-level decoder networks as in [16] cannot capture both complicated structures at high-level features and lowlevel information at low-level features, the multi-level decoder networks have been proposed as in [31, 32], but they are not very economic [12]. Instead, we use the skip connection from the source features F s to capture both low(a) (b) (c) (d) Figure 6. Effects on the con\ufb01dence in neural patch blending: (a) blending results of Is and It, (b) blending results of F s and F t followed by the decoder, (c) con\ufb01dence, and (d) blending results of F s and F t with the con\ufb01dence followed by the decoder. and high-level attribute characteristics [31, 32, 12]. However, using the skip connection through simple concatenation [44] makes the decoder networks reconstruct an image using only low-level features. To alleviate this, inspired by a dropout layer [46], we present a droplink layer such that the skipped features and upsampled features are stochastically linked to avoid the over\ufb01tting to certain level features: F s\u2190t,l h = (1 \u2212bh)F(Bs\u2190t,l; WD,h) + bhF s h, (10) where F s\u2190t,l h and F s h are the intermediate and skipped features at h-th level for h \u2208{1, ..., H}. WD,h is the parameters until h-th level. bh is a binary random variable. Note that if bh = 0, this becomes the no-skip connected layer. 4.3. Loss Functions Semantic attribute matching loss. Our networks are learned using weak supervision in the form of image pairs. Concretely, we present a semantic attribute matching loss in a manner that the transformation \ufb01eld T and the stylized image Is\u2190t can be simultaneously learned and inferred to minimize a single loss function. After the convergence of iterations at L-th iteration, an attribute transferred feature F s\u2190t,L and a warped target feature F t,L are used to de\ufb01ne the loss function. This intuition can be realized by minimizing the following objective: D(F s\u2190t,L, F t,L) = X i X j\u2208Ni \u2225F s\u2190t,L j \u2212F t,L j \u22252 F . (11) In comparison to existing the matching loss LM and the attribute transfer loss LA, this objective enables us to solve the photometric and geometric variations across semantically similar images simultaneously. Although using only this objective provides satisfactory performance, we extend this objective to consider both positive and negative samples to enhance network training and precise localization ability based on the intuition that the matching score should be minimized at the correct transformation while keeping the scores of other neighbor transformation candidates high. 
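As a concrete reading of Eq. (11), the sketch below computes the patch-aggregated squared distance between the attribute-transferred source feature and the warped target feature. The tensor layout and the average-pooling-based aggregation over N_i are assumptions made for illustration, not details taken from the released code.

```python
# Sketch of the objective in Eq. (11): per-pixel squared distance between
# F^{s<-t,L} and F^{t,L}, summed over the small patch N_i around each pixel.
import torch
import torch.nn.functional as F

def patch_distance(feat_st, feat_t_warped, patch=3):
    """feat_st, feat_t_warped: (B, C, H, W). Returns D_i as a (B, H, W) map."""
    d = ((feat_st - feat_t_warped) ** 2).sum(dim=1, keepdim=True)             # pixel-wise distance
    d = F.avg_pool2d(d, patch, stride=1, padding=patch // 2) * patch * patch  # sum over N_i
    return d.squeeze(1)
```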
Finally, we formulate our semantic attribute matching loss as a cross-entropy loss,
L_{AM} = \sum_i \max(-\log(K_i), \tau),    (12)
where K_i is the softmax probability defined as
K_i = \frac{\exp(-D(F^{s\leftarrow t,L}_i, F^{t,L}_i))}{\sum_{q \in Q_i} \exp(-D(F^{s\leftarrow t,L}_i, F^{t,L}_q))}.    (13)
This makes the center point i within the neighborhood Q_i a positive sample, while the other points become negative samples. In addition, the truncated max operator max(·, τ) is used to focus on the salient parts, such as objects, during training, with τ a parameter.
Other losses. We utilize two additional losses, namely the content loss L_C as in (2) to preserve the structure of the source image, and the L2 regularization loss [21, 28] to encourage spatial smoothness in the stylized image.
Figure 7. Convergence analysis of SAM-Net for various numbers of iterations and search window sizes on the TSS benchmark [48] (average flow accuracy versus number of iterations).
Figure 8. Ablation study of SAM-Net without (top) and with (bottom) attribute transfer networks as iterations evolve: (a) input images, (b) iteration 1, (c) iteration 2, (d) iteration 3.
5. Experiments
5.1. Training and Implementation Details
To learn our SAM-Net, large-scale semantically similar image pairs are needed, but such public datasets are limited in quantity. To overcome this, we adopt a two-step training technique, similar to [42]. In the first step, we train our networks using the synthetic training dataset provided in [41], where synthetic transformations are randomly applied to a single image to generate the image pairs, and thus the images do not have appearance variations. This enables the attribute transfer networks to be learned in an auto-encoder manner [31, 16, 32], but the matching networks still have limited ability to deal with the attribute variations. To overcome this, in the second step, we finetune this pretrained network on semantically similar image pairs from the training set of PF-PASCAL [14], following the split used in [14]. For feature extraction, we used the ImageNet-pretrained VGG-19 network [45], where the activations are extracted from the 'relu4-1' layer (i.e., H = 4). We gradually increase λ_l toward 1 such that λ_l = 1 - exp(-l). During training, we set the maximum number of iterations L to 5 to avoid gradient vanishing and exploding problems. During testing, the iteration count is increased to 10.
Table 1. Matching accuracy compared to the state-of-the-art correspondence techniques on the TSS benchmark [48].
Methods             FG3D   JODS   PASC.  Avg.
Taniai et al. [48]  0.830  0.595  0.483  0.636
PF [13]             0.786  0.653  0.531  0.657
DCTM [27]           0.891  0.721  0.610  0.740
SCNet [15]          0.776  0.608  0.474  0.619
GMat. [41]          0.835  0.656  0.527  0.673
GMat. w/Inl. [42]   0.892  0.758  0.562  0.737
DIA [34]            0.762  0.685  0.513  0.653
RTNs [23]           0.901  0.782  0.633  0.772
SAM-Net w/(11)      0.891  0.789  0.638  0.773
SAM-Net wo/Att.     0.912  0.790  0.641  0.781
SAM-Net             0.961  0.822  0.672  0.818
Table 2. Matching accuracy (PCK) compared to the state-of-the-art correspondence techniques on the PF-PASCAL benchmark [14].
Methods             α = 0.05  α = 0.1  α = 0.15
PF [13]             0.314     0.625    0.795
DCTM [27]           0.342     0.696    0.802
SCNet [15]          0.362     0.722    0.820
GMat. [41]          0.410     0.695    0.804
GMat. w/Inl. [42]   0.490     0.748    0.840
DIA [34]            0.471     0.724    0.811
RTNs [23]           0.552     0.759    0.852
NC-Net [43]         -         0.789    -
SAM-Net             0.601     0.802    0.869
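Following the same patch-distance computation as in the sketch above, the snippet below assembles the loss of Eqs. (12)-(13) together with the schedule λ_l = 1 - exp(-l) used during training. The circular-shift enumeration of Q_i, the zero-padded patch aggregation, and the placeholder value of τ are assumptions for illustration rather than the authors' settings.

```python
# Sketch of the semantic attribute matching loss of Eqs. (12)-(13): negatives
# are generated by shifting the warped target features relative to the
# attribute-transferred source features over the window Q_i.
import math
import torch
import torch.nn.functional as F

def sam_loss(feat_st, feat_t_warped, window=9, patch=3, tau=7.0):
    """feat_st: F^{s<-t,L}, (B, C, H, W); feat_t_warped: F^{t,L}, (B, C, H, W).
    tau is the truncation level of Eq. (12); its value here is a placeholder."""
    r = window // 2
    dists = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = torch.roll(feat_t_warped, shifts=(dy, dx), dims=(2, 3))
            d = ((feat_st - shifted) ** 2).sum(dim=1, keepdim=True)
            # Eq. (11): aggregate the pixel distance over the small patch N_i
            d = F.avg_pool2d(d, patch, stride=1, padding=patch // 2) * patch * patch
            dists.append(d)
    D = torch.cat(dists, dim=1)                         # (B, |Q_i|, H, W)
    center = (window * window) // 2                     # index of the zero shift
    log_k = F.log_softmax(-D, dim=1)[:, center]         # log K_i, Eq. (13)
    return torch.clamp(-log_k, min=tau).mean()          # Eq. (12): max(-log K_i, tau)

def lambda_schedule(l):
    """Content/attribute trade-off used across iterations: lambda_l = 1 - exp(-l)."""
    return 1.0 - math.exp(-float(l))
```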
Following [23], the window sizes of Ni, Pi, and Qi are set to 3 \u00d7 3, 9 \u00d7 9, and 9 \u00d7 9, respectively. The probability of bh is de\ufb01ned as 0.9 and in testing bh is set to 0.5. 5.2. Experimental Settings In the following, we comprehensively evaluated SAMNet through comparisons to state-of-the-art methods for semantic matching, including Taniai et al. [48], PF [13], SCNet [15], DCTM [24], DIA [34], GMat. [41], GMat. w/Inl. [42], NC-Net [43], RTNs [23], and for attribute transfer, including Gatys et al. [10], CNN-MRF [28], PhotoWCT [32], Gu et al. [12], and DIA [34]. Performance was 6 \f(a) (b) (c) (d) (e) (f) (g) (h) Figure 9. Qualitative results on the TSS benchmark [48]: (a) source and (b) target images, warped source images using correspondences of (c) PF [13], (d) DCTM [27], (e) GMat [41], (f) DIA [34], (g) GMat. w/Inl. [42], and (h) SAM-Net. (a) (b) (c) (d) (e) (f) (g) (h) Figure 10. Qualitative results on the PF-PASCAL benchmark [13]: (a) source and (b) target images, warped source images using correspondences of (c) DCTM [27], (d) SCNet [15], (e) DIA [34] (f) GMat. w/Inl. [42], (g) RTNs [23], and (h) SAM-Net. measured on TSS dataset [48], PF-PASCAL dataset [14], and CUB-200-2011 dataset [51]. In Sec. 5.3, we \ufb01rst analyzed the effects of the components within SAM-Net, and then evaluated matching results with various benchmarks and quantitative measures in Sec. 5.4. We \ufb01nally evaluated photorealistic attribute transfer results with various applications in Sec. 5.5. 5.3. Ablation Study To validate the components within SAM-Net, we evaluated the matching accuracy for different numbers of iterations, with various sizes of Pi, and with and without attribute transfer module. For quantitative assessment, we examined the accuracy on the TSS benchmark [48]. As shown in Fig. 7, Fig. 8, and Table 1, SAM-Net converges in 2\u22123 iterations. In addition, the results of \u2018SAM-Net wo/Att.\u2019, i.e., SAM-Net without attribute transfer, show the effectiveness of attribute transfer module in the recurrent formulation. The results of \u2018SAM-Net wo/(11).\u2019, i.e., SAM-Net with the loss of (11), show the importance to consider the negative samples when training. By enlarging the size of Pi, the accuracy improves until 9\u00d79, but larger window sizes reduce matching accuracy due to greater matching ambiguity. Note that Qi = Pi following to [23]. 5.4. Semantic Matching Results TSS benchmark. We evaluated SAM-Net on the TSS benchmark [48], consisting of 400 image pairs. As in [24, 27], \ufb02ow accuracy was measured in Table 1. Fig. 9 shows qualitative results. Unlike existing methods [7, 48, 13, 15, 24, 41, 42, 23] that do not consider the attribute variations between semantically similar images, our SAM-Net has shown highly improved preformance qualitatively and quantitatively. DIA [34] has shown limited matching accuracy compared to other deep methods [42, 23], due to their limited regularization powers. Unlike this, the results of our SAM-Net shows that our method is more successfully transferring the attribute between source and target images to improve the semantic matching accuracy. PF-PASCAL benchmark. We also evaluated SAM-Net on the PF-PASCAL benchmark [14], which contains 1,351 image pairs over 20 object categories with PASCAL keypoint annotations [4]. For the evaluation metric, we used the PCK between \ufb02ow-warped keypoints and the ground truth as done in the experiments of [15]. Table 2 summarizes the PCK values, and Fig. 10 shows qualitative results. 
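For reference, the PCK numbers reported in Table 2 can be computed as in the short sketch below. Whether the threshold is normalized by the image size or by the object bounding box varies between papers, so the convention used here is only one plausible choice.

```python
# Sketch of the PCK metric: a keypoint transferred by the predicted flow counts
# as correct if it lands within alpha * max(H, W) of the annotated keypoint.
import numpy as np

def pck(pred_kps, gt_kps, height, width, alpha=0.1):
    """pred_kps, gt_kps: (N, 2) arrays of (x, y) keypoint locations."""
    thresh = alpha * max(height, width)
    dists = np.linalg.norm(np.asarray(pred_kps) - np.asarray(gt_kps), axis=1)
    return float((dists <= thresh).mean())
```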
Similar to the experiments on the TSS benchmark [48], CNN-based methods [15, 41, 42, 42, 23] including our SAM-Net yield better performance, with SAM-Net providing the highest matching accuracy. 5.5. Applications Photorealistic attribute transfer. We evaluated SAMNet for photorealistic attribute transfer on the TSS [48] and PF-PASCAL benchmarks [14]. For evaluatation, we sampled the image pairs from these datasets and transferred the attribute of target image to the source image as shown in Fig. 11. Note that SAM-Net is designed to work on images contain that semantically similar contents and not effective for generic artistic style transfer applications as in [10, 21, 16]. As expected, existing methods tailored to 7 \f(a) (b) (c) (d) (e) (f) (g) (h) Figure 11. Qualitative results of the photorealistic attribute transfer on the TSS [48] PF-PASCAL [14] benchmarks: (a) source and (b) target images, results of (c) Gatys et al. [10], (d) CNN-MRF [28], (e) Photo-WCT [32], (f) Gu et al. [12], (g) DIA [34], and (h) SAM-Net. (a) (b) (c) (d) (e) Figure 12. Qualitative results of the mask transfer on the CUB200-2011 benchmark [51]: source (a) images and (b) masks and target (c) images and (d) masks, and (e) warped source masks to the target images using correspondences from SAM-Net. artistic stylization such as a method of Gatys et al. [10] and CNN-MRF [28] produce limited quality images. Moreover, recent photorealistic stylization methods such as PhotoWCT [32] and Gu et al. [12] have limited performance for the images that have background clutters. DIA [34] provided degraded results due to its weak regularization technique. Unlike these methods, our SAM-Net has shown highly accurate and plausible results thanks to their learned transformation \ufb01elds to synthesize the images. Note that some methods such as Photo-WCT [32] and DIA [34] have used to re\ufb01ne their results using additional smoothing modules, but SAM-Net does not use any post-processing. Foreground mask transfer. We evaluated SAM-Net for mask transfer on the CUB-200-2011 dataset [51], which contains images of 200 bird categories, with annotated foreground masks. For semantically similar images that have very challenging photometric and geometric varia(a) (b) (c) (d) (e) Figure 13. Qualitative results of the object trans\ufb01guration on the CUB-200-2011 benchmark [51]: (a) source and (b) target images, results of (c) Gu et al. [12], (d) DIA [34], and (e) SAM-Net. tions, our SAM-Net successfully transfers the semantic labels, as shown in Fig. 12. Object trans\ufb01guration. We \ufb01nally applied our method to object trans\ufb01guration, e.g., translating a source bird into a target breed. We used object classes from the CUB-2002011 dataset [51]. In this application, our SAM-Net has shown very plausible results as exempli\ufb01ed in Fig. 13. 6." + }, + { + "url": "http://arxiv.org/abs/1810.12155v1", + "title": "Recurrent Transformer Networks for Semantic Correspondence", + "abstract": "We present recurrent transformer networks (RTNs) for obtaining dense\ncorrespondences between semantically similar images. Our networks accomplish\nthis through an iterative process of estimating spatial transformations between\nthe input images and using these transformations to generate aligned\nconvolutional activations. By directly estimating the transformations between\nan image pair, rather than employing spatial transformer networks to\nindependently normalize each individual image, we show that greater accuracy\ncan be achieved. 
This process is conducted in a recursive manner to refine both\nthe transformation estimates and the feature representations. In addition, a\ntechnique is presented for weakly-supervised training of RTNs that is based on\na proposed classification loss. With RTNs, state-of-the-art performance is\nattained on several benchmarks for semantic correspondence.", + "authors": "Seungryong Kim, Stephen Lin, Sangryul Jeon, Dongbo Min, Kwanghoon Sohn", + "published": "2018-10-29", + "updated": "2018-10-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Establishing dense correspondences across semantically similar images can facilitate a variety of computer vision applications including non-parametric scene parsing, semantic segmentation, object detection, and image editing [25; 22; 20]. In this semantic correspondence task, the images resemble each other in content but differ in object appearance and con\ufb01guration, as exempli\ufb01ed in the images with different car models in Fig. 1(a-b). Unlike the dense correspondence computed for estimating depth [34] or optical \ufb02ow [4], semantic correspondence poses additional challenges due to intra-class appearance and shape variations among different instances from the same object or scene category. To address these challenges, state-of-the-art methods generally extract deep convolutional neural network (CNN) based descriptors [5; 45; 18], which provide some robustness to appearance variations, and then perform a regularization step to further reduce the range of appearance. The most recent techniques handle geometric deformations in addition to appearance variations within deep CNNs. These methods can generally be classi\ufb01ed into two categories, namely methods for geometric invariance in the feature extraction step, e.g., spatial transformer networks (STNs) [15; 5; 20], and methods for geometric invariance in the regularization step, e.g., geometric matching networks [30; 31]. The STN-based methods infer geometric deformation \ufb01elds within a deep network and transform the convolutional activations to provide geometric-invariant features [5; 41; 20]. While this approach has shown geometric invariance to some extent, we conjecture that directly estimating the geometric deformations between a pair of input images would be more robust and precise than learning to transform each individual image to a geometric-invariant feature representation. This direct estimation approach is used by geometric matching-based techniques [30; 31], which recover a matching model directly through deep networks. Drawbacks of these methods include that globally-varying geometric \ufb01elds are inferred, and only \ufb01xed, untransformed versions of the features are used. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montr\u00e9al, Canada. arXiv:1810.12155v1 [cs.CV] 29 Oct 2018 \f(a) (b) (c) (d) (e) (f) Figure 1: Visualization of results from RTNs: (a) source image; (b) target image; (c), (d) warped source and target images using dense correspondences from RTNs; (e), (f) pseudo ground-truth transformations as in [36]. RTNs learn to infer transformations without ground-truth supervision. In this paper, we present recurrent transformer networks (RTNs) for overcoming the aforementioned limitations of current semantic correspondence techniques. As illustrated in Fig. 
2, the key idea of RTNs is to directly estimate the geometric transformation \ufb01elds between two input images, like what is done by geometric matching-based approaches [30; 31], but also apply the estimated \ufb01eld to transform the convolutional activations of one of the images, similar to STN-based methods [15; 5; 20]. We additionally formulate the RTNs to recursively estimate the geometric transformations, which are used for iterative geometric alignment of feature activations. In this way, regularization is enhanced through recursive re\ufb01nement, while feature extraction is likewise iteratively re\ufb01ned according to the geometric transformations as well as jointly learned with the regularization. Moreover, the networks are learned in a weakly-supervised manner via a proposed classi\ufb01cation loss de\ufb01ned between the source image features and the geometrically-aligned target image features, such that the correct transformation is identi\ufb01ed by the highest matching score while other transformations are considered as negative examples. The presented approach is evaluated on several common benchmarks and examined in an ablation study. The experimental results show that this model outperforms the latest weakly-supervised and even supervised methods for semantic correspondence. 2 Related Work Semantic Correspondence To elevate matching quality, most conventional methods for semantic correspondence focus on improving regularization techniques while employing handcrafted features such as SIFT [27]. Liu et al. [25] pioneered the idea of dense correspondence across different scenes, and proposed SIFT \ufb02ow. Inspired by this, methods have been presented based on deformable spatial pyramids (DSP) [17], object-aware hierarchical graphs [39], exemplar LDA [3], joint image set alignment [44], and joint co-segmentation [36]. As all of these techniques use handcrafted descriptors and regularization methods, they lack robustness to geometric deformations. Recently, deep CNN-based methods have been used in semantic correspondence as their descriptors provide some degree of invariance to appearance and shape variations. Among them are techniques that utilize a 3-D CAD model for supervision [45], employ fully convolutional feature learning [5], learn \ufb01lters with geometrically consistent responses across different object instances [28], learn networks using dense equivariant image labelling [37], exploit local self-similarity within a fully convolutional network [18; 20], and estimate correspondences using object proposals [7; 8; 38]. However, none of these methods is able to handle non-rigid geometric variations, and most of them are formulated with handcrafted regularization. More recently, Han et al. [9] formulated the regularization into the CNN but do not deal explicitly with the signi\ufb01cant geometric variations encountered in semantic correspondence. Spatial Invariance Some methods aim to alleviate spatial variation problems in semantic correspondence through extensions of SIFT \ufb02ow, including scale-less SIFT \ufb02ow (SLS) [11], scale-space SIFT \ufb02ow (SSF) [29], and generalized DSP (GDSP) [13]. A generalized PatchMatch algorithm [1] was proposed for ef\ufb01cient matching that leverages a randomized search scheme. It was utilized by HaCohen et al. [6] in a non-rigid dense correspondence (NRDC) algorithm. Spatial invariance to scale and rotation is provided by DAISY \ufb01lter \ufb02ow (DFF) [40]. 
While these aforementioned techniques provide some degree of geometric invariance, none of them can deal with af\ufb01ne transformations over an image. Recently, Kim et al. [19; 21] proposed the discrete-continuous transformation matching 2 \fFeature Extraction Feature Extraction Flow Estimation Localisation i f i A s i D ( ) t i D A Source Target (a) Feature Extraction Feature Extraction Geometric Matching s i D t i D i T Source Target (b) Feature Extraction Feature Extraction Geometric Matching s i D ( ) t i D T i T Source Target (c) Figure 2: Intuition of RTNs: (a) methods for geometric inference in the feature extraction step, e.g., STN-based methods [5; 20], (b) methods for geometric invariance in the regularization step, e.g., geometric matching-based methods [30; 31], and (c) RTNs, which weave the advantages of both existing STN-based methods and geometric matching techniques, by recursively estimating geometric transformation residuals using geometry-aligned feature activations. (DCTM) framework where dense af\ufb01ne transformation \ufb01elds are inferred using a hand-designed energy function and regularization. To deal with geometric variations within CNNs, STNs [15] offer a way to provide geometric invariance by warping features through a global transformation. Inspired by STNs, Lin et al. [23] proposed inverse compositional STNs (IC-STNs) that replaces the feature warping with transformation parameter propagation. Kanazawa et al. [16] presented WarpNet that predicts a warp for establishing correspondences. Rocco et al. [30; 31] proposed a CNN architecture for estimating a geometric matching model for semantic correspondence. However, they estimate only globally-varying geometric \ufb01elds, thus leading to limited performance in dealing with locally-varying geometric deformations. To deal with locally-varying geometric variations, some methods such as UCN-spatial transformer (UCN-ST) [5] and convolutional af\ufb01ne transformer-FCSS (CAT-FCSS) [20] employ STNs [15] at the pixel level. Similarly, Yi et al. [41] proposed the learned invariant feature transform (LIFT) to learn sparsely, locally-varying geometric \ufb01elds, inspired by [42]. However, these methods determine geometric \ufb01elds by accounting for the source and target images independently, rather than jointly, which limits their prediction ability. 3 Background Let us denote semantically similar source and target images as Is and It, respectively. The objective is to establish a correspondence \ufb01eld fi = [ui, vi]T between the two images that is de\ufb01ned for each pixel i = [ix, iy]T in Is. Formally, this involves \ufb01rst extracting handcrafted or deep features, denoted by Ds i and Dt i, from Is i and It i within local receptive \ufb01elds, and then estimating the correspondence \ufb01eld fi of the source image by maximizing the feature similarity between Ds i and Dt i+fi over a set of transformations using handcrafted or deep geometric regularization models. Several approaches [25; 18] assume the transformation to be a 2-D translation with negligible variation within local receptive \ufb01elds. As a result, they often fail to handle complicated deformations caused by scale, rotation, or skew that may exist among object instances. For greater geometric invariance, recent approaches [19; 21] have modeled the deformations as an af\ufb01ne transformation \ufb01eld represented by a 2 \u00d7 3 matrix Ti = [ Ai | fi ] (1) that maps pixel i to i\u2032 = i + fi. 
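To make Eq. (1) concrete, the small sketch below applies a per-pixel field T_i = [A_i | f_i] to a local neighborhood: an offset d around pixel i is mapped to i + f_i + A_i d. The offset convention is an assumption used only for illustration.

```python
# Sketch of how the 2x3 field T_i = [A_i | f_i] acts on a local neighborhood.
import numpy as np

def transformed_samples(i, T, radius=1):
    """i: (2,) pixel (x, y); T: (2, 3) affine [A | f]; returns (K, 2) sample coordinates."""
    A, f = T[:, :2], T[:, 2]
    offsets = np.array([[dx, dy] for dy in range(-radius, radius + 1)
                                 for dx in range(-radius, radius + 1)], dtype=float)
    return i + f + offsets @ A.T          # each offset d maps to i + f + A d

# Example: an identity A with a pure translation reduces to i + f_i + d.
print(transformed_samples(np.array([10.0, 20.0]),
                          np.array([[1.0, 0.0, 3.0], [0.0, 1.0, -2.0]])))
```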
Speci\ufb01cally, they maximize the similarity between the source Ds i and target Dt i\u2032(Ai), where D(Ai) represents the feature extracted from spatially-varying local receptive \ufb01elds transformed by a 2 \u00d7 2 matrix Ai [5; 20]. For simplicity, we denote Dt(Ti) = Dt i+fi(Ai). Approaches for geometric invariance in semantic correspondence can generally be classi\ufb01ed into two categories. The \ufb01rst group infers the geometric \ufb01elds in the feature extraction step by minimizing a matching objective function [5; 20], as exempli\ufb01ed in Fig. 2(a). Concretely, Ai is learned without a ground-truth A\u2217 i by minimizing the difference between Ds i and Dt i+fi(Ai) according to a ground-truth \ufb02ow \ufb01eld f \u2217 i . This enables explicit feature learning which aims to minimize/maximize convolutional activation differences between matching/non-matching pixel pairs [5; 20]. However, ground-truth \ufb02ow \ufb01elds f \u2217 i are still needed for learning the networks, and it predicts the geometric 3 \fSource Target Feature Extraction Net. Geometric Matching Network Encoder Decoder 1 k i \uf02d T 1 ( ) t k i D \uf02d T s i D Correlation \u2026 \u2026 Skip Connection 1 1 ( ( , ( )) | ) k k s t k i i i i G D D \uf02d \uf02d \uf03d \uf02b T T T W 1 ( ( , ( )) | ) s t k i i G D D \uf02d T W ( | ) i F I W Figure 3: Network con\ufb01guration of RTNs, consisting of a feature extraction network and a geometric matching network in a recurrent structure. \ufb01elds Ai based only on the source or target feature, without jointly considering the source and target, thus limiting performance. The second group estimates a geometric matching model directly through deep networks by considering the source and target features simultaneously [30; 31]. These methods formulate the geometric matching networks by mimicking conventional RANSAC-like methods [14] through feature extraction and geometric matching steps. As illustrated in Fig. 2(b), the geometric \ufb01elds Ti are predicted in a feed-forward network from extracted source features Ds i and target features Dt i. By learning to extract source and target features and predict geometric \ufb01elds in an end-to-end manner, more robust geometric \ufb01elds can be estimated compared to existing STN-based methods that consider source or target features independently as shown in [31]. A major limitation of these learning-based methods is the lack of ground-truth geometric \ufb01elds T\u2217 i between source and target images. To alleviate this problem, some methods use self-supervision such as synthetic transformations [30] or weak-supervision such as soft-inlier maximization [31], but these approaches constrain the global geometric \ufb01eld only. Moreover, these methods utilize feature descriptors extracted from the original upright images, rather than from geometrically transformed images, which limits their capability to represent severe geometric variations. 4 Recurrent Transformer Networks 4.1 Motivation and Overview In this section, we describe the formulation of recurrent transformer networks (RTNs). The objective of our networks is to learn and infer locally-varying af\ufb01ne deformation \ufb01elds Ti in an end-to-end and weakly-supervised fashion using only image pairs without ground-truth transformations T\u2217 i . 
Toward this end, we present an effective and ef\ufb01cient integration of the two existing approaches for geometric invariance, i.e., STN-based feature extraction networks [5; 20] and geometric matching networks [30; 31], that includes a novel weakly-supervised loss function tailored to our objective. Speci\ufb01cally, the \ufb01nal geometric \ufb01eld is recursively estimated by deforming the activations of feature extraction networks according to the intermediate output of the geometric matching networks, in contrast to existing approaches based on geometric matching which consider only \ufb01xed, upright versions of features [30; 31]. At the same time, our method outperforms STN-based approaches [5; 20] by using a deep CNN-based geometric matching network instead of handcrafted matching criteria. Our recurrent geometric matching approach intelligently weaves the advantages of both existing STN-based methods and geometric matching techniques, by recursively estimating geometric transformation residuals using geometry-aligned feature activations. Concretely, our networks are split into two parts, as shown in Fig. 3: a feature extraction network to extract source Ds i and target Dt(Ti) features, and a geometric matching network to infer the geometric \ufb01elds Ti. To learn these networks in a weakly-supervised manner, we formulate a novel classi\ufb01cation loss de\ufb01ned without ground-truth T\u2217 i based on the assumption that the transformation which maximizes the similarity of the source features Ds i and transformed target features Dt(Ti) at a pixel i should be correct, while the matching scores of other transformation candidates should be minimized. 4 \f(a) (b) (c) (d) (e) (f) Figure 4: Visualization of search window Ni in RTNs (e.g., |Ni| : 5 \u00d7 5): Source images with the search window of (a) stride 4, (c) stride 2 , (e) stride 1, and target images with (b), (d), (f) transformed points for (a), (c), (e), respectively. As evolving iterations, the dilate strides are reduced to consider precise matching details. 4.2 Feature Extraction Network To extract convolutional features for source Ds and target Dt, the input source and target images (Is, It) are \ufb01rst passed through fully-convolutional feature extraction networks with shared parameters WF such that Di = F(IiWF ), and the feature for each pixel then undergoes L2 normalization. In the recurrent formulation, at each iteration the target features Dt can be extracted according to Ti such that Dt(Ti) = F(It(Ti)|WF ). However, extracting each feature by transforming local receptive \ufb01elds within the target image It according to Ti for each pixel i and then passing it through the networks would be time-consuming when iterating the networks. Instead, we employ a strategy similar to UCN-ST [5] and CAT-FCSS [20] by \ufb01rst extracting the convolutional features of the entire image It by passing it through the networks except for the last convolutional layer, and then computing Dt(Ti) by transforming the resultant convolutional features and \ufb01nally passing it through the last convolution with stride to combine the transformed activations independently [5; 20]. It should be noted that any other convolutional features [35; 12] could be used in this framework. 
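The following sketch illustrates this efficiency trick for a single pixel: the target image is passed through all but the last convolution once, and D^t(T_i) is then obtained by resampling the cached feature map according to T_i and applying only the final convolution. The bilinear sampling, the 3x3 kernel size, and the single-pixel formulation are assumptions; the actual networks vectorize this over all pixels.

```python
# Sketch of evaluating D^t(T_i) from cached intermediate features.
import torch
import torch.nn.functional as F

def descriptor_at(feat_t, i_xy, T_i, last_conv, k=3):
    """feat_t: (1, C, H, W) cached intermediate target features.
    i_xy: (2,) source pixel (x, y); T_i: (2, 3) affine [A | f] for that pixel.
    last_conv: nn.Conv2d(C, C_out, k) -- the final layer of the feature extractor."""
    _, C, H, W = feat_t.shape
    A, f = T_i[:, :2], T_i[:, 2]
    r = k // 2
    d = torch.tensor([[dx, dy] for dy in range(-r, r + 1) for dx in range(-r, r + 1)],
                     dtype=torch.float32)
    coords = i_xy + f + d @ A.T                        # sample points i + f_i + A_i d
    gx = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0      # normalize to [-1, 1] for grid_sample
    gy = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1).view(1, k, k, 2)
    patch = F.grid_sample(feat_t, grid, mode="bilinear", align_corners=True)  # (1, C, k, k)
    return last_conv(patch)                            # (1, C_out, 1, 1): the descriptor D^t(T_i)
```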
4.3 Recurrent Geometric Matching Network Constraint Correlation Volume To predict the geometric \ufb01elds from two convolutional features Ds and Dt, we \ufb01rst compute the correlation volume with respect to translational motion only [30; 31] and then pass it to subsequent convolutional layers to determine dense af\ufb01ne transformation \ufb01elds. As shown in [31], this two-step approach reliably prunes incorrect matches. Speci\ufb01cally, the similarity between two extracted features is computed as the cosine similarity with L2 normalization: C(Ds i , Dt(Tj)) = < Ds i , Dt(Tj) >/ rX l < Ds i , Dt(Tl) >2, (2) where j, l \u2208Ni for the search window Ni of pixel i. Compared to [30; 31] that consider all possible samples within an image, the constraint correlation volume de\ufb01ned within Ni reduces the matching ambiguity and computational times. However, due to the limited search window range, it may not cover large geometric variations. To alleviate this limitation, inspired by [43], we utilize dilation techniques in a manner that the local neighborhood Ni is enlarged with larger stride than 1 pixel, and this dilation is reduced as the iterations progress, as exempli\ufb01ed in Fig. 4. Recurrent Geometry Estimation Based on this matching similarity, the recurrent geometry estimation network with parameters WG iteratively estimates the residual between the previous and current geometric transformation \ufb01elds as Tk i \u2212Tk\u22121 i = F(C(Ds i , Dt(Tk\u22121)); WG), (3) where Tk i denotes the transformation \ufb01elds at the k-th iteration. The \ufb01nal geometric \ufb01elds are then estimated in a recurrent manner as follows: Ti = T0 i + X k\u2208{1,..,Kmax} F(C(Ds i , Dt(Tk\u22121)); WG), (4) where Kmax denotes the maximum iteration and T0 i is an initial geometric \ufb01eld. Unlike [30; 31] which estimate a global af\ufb01ne or thin-plate spline transformation \ufb01eld, we formulate the encoderdecoder networks as in [32] to estimate locally-varying geometric \ufb01elds. Moreover, our networks are 5 \f(a) (b) (c) (d) (e) (f) Figure 5: Convergence of RTNs: (a) source image; (b) target image; Iterative evolution of warped images (c), (d), (e), and (f) after iteration 1, 2, 3, and 4. In the recurrent formulation of RTNs, the predicted transformation \ufb01eld becomes progressively more accurate through iterative estimation. formulated in a fully-convolutional manner, thus source and target inputs of any size can be processed, in contrast to [30; 31] which can take inputs of only a \ufb01xed size. Iteratively inferring af\ufb01ne transformation residuals boosts matching precision and facilitates convergence. Moreover, inferring residuals instead of carrying the input information through the network has been shown to improve network optimization [12]. As shown in Fig. 5, the predicted transformation \ufb01eld becomes progressively more accurate through iterative estimation. 4.4 Weakly-supervised Learning A major challenge of semantic correspondence with deep CNNs is the lack of ground-truth correspondence maps for training. Obtaining such ground-truth data through manual annotation is labor-intensive and may be degraded by subjectivity [36; 7; 8]. To learn the networks using only weak supervision in the form of image pairs, we formulate the loss function based on the intuition that the matching score between the source feature Ds i at each pixel i and the target feature Dt(Ti) should be maximized while keeping the scores of other transformation candidates low. 
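Before this intuition is turned into a loss in the next paragraph, the constrained correlation volume of Eq. (2) and the residual accumulation of Eqs. (3)-(4) can be sketched as follows. The shift-based window enumeration, the particular dilation schedule, and the placeholder `warp_fn` and `regressor` modules are assumptions for illustration only, not the released code.

```python
# Sketch of the constrained correlation volume with a dilated window and of the
# recurrent residual update of the affine field T.
import torch

def constrained_correlation(feat_s, feat_t_warped, window=5, dilation=4):
    """feat_s, feat_t_warped: (B, C, H, W). Returns (B, window*window, H, W)."""
    r = window // 2
    sims = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = torch.roll(feat_t_warped, shifts=(dy * dilation, dx * dilation), dims=(2, 3))
            sims.append((feat_s * shifted).sum(dim=1, keepdim=True))   # inner products
    corr = torch.cat(sims, dim=1)
    return corr / corr.norm(dim=1, keepdim=True).clamp(min=1e-6)        # L2-normalize over the window

def recurrent_update(feat_s, feat_t, regressor, warp_fn, iters=4):
    """Eqs. (3)-(4): accumulate residual affine fields. T is stored as (B, 6, H, W)
    in row-major 2x3 order [a11, a12, tx, a21, a22, ty]; warp_fn and regressor are
    assumed callables (feature warping by T, and the encoder-decoder G)."""
    B, _, H, W = feat_s.shape
    T = feat_s.new_zeros(B, 6, H, W)
    T[:, 0] = T[:, 4] = 1.0                                             # identity initialization [I | 0]
    for l in range(1, iters + 1):
        corr = constrained_correlation(feat_s, warp_fn(feat_t, T),
                                       dilation=max(4 // l, 1))         # shrink dilation over iterations
        T = T + regressor(corr)                                         # T^l = T^{l-1} + G(C^l; W_G)
    return T
```

The exact dilation values are not specified here in code-level detail by the paper; the decreasing schedule above is just one way to realize the coarse-to-fine search it describes.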
This can be treated as a classi\ufb01cation problem in that the network can learn a geometric \ufb01eld as a hidden variable for maximizing the scores for matchable Ti while minimizing the scores for non-matchable transformation candidates. The optimal \ufb01elds Ti can be learned with a classi\ufb01cation loss [20] in a weakly-supervised manner by minimizing the energy function L(Ds i , Dt(T)) = \u2212 X j\u2208Mi p\u2217 j log(p(Ds i , Dt(Tj))), (5) where the function p(Ds i , Dt(Tj)) is a softmax probability de\ufb01ned as p(Ds i , Dt(Tj)) = exp(C(Ds i , Dt(Tj))) P l\u2208Mi exp(C(Ds i , Dt(Tl))), (6) with p\u2217 j denoting a class label de\ufb01ned as 1 if j = i, and 0 otherwise for j \u2208Mi for the search window Mi, such that the center point i within Mi becomes a positive sample while the other points are negative samples. With this loss function, the derivatives \u2202L/\u2202Ds and \u2202L/\u2202Dt(T) of the loss function L with respect to the features Ds and Dt(T) can be back-propagated into the feature extraction networks F(\u00b7|WF ). Explicit feature learning in this manner with the classi\ufb01cation loss has been shown to be reliable [20]. Likewise, the derivatives \u2202L/\u2202Dt(T) and \u2202Dt(T)/\u2202T of the loss function L with respect to geometric \ufb01elds T can be back-propagated into the geometric matching networks F(\u00b7|WG) to learn these networks without ground truth T\u2217. It should be noted that our loss function is conceptually similar to [31] in that it is formulated with source and target features in a weakly-supervised manner. While [31] utilizes only positive samples in learning feature extraction networks, our method considers both positive and negative samples to enhance network training. 5 Experimental Results and Discussion 5.1 Experimental Settings In the following, we comprehensively evaluated our RTNs through comparisons to state-of-the-art methods for semantic correspondence, including SF [25], DSP [17], Zhou et al. [45], Taniai et al. [36], 6 \f(a) (b) (c) (d) (e) (f) Figure 7: Qualitative results on the TSS benchmark [36]: (a) source image, (b) target image, (c) DCTM [18], (d) SCNet [9], (e) GMat. w/Inl. [31], and (f) RTNs. The source images are warped to the target images using correspondences. PF [7], SCNet [9], DCTM [18], geometric matching (GMat.) [30], and GMat. w/Inl. [31], as well as employing the SIFT \ufb02ow optimizer1 together with UCN-ST [5], FCSS [18], and CAT-FCSS [20]. Performance was measured on the TSS dataset [36], PF-WILLOW dataset [7], and PF-PASCAL dataset [8]. In Sec. 5.2, we \ufb01rst analyze the effects of the components within RTNs, and then evaluate matching results with various benchmarks and quantitative measures in Sec. 5.3. 5.2 Ablation Study 1 2 3 4 5 6 #Iteration 0.4 0.5 0.6 0.7 0.8 Average flow accuracy Figure 6: Convergence analysis of RTNs w/ResNet [12] for various numbers of iterations and search window sizes on the TSS benchmark [36]. To validate the components within RTNs, we evaluated the matching accuracy for different numbers of iterations, with various window sizes of Ni, for different backbone feature extraction networks such as VGGNet [35], CAT-FCSS [20], and ResNet [12], and with pretrained or learned backbone networks. For quantitative assessment, we examined the matching accuracy on the TSS benchmark [36], as described in the following section. As shown in Fig. 6, RTNs w/ResNet [12] converge in 3\u22125 iterations. 
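For completeness, the weakly-supervised loss of Eqs. (5)-(6) reduces, per pixel, to a standard cross-entropy over the candidate window, with the center candidate as the positive class and all others as negatives. The correlation layout below follows the previous sketch and is an assumption.

```python
# Sketch of the classification loss of Eqs. (5)-(6) over the window M_i.
import torch
import torch.nn.functional as F

def weak_classification_loss(corr):
    """corr: (B, K, H, W) similarities over the K = |M_i| candidates per pixel,
    with the center candidate stored at index K // 2."""
    B, K, H, W = corr.shape
    logits = corr.permute(0, 2, 3, 1).reshape(-1, K)      # one K-way problem per pixel
    target = torch.full((logits.shape[0],), K // 2, dtype=torch.long, device=corr.device)
    return F.cross_entropy(logits, target)
```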
By enlarging the window size of Ni, the matching accuracy improves until 9\u00d79 with longer training and testing times, but larger window sizes reduce matching accuracy due to greater matching ambiguity. Note that Mi = Ni. Table 1 shows that among the many state-of-the-art feature extraction networks, ResNet [12] exhibits the best performance for our approach. As shown in comparisons between pretrained and learned backbone networks, learning the feature extraction networks jointly with geometric matching networks can boost matching accuracy, as similarly seen in [31]. 5.3 Matching Results TSS Benchmark We evaluated RTNs on the TSS benchmark [36], which consists of 400 image pairs divided into three groups: FG3DCar [24], JODS [33], and PASCAL [10]. As in [18; 19], \ufb02ow accuracy was measured by computing the proportion of foreground pixels with an absolute \ufb02ow endpoint error that is smaller than a threshold of T = 5, after resizing each image so that its larger dimension is 100 pixels. Table 1 compares the matching accuracy of RTNs to state-ofthe-art correspondence techniques, and Fig. 7 shows qualitative results. Compared to handcrafted methods [25; 17; 36; 7], most CNN-based methods have better performance. In particular, methods that use STN-based feature transformations, namely UCN-ST [5] and CAT-FCSS [20], show improved ability to deal with geometric variations. In comparison to the geometric matching-based methods GMat. [30] and GMat. w/Inl. [30], RTNs consisting of feature extraction with ResNet and recurrent 1For these experiments, we utilized the hierarchical dual-layer belief propagation of SIFT \ufb02ow [25] together with alternative dense descriptors. 7 \fMethods Feature Regular. Superv. FG3D. JODS PASC. Avg. SF [25] SIFT SF 0.632 0.509 0.360 0.500 DSP [17] SIFT DSP 0.487 0.465 0.382 0.445 Taniai et al. [36] HOG TSS 0.830 0.595 0.483 0.636 PF [7] HOG LOM 0.786 0.653 0.531 0.657 DCTM [18] CAT-FCSS\u2020 DCTM 0.891 0.721 0.610 0.740 UCN-ST [5] UCN-ST SF Sup. 0.853 0.672 0.511 0.679 FCSS [18; 20] FCSS SF Weak. 0.832 0.662 0.512 0.668 CAT-FCSS SF Weak. 0.858 0.680 0.522 0.687 SCNet [9] VGGNet AG Sup. 0.764 0.600 0.463 0.609 VGGNet AG+ Sup. 0.776 0.608 0.474 0.619 GMat. [30] VGGNet GMat. Self. 0.835 0.656 0.527 0.673 ResNet GMat. Self. 0.886 0.758 0.560 0.735 GMat. w/Inl. [31] ResNet GMat. Weak. 0.892 0.758 0.562 0.737 RTNs VGGNet\u2020 R-GMat. Weak. 0.875 0.736 0.586 0.732 RTNs VGGNet R-GMat. Weak. 0.893 0.762 0.591 0.749 RTNs CAT-FCSS R-GMat. Weak. 0.889 0.775 0.611 0.758 RTNs ResNet R-GMat. Weak. 0.901 0.782 0.633 0.772 Table 1: Matching accuracy compared to state-of-the-art correspondence techniques (with feature, regularization, and supervision) on the TSS benchmark [36]. \u2020 denotes a pre-trained feature. (a) (b) (c) (d) (e) (f) Figure 8: Qualitative results on the PF-WILLOW benchmark [7]: (a) source image, (b) target image, (c) UCN-ST [5], (d) SCNet [9], (e) GMat. w/Inl. [31], and (f) RTNs. The source images are warped to the target images using correspondences. geometric matching modules provide higher performance. RTNs additionally outperform existing CNN-based methods trained with supervision of \ufb02ow \ufb01elds. It should be noted that GMat. w/Inl. [31] was learned with the initial network parameters set through self-supervised learning as in [30]. RTNs instead start from fully-randomized parameters in geometric matching networks. 
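The TSS flow-accuracy numbers in Table 1 follow the protocol described above; a minimal sketch is given below, assuming the flows and the foreground mask have already been resampled to the evaluation resolution (larger image dimension equal to 100 pixels).

```python
# Sketch of the TSS flow-accuracy metric: fraction of foreground pixels whose
# flow endpoint error is below T = 5 at the evaluation resolution.
import numpy as np

def tss_flow_accuracy(flow_pred, flow_gt, fg_mask, thresh=5.0):
    """flow_pred, flow_gt: (H, W, 2) flows at the evaluation resolution;
    fg_mask: (H, W) boolean foreground mask at the same resolution."""
    epe = np.linalg.norm(flow_pred - flow_gt, axis=2)
    return float((epe[fg_mask] < thresh).mean())
```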
PF-WILLOW Benchmark We also evaluated our method on the PF-WILLOW benchmark [7], which includes 10 object sub-classes with 10 keypoint annotations for each image. For the evaluation metric, we use the probability of correct keypoint (PCK) between \ufb02ow-warped keypoints and the ground truth [26; 7] as in the experiments of [18; 9; 20]. Table 2 compares the PCK values of RTNs to state-of-the-art correspondence techniques, and Fig. 8 shows qualitative results. Our RTNs exhibit performance competitive to the state-of-the-art correspondence techniques including the latest weakly-supervised and even supervised methods for semantic correspondence. Since RTNs estimate locally-varying geometric \ufb01elds, it provides more precise localization ability, as shown in the results of \u03b1 = 0.05, in comparison to existing geometric matching networks [30; 31] which estimate globally-varying geometric \ufb01elds only. PF-PASCAL Benchmark Lastly, we evaluated our method on the PF-PASCAL benchmark [8], which contains 1,351 image pairs over 20 object categories with PASCAL keypoint annotations [2]. Following the split in [9; 31], we used 700 training pairs, 300 validation pairs, and 300 testing pairs. For the evaluation metric, we use the PCK between \ufb02ow-warped keypoints and the ground 8 \fMethods PF-WILLOW [7] PF-PASCAL [8] \u03b1 = 0.05 \u03b1 = 0.1 \u03b1 = 0.15 \u03b1 = 0.05 \u03b1 = 0.1 \u03b1 = 0.15 PF [7] 0.284 0.568 0.682 0.314 0.625 0.795 DCTM [18] 0.381 0.610 0.721 0.342 0.696 0.802 UCN-ST [5] 0.241 0.540 0.665 0.299 0.556 0.740 CAT-FCSS [20] 0.362 0.546 0.692 0.336 0.689 0.792 SCNet [9] 0.386 0.704 0.853 0.362 0.722 0.820 GMat. [30] 0.369 0.692 0.778 0.410 0.695 0.804 GMat. w/Inl. [31] 0.370 0.702 0.799 0.490 0.748 0.840 RTNs w/VGGNet 0.402 0.707 0.842 0.506 0.743 0.836 RTNs w/ResNet 0.413 0.719 0.862 0.552 0.759 0.852 Table 2: Matching accuracy compared to state-of-the-art correspondence techniques on the PFWILLOW benchmark [7] and PF-PASCAL benchmark [8]. (a) (b) (c) (d) (e) (f) Figure 9: Qualitative results on the PF-PASCAL benchmark [8]: (a) source image, (b) target image, (c) CAT-FCSS w/SF [20], (d) SCNet [9], (e) GMat. w/Inl. [31], and (f) RTNs. The source images are warped to the target images using correspondences. truth as done in the experiments of [9]. Table 2 summarizes the PCK values, and Fig. 9 shows qualitative results. Similar to the experiments on the PF-WILLOW benchmark [7], CNN-based methods [9; 30; 31] including our RTNs yield better performance, with RTNs providing the highest matching accuracy. 6" + }, + { + "url": "http://arxiv.org/abs/1707.05471v1", + "title": "DCTM: Discrete-Continuous Transformation Matching for Semantic Flow", + "abstract": "Techniques for dense semantic correspondence have provided limited ability to\ndeal with the geometric variations that commonly exist between semantically\nsimilar images. While variations due to scale and rotation have been examined,\nthere lack practical solutions for more complex deformations such as affine\ntransformations because of the tremendous size of the associated solution\nspace. To address this problem, we present a discrete-continuous transformation\nmatching (DCTM) framework where dense affine transformation fields are inferred\nthrough a discrete label optimization in which the labels are iteratively\nupdated via continuous regularization. 
In this way, our approach draws\nsolutions from the continuous space of affine transformations in a manner that\ncan be computed efficiently through constant-time edge-aware filtering and a\nproposed affine-varying CNN-based descriptor. Experimental results show that\nthis model outperforms the state-of-the-art methods for dense semantic\ncorrespondence on various benchmarks.", + "authors": "Seungryong Kim, Dongbo Min, Stephen Lin, Kwanghoon Sohn", + "published": "2017-07-18", + "updated": "2017-07-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Establishing dense correspondences across semantically similar images is essential for numerous tasks such as nonparametric scene parsing, scene recognition, image registration, semantic segmentation, and image editing [15, 33, 32]. Unlike traditional dense correspondence for estimating depth [46] or optical \ufb02ow [9, 51], semantic correspondence estimation poses additional challenges due to intra-class appearance and shape variations among object instances, which can degrade matching by conventional approaches [33, 59]. Recently, several methods have attempted to deal with the appearance differences using convolutional neural network (CNN) based descriptors because of their high invariance to appearance variations [34, 11, 61, 24]. However, geometric variations are considered in just a limited manner through constraint settings such as those used for depth or optical \ufb02ow. Some methods solve for geometric variations such as scale or rotation [18, 41, 21], but they consider only a discrete set of scales or rotations as possible solutions, and (a) (b) (c) (d) (e) (f) (g) (h) Figure 1. Visualization of our DCTM results: (a) source image, (b) target image, (c), (d) ground truth correspondences, (e), (f), (g), (h) warped images and correspondences after discrete and continuous optimization, respectively. For images undergoing non-rigid deformations, our DCTM estimates reliable correspondences by iteratively optimizing the label space via continuous regularization. do not capture the non-rigid geometric deformations that commonly exist between semantically similar images. It has been shown that these non-rigid image deformations can be locally well approximated by af\ufb01ne transformations [45, 30, 29]. To estimate dense af\ufb01ne transformation \ufb01elds, a possible approach is to discretize the space of af\ufb01ne transformations and \ufb01nd a labeling solution. However, the higher-dimensional search space for af\ufb01ne transformations makes discrete global optimization algorithms such as graph cut [6] and belief propagation [48, 52] computationally infeasible. For more ef\ufb01cient optimization over large label spaces, the PatchMatch Filter (PMF) [37] integrates constant-time edge-aware \ufb01ltering (EAF) [43, 36] with PatchMatch-based randomized search [2]. PMF is leveraged for dense semantic correspondence in DAISY Filter Flow (DFF) [59], which \ufb01nds labels for displacement \ufb01elds as well as for scale and rotation. Extending DFF to af\ufb01ne transformations would be challenging though. One reason is that its ef\ufb01cient technique for computing DAISY features [54] at pre-determined scales and rotations cannot be applied for af\ufb01ne transformations. 
Another reason is that, as shown in [27, 21], the weak implicit smoothing embedded in PMF makes it more susceptible to erroneous local minima, and this problem may be magni\ufb01ed in the higher1 arXiv:1707.05471v1 [cs.CV] 18 Jul 2017 \fdimensional af\ufb01ne transformation space. Explicit smoothing models have been adopted to alleviate this problem in the context of stereo matching [28, 3], but were designed speci\ufb01cally for depth regularization. In this paper, we introduce an effective method for estimating dense af\ufb01ne transformation \ufb01elds between semantically similar images, as shown in Fig. 1. The key idea is to couple a discrete local labeling optimization with a continuous global regularization that updates the discrete candidate labels. An af\ufb01ne transformation \ufb01eld is ef\ufb01ciently inferred in a \ufb01lter-based discrete labeling scheme inspired by PMF, and then the discrete af\ufb01ne transformation \ufb01eld is globally regularized in a moving least squares (MLS) manner [45]. These two steps are iterated in alternation until convergence. Through the synergy of the discrete local labeling and continuous global regularization, our method yields continuous solutions from the space of af\ufb01ne transformations, rather than selecting from a pre-de\ufb01ned, \ufb01nite set of discrete samples. We show that this continuous regularization additionally overcomes the aforementioned implicit smoothness problem in PMF. Moreover, we model the effects of af\ufb01ne transformations directly within the state-of-the-art fully convolutional selfsimilarity (FCSS) descriptor [24], which leads to signi\ufb01cant improvements in processing speed over computing descriptors on various af\ufb01ne transformations of the image. Experimental results show that the presented model outperforms the latest methods for dense semantic correspondence on several benchmarks, including that of Taniai et al. [53], Proposal Flow [16], and PASCAL [10]. 2. Related Work Dense Semantic Flow Most conventional techniques for dense semantic correspondence have employed handcrafted features such as SIFT [35] or DAISY [54]. To improve matching quality, they have focused on optimization. Liu et al. [33] pioneered the idea of dense correspondence across different scenes, and proposed SIFT Flow which is based on hierarchical dual-layer belief propagation. Inspired by this, Kim et al. [23] proposed the deformable spatial pyramid (DSP) which performs multi-scale regularization with a hierarchical graph. Among other methods are those that take an exemplar-LDA approach [7], employ joint image set alignment [62], or jointly solve for cosegmentation [53]. Recently, CNN-based descriptors have been used to establish dense semantic correspondences. Zhou et al. [61] proposed a deep network that exploits cycle-consistency with a 3D CAD model [40] as a supervisory signal. Choy et al. [11] proposed the universal correspondence network (UCN) based on fully convolutional feature learning. Most recently, Kim et al. [24] proposed the FCSS descriptor that formulates local self-similarity (LSS) [47] within a fully convolutional network. Because of its LSS-based structure, FCSS is inherently insensitive to intra-class appearance variations while maintaining precise localization ability. However, none of these methods is able to handle nonrigid geometric variations. 
Several methods aim to alleviate geometric variations through extensions of SIFT Flow, including scale-less SIFT Flow (SLS) [18], scale-space SIFT Flow (SSF) [41], and generalized DSP (GDSP) [21]. However, these techniques have a critical practical limitation that their computation increases linearly with the search space size. A generalized PatchMatch algorithm [2] was proposed for ef\ufb01cient matching that leverages a randomized search scheme. This was utilized by HaCohen et al. [15] in a non-rigid dense correspondence (NRDC) algorithm, but employs weak matching evidence that cannot guarantee reliable performance. Geometric invariance to scale and rotation is provided by DFF [59], but its implicit smoothing model which relies on randomized sampling and propagation of good estimates in the direct neighborhood often induces mismatches. A segmentation-aware approach [56] was proposed to provide geometric robustness for descriptors, e.g., SIFT [35], but can have a negative effect on the discriminative power of the descriptor. Recently, Ham et al. [16] presented the Proposal Flow (PF) algorithm to estimate correspondences using object proposals. While these aforementioned techniques provide some amount of geometric invariance, none of them can deal with af\ufb01ne transformations across images, which are a frequent occurrence in dense semantic correspondence. Image Manipulation A possible approach for estimating dense af\ufb01ne transformation \ufb01elds is to interpolate sparsely matched points using a method, including thin plate splines (TPS) [4], motion coherence [60], coherence point drift [39], or smoothly varying af\ufb01ne stitching [30]. MLS is also a scattered point interpolation technique \ufb01rst introduced in [26] to reconstruct a continuous function from a set of point samples by incorporating spatially-weighted least squares. MLS has been successfully used in applications such as image deformation [45], surface reconstruction [13], image super-resolution and denoising [5], and color transfer [22]. Inspired by the MLS concept, our method utilizes it to regularize estimated af\ufb01ne transformation \ufb01elds, but with a different weight function and an ef\ufb01cient computational scheme. More related to our work is the method of Lin et al. [29], which jointly estimates correspondence and relative patch orientation for descriptors. However, it is formulated with pre-computed sparse correspondences and also requires considerable computation to solve a complex nonlinear optimization. By contrast, our method adopts dense descriptors that can be evaluated ef\ufb01ciently for any af\ufb01ne transformation, and employs quadratic continuous optimization to rapidly infer dense af\ufb01ne transformation \ufb01elds. \f3. Method 3.1. Problem Formulation and Model Given a pair of images I and I\u2032, the objective of dense correspondence estimation is to establish a correspondence i\u2032 for each pixel i = [ix, iy]. Unlike conventional dense correspondence settings for estimating depth [46], optical \ufb02ow [9, 51], or similarity transformations [59, 21], our objective is to infer a \ufb01eld of af\ufb01ne transformations, each represented by a 2 \u00d7 3 matrix Ti = \u0014 Ti,x Ti,y \u0015 (1) that maps pixel i to i\u2032 = Tii, where i is pixel i represented in homogeneous coordinates such that i = [i, 1]T . In this work, we solve for af\ufb01ne transformations that may lie anywhere in the continuous solution space. 
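As a concrete illustration of the formulation just given, the sketch below (not from the paper) applies a dense field of 2x3 affine matrices to pixel coordinates in homogeneous form, i' = T_i [i, 1]^T. The array layout `(H, W, 2, 3)` and the function name `warp_coords` are assumptions made for illustration only.

```python
import numpy as np

def warp_coords(affine_field):
    """Map every pixel i = (x, y) to i' = T_i @ [x, y, 1]^T.

    affine_field: (H, W, 2, 3) array, one 2x3 affine matrix T_i per pixel.
    Returns an (H, W, 2) array of transformed coordinates.
    """
    h, w = affine_field.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    homog = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)  # (H, W, 3)
    # Per-pixel matrix-vector product: (2, 3) @ (3,) at every pixel.
    return np.einsum('hwij,hwj->hwi', affine_field, homog)

# A pure translation field u = (3, -2) reduces T_i to [I, u], i.e. ordinary flow.
field = np.zeros((4, 5, 2, 3))
field[..., 0, 0] = field[..., 1, 1] = 1.0
field[..., 0, 2], field[..., 1, 2] = 3.0, -2.0
print(warp_coords(field)[0, 0])  # -> [ 3. -2.]
```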
This is made possible by formulating the inference of dense affine transformation fields as a discrete optimization problem with continuous regularization. This optimization seeks to minimize an energy of the form

$E(\mathbf{T}) = E_{\mathrm{data}}(\mathbf{T}) + \lambda E_{\mathrm{smooth}}(\mathbf{T}), \quad (2)$

consisting of a data term that accounts for matching evidence between descriptors and a smoothness term that favors similar affine transformations among adjacent pixels with a balancing parameter $\lambda$. Our data term is defined as follows:

$E_{\mathrm{data}}(\mathbf{T}) = \sum_i \sum_{j \in \mathcal{N}_i} \omega^I_{ij} \min(\|D_j - D'_{j'}(\mathbf{T}_i)\|_1, \tau). \quad (3)$

It is designed to estimate the affine transformation $\mathbf{T}_i$ by aggregating the matching costs of descriptors between neighboring pixels $j$ and transformed pixels $j' = \mathbf{T}_i \mathbf{j}$ within a local aggregation window $\mathcal{N}_i$. A truncation threshold $\tau$ is used to deal with outliers and occlusions. It should be noted that aggregated data terms have been popularly used in stereo [46] and optical flow [27]. For dense semantic correspondence, several methods have employed aggregated data terms; however, they often produce undesirable results across object boundaries due to uniform weights that ignore image structure [23, 21], or fail to deal with geometric distortions like affine transformations as they rely on a regular grid structure for local aggregation windows [59]. By contrast, the proposed method adaptively aggregates matching costs using edge-preserving bilateral weights $\omega^I_{ij}$ as in [55, 19] on a geometrically-variant grid structure in order to produce spatially smooth yet discontinuity-preserving labeling results even under affine transformations. Our smoothness term is defined as follows to regularize affine transformation fields $\mathbf{T}_i$ within a local neighborhood:

$E_{\mathrm{smooth}}(\mathbf{T}) = \sum_i \sum_{j \in \mathcal{M}_i} \upsilon^I_{ij} \|\mathbf{T}_i \mathbf{j} - \mathbf{T}_j \mathbf{j}\|^2. \quad (4)$

Figure 2. Illustration of (a) the FCSS descriptor [24] and (b) the affine-FCSS descriptor. Within a support window, sampling patterns $\mathbf{W}^l_s$ and $\mathbf{W}^l_t$ are transformed according to affine fields $\mathbf{T}_i$.

When the affine transformation $\mathbf{T}$ is constrained to $[\mathbf{I}_{2\times2}, \mathbf{u}]$ with $\mathbf{u} = [u_x, u_y]^T$ and $\mathcal{M}_i$ is the 4-neighborhood, this smoothness term becomes the first order derivative of the optical flow vector as in many conventional methods [33, 38]. However, non-rigid deformations occur with high frequency in semantic correspondence, and such a basic constraint is inadequate for modeling the smoothness of affine transformation fields. Our smoothness term is formulated to address this by regularizing estimated affine transformations $\mathbf{T}_i$ in a moving least squares manner [45] within local neighborhood $\mathcal{M}_i$. We define the smoothness constraint of affine transformation fields by fitting $\mathbf{T}_i$ based on the affine flow fields of neighboring pixels $\mathbf{T}_j \mathbf{j}$. Unlike conventional moving least square solvers [45], our smoothness term incorporates edge-preserving bilateral weights $\upsilon^I_{ij}$ as in [55, 19] for image structure-aware regularization. Minimizing the energy in (2) is a non-convex optimization problem defined over an infinite continuous solution space. A similar issue exists for optical flow estimation [8, 58, 42].
To minimize the non-convex energy function, several techniques such as a hybrid method with descriptor matching [8, 42] and a coarse-to-\ufb01ne scheme [58] have been used, but they are tailored to optical \ufb02ow estimation and have exhibited limited performance. We instead use a penalty decomposition scheme to alternately solve for the discrete and continuous af\ufb01ne transformation \ufb01elds. An ef\ufb01cient \ufb01lter-based discrete optimization technique is used to locally estimate discrete af\ufb01ne transformations in a manner similar to PMF [37]. The weakness of the implicit smoothing in the discrete local optimization is overcome by regularizing the af\ufb01ne transformation \ufb01elds through global optimization in the continuous space. This alternating optimization is repeated until convergence. Furthermore, to acquire matching evidence for semantic correspondence under spatially-varying af\ufb01ne \ufb01elds, we extend the FCSS descriptor [24] to model af\ufb01ne variations. 3.2. Af\ufb01ne-FCSS Descriptor To estimate a matching cost, a dense descriptor Di is extracted over the local support window of each image point Ii. For this we employ the state-of-the-art FCSS descriptor \f[24] for dense semantic correspondence, which formulates LSS [47] within a fully convolutional network in a manner where the patch sampling patterns and self-similarity measure are both learned. Formally, FCSS can be described as a vector of feature values Di = S lDl i for l \u2208{1, ..., L} with the maximum number of sampling patterns L, where the feature values are computed as Dl i = exp(\u2212S(i \u2212Wl s, i \u2212Wl t)/W\u03c3). (5) S(\u00b7, \u00b7) represents the self-similarity between two convolutional activations taken from a sampling pattern around center pixel i, and can be expressed as S(i\u2212Wl s, i\u2212Wl t) = \u2225F(Ai; Wl s)\u2212F(Ai; Wl t)\u22252, (6) where F(Ai; Wl s) = Ai\u2212Wl s and F(Ai; Wl t) = Ai\u2212Wl t, Wl s = [W l s,x, W l s,y] and Wl t = [W l t,x, W l t,y] compose the l-th learned sampling pattern, and Ai is the convolutional activation through feed-forward process F(Ii; Wc) for Ii with network weights Wc. The network parameters Wc, Ws, Wt, and W\u03c3 are learned in an end-to-end manner to provide optimal correspondence performance. The FCSS descriptor provides high invariance to appearance variations, but it inherently cannot deal with geometric variations due to its pre-de\ufb01ned sampling patterns for all pixels in an image. Furthermore, although its computation is ef\ufb01cient, FCSS cannot in practice be evaluated exhaustively over all the af\ufb01ne candidates during optimization. To alleviate these limitations, we extend the FCSS descriptor to adapt to af\ufb01ne transformation \ufb01elds. This is accomplished by reformulating the sampling patterns so that they account for the af\ufb01ne transformations. To expedite this computation, we \ufb01rst compute Ai over the entire image domain by passing it through the network. An FCSS descriptor Di(Ti) transformed under an af\ufb01ne \ufb01eld Ti can then be built by computing self-similarity on transformed sampling patterns \u2225F(Ai; Ti[Wl s, 0]T ) \u2212F(Ai; Ti[Wl t, 0]T )\u22252. (7) With this approach, repeated computation of convolutional activations over different af\ufb01ne transformations of the image is avoided. The af\ufb01ne transformation is ef\ufb01ciently inferred in a discrete optimization described in the following section. 
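The following is a minimal sketch, under assumed array shapes, of how a descriptor in the spirit of affine-FCSS can be evaluated: activations are computed once for the whole image, and for a given pixel the self-similarity is measured between activations sampled at affine-transformed offsets $\mathbf{T}_i[\mathbf{W}^l_s, 0]^T$ and $\mathbf{T}_i[\mathbf{W}^l_t, 0]^T$, followed by the exponential gating of Eq. (5). The function name `affine_fcss_at` and the nearest-neighbour sampling with border clamping are illustrative choices, not the authors' implementation.

```python
import numpy as np

def affine_fcss_at(acts, i, T_i, offsets_s, offsets_t, sigma=1.0):
    """Sketch of an affine-FCSS feature at pixel i = (x, y).

    acts:         (H, W, C) convolutional activation map A (computed once per image).
    T_i:          (2, 3) affine matrix for pixel i.
    offsets_s/_t: (L, 2) learned sampling offsets W_s^l, W_t^l.
    Offsets are transformed by the linear 2x2 part of T_i (zero homogeneous term),
    then activations are read with nearest-neighbour sampling and clamping.
    """
    H, W, _ = acts.shape
    A = T_i[:, :2]                                   # linear part; translation is dropped for [offset, 0]^T

    def sample(offsets):
        pts = np.array(i)[None, :] - offsets @ A.T   # i - T_i [W^l, 0]^T
        xs = np.clip(np.round(pts[:, 0]).astype(int), 0, W - 1)
        ys = np.clip(np.round(pts[:, 1]).astype(int), 0, H - 1)
        return acts[ys, xs]                          # (L, C)

    dist = np.linalg.norm(sample(offsets_s) - sample(offsets_t), axis=1)  # L self-similarity distances
    return np.exp(-dist / sigma)                     # exponential gating as in Eq. (5)
```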
Differences between the FCSS descriptor and the affine-FCSS descriptor are illustrated in Fig. 2.

3.3. Solution
Since affine transformation fields are defined in an infinite label space, minimizing our energy function $E(\mathbf{T})$ directly is infeasible. Through fine-scale discretization of this space, affine transformation fields could be estimated through discrete global optimization, but at a tremendous computational cost. To address this issue, we introduce an auxiliary affine field $\mathbf{L}$ to decouple our data and regularization terms, and approximate the original minimization problem as the following auxiliary energy formulation:

$E_{\mathrm{aux}}(\mathbf{T}, \mathbf{L}) = \sum_i \sum_{j \in \mathcal{N}_i} \omega^I_{ij} \min(\|D_j - D'_{j'}(\mathbf{T}_i)\|_1, \tau) + \mu \sum_i \|\mathbf{L}_i - \mathbf{T}_i\|^2 + \lambda \sum_i \sum_{j \in \mathcal{M}_i} \upsilon^I_{ij} \|\mathbf{L}_i \mathbf{j} - \mathbf{T}_j \mathbf{j}\|^2. \quad (8)$

Figure 3. Our DCTM method consists of discrete optimization and continuous optimization. Our DCTM method differs from the conventional PMF [37] by alternately optimizing the discrete label space and performing the continuous regularization.

Since this energy function is based on two affine transformations, $\mathbf{L}$ and $\mathbf{T}$, we employ alternating minimization to solve for them and boost matching performance in a synergistic manner. We split the optimization of $E_{\mathrm{aux}}(\mathbf{L}, \mathbf{T})$ into two sub-problems, namely a discrete local optimization problem with respect to $\mathbf{T}$ and a continuous global optimization problem with respect to $\mathbf{L}$. Increasing $\mu$ through the iterations drives the affine fields $\mathbf{T}$ and $\mathbf{L}$ together and eventually results in $\lim_{\mu \to \infty} E_{\mathrm{aux}} \approx E$.

Discrete Optimization To infer the discrete affine transformation field $\mathbf{T}^t$ with $\mathbf{L}^{t-1}$ being fixed at the $t$-th iteration, we first discretize the continuous parameter space and then solve the problem through filter-based label inference. For discrete affine transformation candidates $\mathbf{T} \in \mathcal{L}$, the matching cost between FCSS descriptors $D_j$ and $D'_{j'}(\mathbf{T})$ is first measured as

$C_j(\mathbf{T}) = \min(\|D_j - D'_{j'}(\mathbf{T})\|_1, \tau), \quad (9)$

where $D'_{j'}(\mathbf{T})$ is the affine-FCSS descriptor with respect to $\mathbf{T}$. This yields an affine-invariant matching cost. Furthermore, since $j'$ varies according to affine fields such that $j' = \mathbf{T}\mathbf{j}$, affine-varying regular grids can be used when aggregating matching costs, thus enabling affine-invariant cost aggregation.

Figure 4. DCTM convergence: (a) source image; (b) target image; iterative evolution of warped images (c), (e), (g) after discrete optimization and (d), (f), (h) after continuous optimization. Our DCTM optimizes the label space with continuous regularization during the iterations, which facilitates convergence and boosts matching performance.

To aggregate the raw matching costs, we apply EAF on $C_i(\mathbf{T})$ such that

$\bar{C}_i(\mathbf{T}) = \sum_{j \in \mathcal{N}_i} \omega^I_{ij} C_j(\mathbf{T}), \quad (10)$

where $\omega^I_{ij}$ is the normalized adaptive weight of a support pixel $j$, which can be defined in various ways with respect to the structures of the image $I$ [55, 14, 19].
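For illustration, here is a small sketch of edge-aware cost aggregation in the spirit of Eq. (10), using an explicit bilateral kernel as a stand-in for the constant-time edge-aware filters cited above (guided filtering would be used in practice). The function name `aggregate_cost`, the grayscale guide, and the wrap-around border handling are assumptions of this sketch.

```python
import numpy as np

def aggregate_cost(cost, guide, radius=4, sigma_s=3.0, sigma_r=0.1):
    """Edge-aware aggregation of a raw cost map for one affine candidate T.

    cost:  (H, W) raw matching costs C_j(T).
    guide: (H, W) grayscale guidance image I in [0, 1].
    Weights combine spatial and range (intensity) similarity, i.e. a simple
    bilateral kernel; borders wrap around for brevity.
    """
    out = np.zeros_like(cost, dtype=np.float64)
    norm = np.zeros_like(cost, dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_c = np.roll(cost, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(guide, (dy, dx), axis=(0, 1))
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                       - (guide - shifted_g) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted_c
            norm += w
    return out / norm
```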
In determining the affine field $\mathbf{T}$, the matching costs are also augmented by the previously estimated affine transformation field $\mathbf{L}^{t-1}_i$ such that

$G_i(\mathbf{T}) = \mu \|\mathbf{T} - \mathbf{L}^{t-1}_i\|^2 + \lambda \sum_{j \in \mathcal{M}_i} \upsilon^I_{ij} \|\mathbf{T}\mathbf{j} - \mathbf{L}^{t-1}_i \mathbf{j}\|^2. \quad (11)$

Since $\|\mathbf{T}\mathbf{j} - \mathbf{L}^{t-1}_i \mathbf{j}\|^2 = \|(\mathbf{T} - \mathbf{L}^{t-1}_i)\mathbf{j}\|^2$ and $\mathbf{T} - \mathbf{L}^{t-1}_i$ is independent of pixel $j$ within the support window, $G_i(\mathbf{T})$ can be efficiently computed by using constant-time EAF, as described in detail in the supplementary material. The resultant label at the $t$-th iteration is determined with a winner-takes-all (WTA) scheme:

$\mathbf{T}^t_i = \operatorname{argmin}_{\mathbf{T} \in \mathcal{L}} \{\bar{C}_i(\mathbf{T}) + G_i(\mathbf{T})\}. \quad (12)$

Continuous Optimization To solve the continuous affine transformation field $\mathbf{L}^t$ with $\mathbf{T}^t$ being fixed, we formulate the problem as an image warping minimization:

$\sum_i \Big( \mu \|\mathbf{L}_i - \mathbf{T}^t_i\|^2 + \lambda \sum_{j \in \mathcal{M}_i} \upsilon^I_{ij} \|\mathbf{L}_i\mathbf{j} - \mathbf{T}^t_j\mathbf{j}\|^2 \Big). \quad (13)$

Since this involves solving spatially-varying weighted least squares at each pixel $i$, the computational burden inevitably increases when considering non-local neighborhoods $\mathcal{M}_i$. To expedite this, existing MLS solvers adopted grid-based sampling [45] at the cost of quantization errors or parallel processing [22] with additional hardware. In contrast, our method optimizes the objective with a sparse matrix solver, yielding a substantial runtime gain. Since the $\mathbf{L}_i\mathbf{j}$ term can be formulated in the x- and y-directions separately, $[\mathbf{L}_{i,x}\mathbf{j}, \mathbf{L}_{i,y}\mathbf{j}]^T$, we decompose the objective into two separable energy functions.

Algorithm 1: DCTM Framework
Input: images I, I', FCSS network parameter W
Output: dense affine transformation fields T
Parameters: number of segments K, pyramid levels F
/* Initialization */
1: Partition I into a set of K disjoint segments {S_k}
2: Initialize affine fields as T_i = [I_{2x2}, 0_{2x1}]
for f = 1 : F do
  3: Build convolutional activations A^f, A'^f for I^f, I'^f
  4: Initialize affine fields T^f_i = L^{f-1}_i when f > 2
  while not converged do
    /* Discrete Optimization */
    5: Initialize affine fields T^t_i = L^{t-1}_i
    for k = 1 : K do
      /* Propagation */
      6: For S_k, construct affine candidates T in L_p from neighboring segments
      7: Build cost volumes C̄_i(T) and G_i(T)
      8: Determine T^t_i using (12)
      /* Random Search */
      9: Construct affine candidates T in L_r from randomly sampled affine fields
      10: Determine T^t_i by Steps 7-8
    end for
    /* Continuous Optimization */
    11: Estimate affine fields L^t_i from T^t_i using (15)
  end while
end for

For the x-direction, the energy function can be represented as

$\sum_i \Big( \mu \|\mathbf{L}_{i,x} - \mathbf{T}^t_{i,x}\|^2 + \lambda \sum_{j \in \mathcal{M}_i} \upsilon^I_{ij} \|\mathbf{L}_{i,x}\mathbf{j} - \mathbf{T}^t_{j,x}\mathbf{j}\|^2 \Big). \quad (14)$

By setting the gradient of this objective with respect to $\mathbf{L}_{i,x}$ to zero, the minimizer $\mathbf{L}^t_{i,x}$ is obtained by solving a linear system based on a large sparse matrix:

$(\mu/\lambda\,\mathbf{I} + \mathbf{U})\mathbf{L}^t_x = (\mu/\lambda\,\mathbf{I} + \mathbf{K})\mathbf{T}^t_x, \quad (15)$

where $\mathbf{I}$ denotes a $3N \times 3N$ identity matrix with $N$ denoting the number of pixels in image $I$. $\mathbf{L}^t_x$ and $\mathbf{T}^t_x$ denote $3N \times 1$ column vectors containing $\mathbf{L}^t_{i,x}$ and $\mathbf{T}^t_{i,x}$, respectively.
$\mathbf{U}$ and $\mathbf{K}$ denote matrices defined as

$\mathbf{U} = \begin{bmatrix} \psi(\mathbf{V}X^2) & \psi(\mathbf{V}XY) & \psi(\mathbf{V}X) \\ \psi(\mathbf{V}XY) & \psi(\mathbf{V}Y^2) & \psi(\mathbf{V}Y) \\ \psi(\mathbf{V}X) & \psi(\mathbf{V}Y) & \mathbf{I}_{N \times N} \end{bmatrix}, \quad (16)$

and

$\mathbf{K} = \begin{bmatrix} \mathbf{V}\psi(X) & 0 & 0 \\ 0 & \mathbf{V}\psi(Y) & 0 \\ 0 & 0 & \mathbf{V} \end{bmatrix}, \quad (17)$

where $\mathbf{V}$ is an $N \times N$ kernel matrix whose nonzero elements are given by the weights $\upsilon^I_{ij}$, $\psi(\cdot)$ denotes a diagonalization operator, and $X$ and $Y$ denote $N \times 1$ column vectors containing $i_x$ and $i_y$, respectively. $X^2 = X \circ X$, $Y^2 = Y \circ Y$, and $XY = X \circ Y$, where $\circ$ denotes the Hadamard product. Since $\upsilon^I_{ij}$ is a normalized bilateral weight, the matrices $\mathbf{U}$ and $\mathbf{K}$ can be efficiently computed using recent EAF algorithms [14, 19]. Furthermore, since $\mu/\lambda\,\mathbf{I} + \mathbf{U}$ is a block-diagonal matrix, $\mathbf{L}^t_x$ can be estimated efficiently using a fast sparse matrix solver [25]. After optimizing $\mathbf{L}^t_y$ in a similar manner, we then have the continuous affine fields $\mathbf{L}^t$.

Iterative Inference In our filter-based discrete optimization, exhaustively evaluating the raw and aggregated costs for every label L is still prohibitively time-consuming. Thus we utilize the PMF [37], which jointly leverages label cost filtering and fast randomized PatchMatch search in a high-dimensional label space. Our discrete optimization differs from the PMF by optimizing the discrete label space with continuous regularization during the iterations, which facilitates convergence and boosts matching performance. We first decompose an image I into a set of K disjoint segments I = {S_k, k = 1, ..., K} and build its set of spatially adjacent segment neighbors. Then for each segment S_k, two sets of label candidates from the propagation and random search steps are evaluated for each graph node in scan order. In the propagation step, for each segment S_k, a candidate pixel i is randomly sampled from each neighboring segment, and a set of current best labels L_p for i is defined by {T_i}. For these L_p, EAF-based cost aggregation is then performed for the segment S_k. In the random search step, a center-biased random search as done in PatchMatch [2] is performed for the current segment S_k, where a sequence of random labels L_r sampled around the current best label is evaluated. After an iteration of the propagation and random search steps for all segments, we apply continuous optimization as described in the preceding section to regularize the discrete affine transformation fields. After each iteration, we enlarge $\mu$ such that $\mu \leftarrow c\mu$ with a constant value $1 < c \leq 2$ to accelerate convergence.

Methods | FG3D | JODS | PASC. | Avg.
SIFT Flow [33] | 0.632 | 0.509 | 0.360 | 0.500
DSP [23] | 0.487 | 0.465 | 0.382 | 0.445
Zhou et al. [61] | 0.721 | 0.514 | 0.436 | 0.556
Taniai et al. [53] | 0.830 | 0.595 | 0.483 | 0.636
SF w/DAISY [54] | 0.636 | 0.373 | 0.338 | 0.449
SF w/VGG [49] | 0.756 | 0.490 | 0.360 | 0.535
SF w/FCSS [24] | 0.830 | 0.653 | 0.494 | 0.660
SLS [18] | 0.525 | 0.519 | 0.320 | 0.457
SSF [41] | 0.687 | 0.344 | 0.370 | 0.467
SegSIFT [56] | 0.612 | 0.421 | 0.331 | 0.457
Lin et al. [29] | 0.406 | 0.283 | 0.161 | 0.283
DFF [59] | 0.489 | 0.296 | 0.214 | 0.333
GDSP [21] | 0.639 | 0.374 | 0.368 | 0.459
Proposal Flow [16] | 0.786 | 0.653 | 0.531 | 0.657
DCTM w/DAISY | 0.710 | 0.506 | 0.482 | 0.566
DCTM w/VGG | 0.790 | 0.611 | 0.528 | 0.630
DCTM wo/Cont. | 0.850 | 0.637 | 0.559 | 0.682
DCTM wo/C2F | 0.859 | 0.684 | 0.550 | 0.698
DCTM | 0.891 | 0.721 | 0.610 | 0.740
Table 1. Matching accuracy compared to state-of-the-art correspondence techniques on the Taniai benchmark [53].

Fig. 3 summarizes our DCTM method, consisting of discrete and
continuous optimization, and Fig. 4 illustrates the convergence of our DCTM method. To boost matching performance and convergence of our algorithm, we apply our method in a coarse-to-\ufb01ne manner, where images If are constructed at F image pyramid levels f = {1, ..., F} and af\ufb01ne transform \ufb01elds Tf are predicted at level f. Coarser scale results are then used as initialization for the \ufb01ner levels. Algorithm 1 provides a summary of the overall procedure of our DCTM method. 4. Experimental Results 4.1. Experimental Settings For our experiments, we used the FCSS descriptor provided by authors, which is learned on Caltech-101 dataset [12]. For EAF for \u03c9I ij and \u03c5I ij, we utilized the guided \ufb01lter [20], where the radius and smoothness parameters are set to {16, 0.01}. The weights in energy function were initially set to {\u03bb, \u00b5} = {0.01, 0.1} by cross-validation, but \u00b5 increases as evolving iterations with c = 1.8. The SLIC [1] segment number K increases sublinearly with the image size, e.g., K = 500 for 640 \u00d7 480 images. The image pyramid level F is set to 3. We implemented our DCTM method in Matlab/C++ on Intel Core i7-3770 CPU at 3.40 GHz, and measured the runtime on a single CPU core. Our code will be made publicly available. In the following, we comprehensively evaluated our DCTM method through comparisons to the state-of-theart methods for dense semantic correspondences, including SIFT Flow [33], DSP [23], Zhou et al. [61], UCN [11], \f(a) (b) (c) (d) (e) (f) (g) (h) Figure 5. Qualitative results on the Taniai benchmark [53]: (a) source image, (b) target image, (c) Lin et al. [29], (d) DFF [59], (e) PF [16], (f) Taniai et al. [53], (g) SF w/FCSS [24], and (h) DCTM. The source images were warped to the target images using correspondences. Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 SIFT Flow Zhou et al. Taniai et al. SFw/FCSS SSF Lin et al. DFF GDSP PF DCTMw/Cont. DCTMw/C2F DCTM (a) FG3DCar Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 SIFT Flow Zhou et al. Taniai et al. SFw/FCSS SSF Lin et al. DFF GDSP PF DCTMw/Cont. DCTMw/C2F DCTM (b) JODS Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 SIFT Flow Zhou et al. Taniai et al. SFw/FCSS SSF Lin et al. DFF GDSP PF DCTMw/Cont. DCTMw/C2F DCTM (c) PASCAL Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 SIFT Flow Zhou et al. Taniai et al. SFw/FCSS SSF Lin et al. DFF GDSP PF DCTMw/Cont. DCTMw/C2F DCTM (d) Average Figure 6. Average \ufb02ow accuracy with respect to endpoint error threshold on the Taniai benchmark [53]. Taniai et al. [53], SIFT Flow optimization with VGG1 [49] and FCSS [24] descriptor. Furthermore geometric-invariant methods including SLS [18], SSF [41], SegSIFT [56], Lin et al. [29], DFF [59], GDSP [21], and PF [16] were evaluated. The performance was measured on Taniai benchmark [53], Proposal Flow dataset [16], and PASCAL-VOC 1In the \u2018VGG\u2019, ImageNet pretrained VGG-Net [49] from the botton conv1 to the conv3-4 layer were used with L2 normalization [50]. dataset [10]. To validate the components of our method, we additionally examined the performance contributions of the continuous optimization (wo/Cont.) and the coarse-to\ufb01ne scheme (wo/C2F). Furthermore the performance of our DCTM method when combined with other dense descriptors2 was examined using the DAISY [54] and VGG [49]. 4.2. 
Results Taniai Benchmark [53] We \ufb01rst evaluated our DCTM method on the Taniai benchmark [53], which consists of 400 image pairs divided into three groups: FG3DCar [31], JODS [44], and PASCAL [17]. As in [53, 24], \ufb02ow accuracy was measured by computing the proportion of foreground pixels with an absolute \ufb02ow endpoint error that is smaller than a certain threshold T, after resizing images so that its larger dimension is 100 pixels. Table 1 summarizes the matching accuracy for state-ofthe-art correspondence techniques (T = 5 pixels). Fig. 5 displays qualitative results for dense \ufb02ow estimation. Fig. 6 plots the \ufb02ow accuracy with respect to error threshold. Compared to methods based on handcrafted features [41, 59, 21], CNN based methods [53, 24] provide higher accuracy even though they do not consider geometric variations. The method of Lin et al. [29] cannot estimate reliable correspondences due to unstable sparse correspondences. Thanks to its discrete labeling optimization with continuous regularization and af\ufb01ne-FCSS, our DCTM method provides state-of-the-art performance. Proposal Flow Benchmark [16] We also evaluated our FCSS descriptor on the Proposal Flow benchmark [16], which includes 10 object sub-classes with 10 keypoint annotations for each image. For the evaluation metric, we used the probability of correct keypoint (PCK) between \ufb02owwarped keypoints and the ground truth [34, 16]. The warped keypoints are deemed to be correctly predicted if they lie within \u03b1 \u00b7 max(H, W) pixels of the ground-truth keypoints for \u03b1 \u2208[0, 1], where H and W are the height and width of the object bounding box, respectively. The PCK values 2These experiments use only the upright version of the descriptors. \f(a) (b) (c) (d) (e) (f) (g) (h) Figure 7. Qualitative results on the Proposal Flow benchmark [16]: (a) source image, (b) target image, (c) SSF [41], (d) DSP [23], (e) GDSP [21], (f) PF [16], (g) SF w/FCSS [24], and (h) DCTM. The source images were warped to the target images using correspondences. (a) (b) (c) (d) (e) (f) (g) (h) (i) Figure 8. Visualizations of dense \ufb02ow \ufb01eld with color-coded part segments on the PASCAL-VOC part dataset [10]: (a) source image, (b) target image, (c) source mask, (d) DFF [59], (e) GDSP [21], (f) Zhou et al. [61], (g) SF w/FCSS [24], (h) DCTM, and (i) target mask. Methods PCK \u03b1 = 0.05 \u03b1 = 0.1 \u03b1 = 0.15 SIFT Flow [33] 0.247 0.380 0.504 DSP [23] 0.239 0.364 0.493 Zhou et al. [61] 0.197 0.524 0.664 SF w/FCSS [24] 0.354 0.532 0.681 SSF [41] 0.292 0.401 0.531 Lin et al. [29] 0.192 0.354 0.487 DFF [59] 0.241 0.362 0.510 GDSP [21] 0.242 0.487 0.512 Proposal Flow [16] 0.284 0.568 0.682 DCTM 0.381 0.610 0.721 Table 2. Matching accuracy compared to state-of-the-art correspondence techniques on the Proposal Flow benchmark [16]. were measured for different correspondence techniques in Table 2. Fig. 7 shows qualitative results for dense \ufb02ow estimation. Our DCTM method exhibits performance competitive to the state-of-the-art correspondence techniques. PASCAL-VOC Parts Dataset [10] Lastly, we evaluated our DCTM method on the dataset provided by [62], where the images are sampled from the PASCAL parts dataset [10]. With human-annotated part segments, we measured part matching accuracy using the weighted intersection over union (IoU) score between transferred segments and ground truths, with weights determined by the pixel area of each part. 
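For reference, below is a minimal sketch (not the authors' code) of the probability-of-correct-keypoint (PCK) measure described above, where a flow-warped keypoint counts as correct if it lies within alpha * max(H, W) of its ground-truth location, with H and W the object bounding-box size. The function name `pck` and the array layout are assumptions.

```python
import numpy as np

def pck(warped_kps, gt_kps, bbox_hw, alpha=0.1):
    """Probability of correct keypoint (PCK).

    warped_kps, gt_kps: (N, 2) arrays of flow-warped and ground-truth keypoints.
    bbox_hw: (H, W) of the object bounding box.
    A keypoint is correct if it lies within alpha * max(H, W) of the ground truth.
    """
    tol = alpha * max(bbox_hw)
    dist = np.linalg.norm(warped_kps - gt_kps, axis=1)
    return float((dist <= tol).mean())
```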
To evaluate alignment accuracy, we measured the PCK metric using keypoint annotations for the 12 rigid PASCAL classes [57]. Table 3 summarizes the matching accuracy compared to state-of-the-art correspondence methods. Fig. 8 visualizes estimated dense \ufb02ow with color-coded part segMethods IoU PCK \u03b1 = 0.05 \u03b1 = 0.1 Zhou et al. [61] 0.24 UCN [11] 0.26 0.44 SF w/ FCSS [33] 0.44 0.28 0.47 DFF [59] 0.36 0.14 0.31 GDSP [21] 0.40 0.16 0.34 Proposal Flow [16] 0.41 0.17 0.36 DCTM 0.48 0.32 0.50 Table 3. Matching accuracy on the PASCAL-VOC dataset [10]. ments. From the results, our DCTM method is found to yield the highest matching accuracy. Computation Speed For all the test cases, our DCTM method converges with 3-5 iterations on each image pyramid level. For 320 \u00d7 240 images, the average runtime of DCTM is 15-20 seconds, compared to 216 seconds for GDSP [21], 73 seconds for DFF [59], 276 seconds for Lin et al. [29], and 321 seconds for Taniai et al. [53]. 5." + }, + { + "url": "http://arxiv.org/abs/1702.00926v1", + "title": "FCSS: Fully Convolutional Self-Similarity for Dense Semantic Correspondence", + "abstract": "We present a descriptor, called fully convolutional self-similarity (FCSS),\nfor dense semantic correspondence. To robustly match points among different\ninstances within the same object class, we formulate FCSS using local\nself-similarity (LSS) within a fully convolutional network. In contrast to\nexisting CNN-based descriptors, FCSS is inherently insensitive to intra-class\nappearance variations because of its LSS-based structure, while maintaining the\nprecise localization ability of deep neural networks. The sampling patterns of\nlocal structure and the self-similarity measure are jointly learned within the\nproposed network in an end-to-end and multi-scale manner. As training data for\nsemantic correspondence is rather limited, we propose to leverage object\ncandidate priors provided in existing image datasets and also correspondence\nconsistency between object pairs to enable weakly-supervised learning.\nExperiments demonstrate that FCSS outperforms conventional handcrafted\ndescriptors and CNN-based descriptors on various benchmarks.", + "authors": "Seungryong Kim, Dongbo Min, Bumsub Ham, Sangryul Jeon, Stephen Lin, Kwanghoon Sohn", + "published": "2017-02-03", + "updated": "2017-02-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Establishing dense correspondences across semantically similar images is essential for numerous tasks such as scene recognition, image registration, semantic segmentation, and image editing [17, 31, 24, 49, 54]. Unlike traditional dense correspondence approaches for estimating depth [39] or optical \ufb02ow [3, 44], in which visually similar images of the same scene are used as inputs, semantic correspondence estimation poses additional challenges due to intra-class variations among object instances, as exempli\ufb01ed in Fig. 1. Often, basic visual properties such as colors and gradients are not shared among different object instances in the same class. These variations, in addition to other complications from occlusion and background clutter, lead to signi\ufb01cant differences in appearance that can distract matching by handcrafted feature descriptors [34, 46]. 
Although powerful optimization techniques can help by enforcing smoothness constraints over a correspondence map [31, 24, 54, 45, 18], Image 1 (a) Source image Image 2 (b) Target image Image 1 (c) Window Image 1 (d) Window (e) FCSS in (c) (f) FCSS in (d) Figure 1. Visualization of local self-similarity. Even though there are signi\ufb01cant differences in appearance among different instances within the same object class in (a) and (b), local self-similarity in our FCSS descriptor is preserved between them as shown in (e) and (f), thus providing robustness to intra-class variations. they are limited in effectiveness without a proper matching descriptor for semantic correspondence estimation. Over the past few years, convolutional neural network (CNN) based features have become increasingly popular for correspondence estimation thanks to their localization precision of matched points and their invariance to minor geometric deformations and illumination changes [19, 51, 41, 50]. However, for computing semantic correspondences within this framework, greater invariance is needed to deal with the more substantial appearance differences. This could potentially be achieved with a deeper convolutional network [42], but would come at the cost of signi\ufb01cantly reduced localization precision in matching details as shown in [32, 21]. Furthermore, as training data for semantic correspondence is rather limited, a network cannot be trained properly in a supervised manner. To address these issues, we introduce a CNN-based descriptor that is inherently insensitive to intra-class appearance variations while maintaining precise localization ability. The key insight, illustrated in Fig. 1, is that among different object instances in the same class, their local structural layouts remain roughly the same. Even with dissimi1 arXiv:1702.00926v1 [cs.CV] 3 Feb 2017 \flar colors, gradients, and small differences in feature positions, the local self-similarity (LSS) between sampled patch pairs is basically preserved. This property has been utilized for non-rigid object detection [40], sketch retrieval [5], and cross-modal correspondence [25]. However, existing LSSbased techniques are mainly handcrafted and need further robustness to capture reliable matching evidence from semantically similar images. Our proposed descriptor, called fully convolutional selfsimilarity (FCSS), formulates LSS within a fully convolutional network in manner where the patch sampling patterns and self-similarity measure are both learned. We propose a convolutional self-similarity (CSS) layer that encodes the LSS structure and possesses differentiability, allowing for end-to-end training together with the sampling patterns. The convolutional self-similarities are measured at multiple scales, using skip layers [32] to forward intermediate convolutional activations. Furthermore, since limited training data is available for semantic correspondence, we propose a weakly-supervised feature learning scheme that leverages correspondence consistency between object locations provided in existing image datasets. Experimental results show that the FCSS descriptor outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks, including that of Taniai et al. [45], Proposal Flow [18], the PASCAL dataset [6], and Caltech-101 [13]. 2. 
Related Work Feature Descriptors Conventional gradient-based and intensity comparison-based descriptors, such as SIFT [34], HOG [8], DAISY [46], and BRIEF [4], have shown limited performance in dense correspondence estimation across semantically similar but different object instances. Over the past few years, besides these handcrafted features, several attempts have been made using deep CNNs to learn discriminative descriptors for local patches from large-scale datasets. Some of these techniques have extracted immediate activations as the descriptor [16, 14, 9, 33], which have shown to be effective for patch-level matching. Other methods have directly learned a similarity measure for comparing patches using a convolutional similarity network [19, 51, 41, 50]. Even though CNN-based descriptors encode a discriminative structure with a deep architecture, they have inherent limitations in handling large intra-class variations [41, 10]. Furthermore, they are mostly tailored to estimate sparse correspondences, and cannot in practice provide dense descriptors due to their high computational complexity. Of particular importance, current research on semantic correspondence lacks an appropriate benchmark with dense ground-truth correspondences, making supervised learning of CNNs less feasible for this task. LSS techniques, originally proposed in [40], have achieved impressive results in object detection, image retrieval by sketching [40], deformable shape class retrieval [5], and cross-modal correspondence estimation [47, 25]. Among the more recent cross-modal descriptors is the dense adaptive self-correlation (DASC) descriptor [25], which provides satisfactory performance but is unable to handle non-rigid deformations due to its \ufb01xed patch pooling scheme. The deep self-correlation (DSC) descriptor [26] reformulates LSS in a deep non-CNN architecture. As all of these techniques utilize handcrafted descriptors, they lack the robustness that is possible with CNNs. Dense Semantic Correspondence Many techniques for dense semantic correspondence employ handcrafted features such as SIFT [34] or HOG [8]. To improve the matching quality, they focus on optimization. Among these methods are some based on SIFT Flow [31, 24], which uses hierarchical dual-layer belief propagation (BP). Other instances include the methods with an exemplar-LDA approach [2], through joint image set alignment [54], or together with cosegmentation [45]. More recently, more powerful CNN-based descriptors have been used for establishing dense semantic correspondences. Pre-trained ConvNet features [27] were employed with the SIFT Flow algorithm [33] and with semantic \ufb02ow using object proposals [18]. Choy et al. [7] proposed a deep convolutional descriptor based on fully convolutional feature learning and a convolutional spatial transformer [23]. As these methods formulate the networks by combining existing convolutional networks only, they face a tradeoff between appearance invariance and localization precision that presents inherent limitations on semantic correspondence. Weakly-Supervised Feature Learning For the purpose of object recognition, Dosovitskiy et al. [11] trained the network to discriminate between a set of surrogate classes formed by applying various transformations. For object matching, Lin et al. [28] proposed an unsupervised learning to learn a compact binary descriptor by leveraging an iterative training scheme. More closely related to our work is the method of Zhou et al. 
[53], which exploits cycleconsistency with a 3D CAD model [35] as a supervisory signal to train a deep network for semantic correspondence. However, the need to have a suitable 3D CAD model for each object class limits its applicability. 3. The FCSS Descriptor 3.1. Problem Formulation and Overview Let us de\ufb01ne an image I such that Ii : I \u2192R3 for pixel i = [ix, iy]T . For each image point Ii, a dense descriptor Di : I \u2192RL of dimension L is de\ufb01ned on a local support window. For LSS, this descriptor represents locally self-similar structure around a given pixel by recording the similarity between certain patch pairs within a local window. Formally, LSS can be described as a vector of feature 2 \fvalues Di = S lDi(l) for l \u2208{1, ..., L}, where the feature values are computed as Di(l) = maxj\u2208Ni exp (\u2212S (Pj\u2212sl, Pj\u2212tl) /\u03bb) , (1) where S(Pi\u2212sl, Pi\u2212tl) is a self-similarity distance between two patches Pi\u2212sl and Pi\u2212tl sampled on sl and tl, the lth selected sampling pattern, around center pixel i. To alleviate the effects of outliers, the self-similarity responses are encoded by non-linear mapping with an exponential function of a bandwidth \u03bb [1]. For spatial invariance to the position of the sampling pattern, the maximum self-similarity within a spatial window Ni is computed. By leveraging CNNs, our objective is to design a dense descriptor that formulates LSS in a fully convolutional and end-to-end manner for robust estimation of dense semantic correspondences. Our network is built as a multi-scale series of convolutional self-similarity (CSS) layers that each includes a two-stream shifting transformer for applying a sampling pattern. To learn the network, including its selfsimilarity measures and sampling patterns, in a weaklysupervised manner, our network utilizes correspondence consistency between pairs of input images as well as object locations provided in existing datasets. 3.2. CSS: Convolutional Self-Similarity Layer We \ufb01rst describe the convolutional self-similarity (CSS) layer, which provides robustness to intra-class variations while preserving localization precision of matched points around \ufb01ne-grained object boundaries. Convolutional Similarity Network Previous LSS-based techniques [40, 25, 26] evaluate (1) by sampling patch pairs and then computing their similarity using handcrafted metrics, which often fails to yield detailed matching evidence for estimating semantic correspondences. Instead, we compute the similarity of sampled patch pairs through CNNs. With l omitted for simplicity, the self-similarity between a patch pair Pi\u2212s and Pi\u2212t is formulated through a Siamese network, followed by decision or metric network [51, 19] or a simple L2 distance [41, 50] as shown in Fig. 2(a). Specifically, convolutional activations through feed-forward processes F(Pi\u2212s; Wc) and F(Pi\u2212t; Wc) with CNN parameters Wc are used to measure self-similarity based on the L2 distance, such that S(Pi\u2212s, Pi\u2212t) = \u2225F(Pi\u2212s; Wc) \u2212F(Pi\u2212t; Wc)\u22252. (2) Note that our approach employs the Siamese network to measure self-similarity within a single image, in contrast to recent CNN-based descriptors [41] that directly measure the similarity between patches from two different images. However, computing S(Pi\u2212s, Pi\u2212t) for all sampling patterns (s, t) in this network is time-consuming, since the L\u20102\u00a0Norm. Local\u00a0Support \u2010Window Shifting\u00a0 Transform. 
number of iterations through the Siamese network is linearly proportional to the number of sampling patterns.

Figure 2. Convolutional self-similarity (CSS) layers, implemented as (a) straightforward and (b) efficient versions. With the efficient scheme, convolutional self-similarity is equivalently solved while avoiding repeated computations for convolutions.

To expedite this computation, we instead generate the convolutional activations of an entire image by passing it through the CNN, similar to [22], and then measure the self-similarity for the sampling patterns directly on the convolutional activations $A_i = F(I_i; \mathbf{W}_c)$, as shown in Fig. 2(b). Formally, this can be written as

$S(P_{i-s}, P_{i-t}) = \|A_{i-s} - A_{i-t}\|_2. \quad (3)$

With this scheme, the self-similarity is measured by running the similarity network only once, regardless of the number of sampling patterns. Interestingly, a similar computational scheme was used to measure the similarity between two different images in [52], whereas our scheme instead measures self-similarity within a single image.

Two-Stream Shifting Transformer The sampling patterns $(s, t)$ of patch pairs are a critical element of local self-similarity. In our CSS layer, a sampling pattern for a pixel $i$ can be generated by shifting the original activation $A_i$ by $s$ and $t$ to form two different activations from which self-similarity is measured. While this spatial manipulation of data within the network could be learned and applied using a spatial transformer layer [23], we instead formulate a simplification of this, called a shifting transformer layer, in which the shift transformations $s$ and $t$ are defined as network parameters that can be learned because of the differentiability of the shifting transformer layer. In this way, the optimized sampling patterns can be learned in the CNN. Concretely, the sampling patterns are defined as network parameters $\mathbf{W}_s = [W_{sx}, W_{sy}]^T$ and $\mathbf{W}_t = [W_{tx}, W_{ty}]^T$ for all $(s, t)$. Since the shifted sampling is repeated in an (integer) image domain, the convolutional self-similarity activation $A_i$ is shifted simply without interpolation in the image domain according to the sampling patterns. We first define the sampled activations through a two-stream shifting transformer as

$A_{i-\mathbf{W}_s} = F(A_i; \mathbf{W}_s), \quad A_{i-\mathbf{W}_t} = F(A_i; \mathbf{W}_t). \quad (4)$

From this, convolutional self-similarity is then defined as

$S(P_{i-\mathbf{W}_s}, P_{i-\mathbf{W}_t}) = \|F(A_i; \mathbf{W}_s) - F(A_i; \mathbf{W}_t)\|_2. \quad (5)$

Note that $S(P_{i-\mathbf{W}_s}, P_{i-\mathbf{W}_t})$ represents a convolutional self-similarity vector defined for all $(s, t)$.

Differentiability of Convolutional Self-Similarity For end-to-end learning of the proposed descriptor, the derivatives for the CSS layer must be computable, so that gradients of the final loss can be back-propagated to the convolutional similarity and shifting transformer layers.
To obtain the derivatives for the convolutional similarity layer and the shifting transformer layers, we \ufb01rst compute the Taylor expansion of the shifting transformer activations, under the assumption that Ai is smoothly varying with respect to shifting parameters Ws: Ai\u2212Wn s = Ai\u2212Wn\u22121 s +(Wn s \u2212Wn\u22121 s )\u25e6\u25bdAi\u2212Wn\u22121 s , (6) where Wn\u22121 s represents the sampling patterns at the (n \u2212 1)th iteration during training, and \u25e6denotes the Hadamard product. \u25bdAi\u2212Wn\u22121 s is a spatial derivative on each activation slice with respect to \u25bdx and \u25bdy. By differentiating (6) with respect to Wn sx, we get the shifting parameter derivatives as \u2202Ai\u2212Wn s \u2202Wn sx = \u25bdxAi\u2212Wn\u22121 s . (7) By the chain rule, with n omitted, the derivative of the \ufb01nal loss L with respect to Wsx can be expressed as \u2202L \u2202Wsx = \u2202L \u2202Ai\u2212Ws \u2202Ai\u2212Ws \u2202Wsx . (8) Similarly, \u2202L/\u2202Wsy, \u2202L/\u2202Wtx, and \u2202L/\u2202Wty can be calculated. Moreover, the derivative of the \ufb01nal loss L with respect to Ai can be formulated as \u2202L \u2202Ai = \u2202L \u2202Ai\u2212Ws \u2202Ai\u2212Ws \u2202Ai + \u2202L \u2202Ai\u2212Wt \u2202Ai\u2212Wt \u2202Ai = \u2202L \u2202Ai\u2212Ws + \u2202L \u2202Ai\u2212Wt , (9) since \u2202Ai\u2212Ws/\u2202Ai is 1 on the pixel i \u2212Ws. In this way, the derivatives for the CSS layer can be computed. CSS\u00a0Layer CSS\u00a0Layer CSS\u00a0 Layer Image Convolutional\u00a0Activations FCSS\u00a0Descriptor Conv. Conv. Conv. Up. /Nonlin. /Max\u2010Pool. Up.\u00a0/\u00a0Nonlin./Max\u2010Pool. Skip Skip \u2026 \u2026 Up.\u00a0/\u00a0Nonlin./Max\u2010Pool. Figure 3. Network con\ufb01guration of the FCSS descriptor, consisting of convolutional self-similarity layers at multiple scales. 3.3. Network Con\ufb01guration for Dense Descriptor Multi-Scale Convolutional Self-Similarity Layer In building the descriptor through a CNN architecture, there is a trade-off between robustness to semantic variations and \ufb01ne-grained localization precision [32, 21]. The deeper convolutional layers gain greater robustness to semantic variations, but also lose localization precision of matching details around object boundaries. On the contrary, the shallower convolutional layers better preserve matching details, but are more sensitive to intra-class appearance variations. Inspired by the skip layer scheme in [32], we formulate the CSS layers in a hierarchical manner to encode multiscale self-similarities as shown in Fig. 3. Even though the CSS layer itself provides robustness to semantic variations and \ufb01ne-grained localization precision, this scheme enables the descriptor to boost both robustness and localization precision. The CSS layers are located after multiple intermediate activations, and their outputs are concatenated to construct the proposed descriptor. In this way, the descriptor naturally encodes self-similarity at multiple scales of receptive \ufb01elds, and further learns optimized sampling patterns on each scale. Many existing descriptors [21, 51] also employ a multi-scale description to improve matching quality. 
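Returning to the differentiability argument earlier in this subsection, here is a small numerical sketch of the gradient approximation it describes: the derivative of a shifted activation with respect to the shift parameter is approximated by the spatial derivative of the shifted activation, following the Taylor expansion in the text. The function names, the integer rounding of the shift, and the central-difference gradient are assumptions of this sketch.

```python
import numpy as np

def shifted(acts, wx, wy):
    """A_{i - W_s}: shift the activation map by integer offsets (wx, wy)."""
    return np.roll(acts, (int(round(wy)), int(round(wx))), axis=(0, 1))

def d_shift_d_wx(acts, wx, wy):
    """Approximate dA_{i - W_s} / dW_sx by the horizontal spatial derivative of the
    shifted activation, as in the Taylor-expansion argument above."""
    a = shifted(acts, wx, wy)
    return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / 2.0  # central difference in x
```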
For intermediate activations Ak i = F(Ii; Wk c ), where k \u2208{1, ..., K} is the level of convolutional activations and Wk c is convolutional similarity network parameters at the kth level, the self-similarity at the the kth level is measured according to sampling patterns Wk s and Wk t as S(Pi\u2212Wk s , Pi\u2212Wk t ) = \u2225F(Ak i ; Wk s) \u2212F(Ak i ; Wk t )\u22252. (10) Since the intermediate activations are of smaller spatial resolutions than the original image resolution, we apply a bilinear upsampling layer [32] after each CSS layer. Non-linear Gating and Max-Pooling Layer The CSS responses are passed through a non-linear gating layer to mitigate the effects of outliers [1]. Furthermore, since the pre-learned sampling patterns used in the CSS layers are \ufb01xed over an entire image, they may be sensitive to nonrigid deformation as described in [26]. To address this, we perform the max-pooling operation within a spatial window 4 \fNi centered at a pixel i after the non-linear gating: Dk i = maxj\u2208Ni exp(\u2212S(Pj\u2212Wk s , Pj\u2212Wk t )/Wk \u03bb), (11) where Wk \u03bb is a learnable parameter for scale k. The maxpooling layer provides an effect similar to using pixelvarying sampling patterns, providing robustness to nonrigid deformation. The descriptor for each pixel then undergoes L2 normalization. Finally, the proposed descriptor Di = S kDk i is built by concatenating feature responses across all scales. Fig. 3 displays an overview of the FCSS descriptor construction. 3.4. Weakly-Supervised Dense Feature Learning A major challenge of semantic correspondence estimation with CNNs is the lack of ground-truth correspondence maps for training data. Constructing training data without manual annotation is dif\ufb01cult due to the need for semantic understanding. Moreover, manual annotation is very labor intensive and somewhat subjective. To deal with this problem, we propose a weakly-supervised learning scheme based on correspondence consistency between image pairs. Fully Convolutional Feature Learning For training the network with image pairs I and I\u2032, the correspondence contrastive loss [7] is de\ufb01ned as L(W) = 1 2N X i\u2208\u2126li\u2225F(Ii; W) \u2212F(I\u2032 i\u2032; W)\u22252 (1 \u2212li)max(0, C \u2212\u2225F(Ii; W) \u2212F(I\u2032 i\u2032; W)\u22252), (12) where i and i\u2032 are either a matching or non-matching pixel pair, and li denotes a class label that is 1 for a positive pair and 0 otherwise. \u2126represents the set of training samples, and N is the number of training samples. C is the maximal cost. The loss for a negative pair approaches zero as their distance increases. W = {Wk c , Wk s, Wk t , Wk \u03bb | k = 1, ..., K} represents all network parameters. By backpropagating the partial derivative of L(W), the overall network can be learned. Unlike existing CNN-based descriptor learning methods which use a set of patch pairs [41, 51, 19], we use a set of image pairs for training. Such an image-wise learning scheme expedites feature learning by reducing the computational redundancy that occurs when computing convolutional activations for two adjacent pixels in the image. Our approach is conceptually similar to [7], but we learn the descriptor in a weakly-supervised manner that leverages correspondence consistency between each image pair so that the positive and negative samples are actively determined during training. 
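For clarity, here is a minimal sketch of a correspondence contrastive loss of the form described above: matching pairs are pulled together while non-matching pairs are pushed beyond the margin C. The exact weighting and sampling of the authors' implementation are not reproduced, and the squared form of both terms is an assumption of this sketch.

```python
import numpy as np

def correspondence_contrastive_loss(feat_a, feat_b, labels, margin=0.2):
    """Sketch of a correspondence contrastive loss in the spirit of Eq. (12).

    feat_a, feat_b: (N, D) descriptors at sampled pixel pairs of the two images.
    labels: (N,) with 1 for positive (consistent) pairs, 0 for negatives.
    Positives are pulled together; negatives are pushed beyond the margin C.
    """
    d = np.linalg.norm(feat_a - feat_b, axis=1)
    pos = labels * d ** 2
    neg = (1 - labels) * np.maximum(0.0, margin - d) ** 2
    return float((pos + neg).sum() / (2 * len(labels)))
```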
Correspondence Consistency Check Intuitively, the correspondence relation from a source image to a target image should be consistent with that from the target image to Image FCSS\u00a0Descriptor Loss FCSS\u00a0Network FCSS\u00a0Network Positive Negative Positive\u00a0Samples Negative\u00a0Samples Figure 4. Weakly-supervised learning of the FCSS descriptor using correspondence consistency between object locations. the source image. After forward-propagation with the training image pairs to obtain F(I; W) and F(I\u2032; W), the best match i\u2217for each pixel i is computed by comparing feature descriptors from the two images through k-nearest neighbor (k-NN) search [15]: i\u2217= argmini\u2032 \u2225F(Ii; W) \u2212F(I\u2032 i\u2032; W)\u22252. (13) After running k-NN twice for the source and target images respectively, we check the correspondence consistency and identify the pixel pairs with valid matches as positive samples. Invalid matches are also used to generate negative samples. We randomly select the positive and negative samples during training. Since the negative samples ensue from erroneous local minima in the energy cost, they provide the effects of hard negative mining during training [41]. The feature learning begins by initializing the shifting transform with randomly selected sampling patterns. We found that even initial descriptors generated from the random patterns provide enough positive and negative samples to be used for weakly-supervised feature learning. A similar observation was also reported in [25]. To boost this feature learning, we limit the correspondence candidate regions according to object location priors such as an object bounding box containing the target object to be matched, which are provided in most benchmarks [13, 12, 6]. Similar to [54, 53, 18], it is assumed that true matches exist only within the object region as shown in Fig. 4. Utilizing this prior mitigates the side effects that may occur due to background clutter when directly running the k-NN, and also expedites the feature learning process. 4. Experimental Results and Discussion 4.1. Experimental Settings For our experiments, we implemented the FCSS descriptor using the VLFeat MatConvNet toolbox [36]. For convolutional similarity networks in the CSS layers, we used the ImageNet pretrained VGG-Net [42] from the bottom conv1 to the conv3-4 layer, with their network parameters as initial values. Three CSS layers are located after conv2-2, conv32, and conv3-4, thus K = 3. Considering the trade-off 5 \fMethods FD3D. JODS PASC. Avg. SIFT [31] 0.632 0.509 0.360 0.500 DAISY [46] 0.636 0.373 0.338 0.449 LSS [40] 0.644 0.349 0.359 0.451 DASC [25] 0.668 0.454 0.261 0.461 DeepD. [41] 0.684 0.315 0.278 0.426 DeepC. [51] 0.753 0.405 0.335 0.498 MatchN. [19] 0.561 0.380 0.270 0.404 LIFT [50] 0.730 0.318 0.306 0.451 VGG [42] 0.756 0.490 0.360 0.535 VGG w/S-CSS\u2020 0.762 0.521 0.371 0.551 VGG w/S-CSS 0.775 0.552 0.391 0.573 VGG w/M-CSS 0.806 0.573 0.451 0.610 FCSS 0.830 0.656 0.494 0.660 Table 1. Matching accuracy for various feature descriptors with \ufb01xed SF optimization on the Taniai benchmark [45]. VGG w/SCSS\u2020 denotes results with randomly selected sampling patterns. Methods FG3D. JODS PASC. Avg. DFF [49] 0.495 0.304 0.224 0.341 DSP [24] 0.487 0.465 0.382 0.445 SIFT Flow [31] 0.632 0.509 0.360 0.500 Zhou et al. [53] 0.721 0.514 0.436 0.556 Taniai et al. 
[45] 0.830 0.595 0.483 0.636 Proposal Flow [18] 0.786 0.653 0.531 0.657 FCSS w/DSP [24] 0.527 0.580 0.439 0.515 FCSS w/SF [31] 0.830 0.656 0.494 0.660 FCSS w/PF [18] 0.839 0.635 0.582 0.685 Table 2. Matching accuracy compared to state-of-the-art correspondence techniques on the Taniai benchmark [45]. between ef\ufb01ciency and robustness, the number of sampling patterns is set to 64, thus the total dimension of the descriptor is L = 192. Before each CSS layer, convolutional activations are normalized to have a L2 norm [43]. To learn the network, we employed the Caltech-101 dataset [13] excluding testing image pairs used in experiments. The number of trainig samples N is 1024. C is set to 0.2. The learned parameters are used for all the experiments. Our code with pretrained parameters will be made publicly available. In the following, we comprehensively evaluated our descriptor through comparisons to state-of-the-art handcrafted descriptors, including SIFT [34], DAISY [46], HOG [8], LSS [40], and DASC [25], as well as recent CNNs-based feature descriptors, including MatchNet (MatchN.) [19], Deep Descriptor (DeepD.) [41], Deep Compare (DeepC.) [51], UCN [7], and LIFT [50]1. The performance was measured on Taniai benchmark [45], Proposal Flow dataset [18], PASCAL-VOC dataset [6], and Caltech-101 benchmark [13]. To additionally validate the components of the 1Since MatchN. [19], DeepC. [51], DeepD. [41], and LIFT [50] were developed for sparse correspondence, sparse descriptors were \ufb01rst built by forward-propagating images through networks and then upsampled. Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 SIFT DAISY LSS DASC DeepD. DeepC. MatchN. LIFT VGG VGGw/S-CSS VGGw/M-CSS FCSS (a) FG3DCar Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 SIFT DAISY LSS DASC DeepD. DeepC. MatchN. LIFT VGG VGGw/S-CSS VGGw/M-CSS FCSS (b) JODS Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 SIFT DAISY LSS DASC DeepD. DeepC. MatchN. LIFT VGG VGGw/S-CSS VGGw/M-CSS FCSS (c) PASCAL Error threshold (pixels) 5 10 15 Flow accuracy 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 SIFT DAISY LSS DASC DeepD. DeepC. MatchN. LIFT VGG VGGw/S-CSS VGGw/M-CSS FCSS (d) Average Figure 5. Average \ufb02ow accuracy with respect to endpoint error threshold on the Taniai benchmark [45]. FCSS descriptor, we evaluated the initial VGG-Net (conv34) [42] (VGG), the VGG-Net with learned single-scale CSS layer (VGG w/S-CSS) and learned multi-scale CSS layers (VGG w/M-CSS)2. As an optimizer for estimating dense correspondence maps, we used the hierarchical dual-layer BP of the SIFT Flow (SF) optimization [31], whose code is publicly available. Furthermore, the performance of the FCSS descriptor when combined with other powerful optimizers was examined using the Proposal Flow (PF) [18] and the deformable spatial pyramid (DSP) [24]. 4.2. Results Taniai Benchmark [45] We \ufb01rst evaluated our FCSS descriptor on the Taniai benchmark [45], which consists of 400 image pairs divided into three groups: FG3DCar [29], JODS [37], and PASCAL [20]. As in [45], \ufb02ow accuracy was measured by computing the proportion of foreground 2In the \u2018VGG w/S-CSS\u2019 and \u2018VGG w/M-CSS\u2019, the sampling patterns were only learned with VGG-Net layers \ufb01xed. 6 \f(a) (b) (c) (d) (e) (f) (g) (h) Figure 6. Qualitative results on the Taniai benchmark [45]: (a) source image, (b) target image, (c) SIFT [34], (d) DASC [25], (e) DeepD. [41], (f) MatchN. 
[19], (g) VGG [42], and (h) FCSS. The source images were warped to the target images using correspondences. (a) (b) (c) (d) (e) (f) (g) (h) Figure 7. Qualitative results on the Proposal Flow benchmark [18]: (a) source image, (b) target image, (c) DAISY [46], (d) DeepD. [41], (e) DeepC. [51], (f) LIFT [50], (g) VGG [42], and (h) FCSS. The source images were warped to the target images using correspondences. Methods PCK \u03b1 = 0.05 \u03b1 = 0.1 \u03b1 = 0.15 SIFT [31] 0.247 0.380 0.504 DAISY [46] 0.324 0.456 0.555 LSS [40] 0.347 0.504 0.626 DASC [25] 0.255 0.411 0.564 DeepD. [41] 0.187 0.308 0.430 DeepC. [51] 0.212 0.364 0.518 MatchN. [19] 0.205 0.338 0.476 LIFT [50] 0.197 0.322 0.449 LIFT\u2020 [50] 0.224 0.346 0.489 VGG [42] 0.224 0.388 0.555 VGG w/S-CSS 0.239 0.422 0.595 VGG w/M-CSS 0.344 0.514 0.676 FCSS 0.354 0.532 0.681 Table 3. Matching accuracy for various feature descriptors with SF optimization on the Proposal Flow benchmark [18]. LIFT\u2020 denotes results of LIFT [50] with densely sampled windows. pixels with an absolute \ufb02ow endpoint error that is smaller than a certain threshold T, after resizing images so that its larger dimension is 100 pixels. Table 1 summarizes the matching accuracy for various feature descriptors with the SF optimization \ufb01xed (T = 5 pixels). Interestingly, while both the CNN-based descriptors [41, 51, 19, 50] and the handcrafted descriptors [34, 40, 46, 25] tend to show similar performance, our method outperforms both of these approaches. Fig. 5 shows the \ufb02ow accuracy with varying error thresholds. Fig. 6 shows qualitative results. More results are available in the supplementary materials. Methods PCK \u03b1 = 0.05 \u03b1 = 0.1 \u03b1 = 0.15 DSP [24] 0.239 0.364 0.493 SIFT Flow [31] 0.247 0.380 0.504 Zhou et al. [53] 0.197 0.524 0.664 Proposal Flow [18] 0.284 0.568 0.682 FCSS w/DSP [24] 0.302 0.475 0.602 FCSS w/SF [31] 0.354 0.532 0.681 FCSS w/PF [18] 0.295 0.584 0.715 Table 4. Matching accuracy compared to state-of-the-art correspondence techniques on the Proposal Flow benchmark [18]. Table 2 compares the matching accuracy (T = 5 pixels) with other correspondence techniques. Taniai et al. [45] and Proposal Flow [18] provide plausible \ufb02ow \ufb01elds, but their methods have limitations due to their usage of handcrafted features. Thanks to its invariance to intra-class variations and precise localization ability, our FCSS achieves the best results both quantitatively and qualitatively. Proposal Flow Benchmark [18] We also evaluated our FCSS descriptor on the Proposal Flow benchmark [18], which includes 10 object sub-classes with 10 keypoint annotations for each image. For the evaluation metric, we used the probability of correct keypoint (PCK) between \ufb02owwarped keypoints and the ground truth [33, 18]. The warped keypoints are deemed to be correctly predicted if they lie within \u03b1 \u00b7 max(h, w) pixels of the ground-truth keypoints for \u03b1 \u2208[0, 1], where h and w are the height and width of the object bounding box, respectively. 7 \f(a) (b) (c) (d) (e) (f) (g) (h) (i) Figure 8. Visualizations of dense \ufb02ow \ufb01eld with color-coded part segments on the PASCAL-VOC part dataset [6]: (a) source image, (b) target image, (c) source mask, (d) LSS [38], (e) DeepD. [41], (f) DeepC. [51], (g) LIFT [50], (h) FCSS, and (i) target mask. (a) (b) (c) (d) (e) (f) (g) (h) (i) Figure 9. 
Visualizations of dense \ufb02ow \ufb01elds with mask transfer on the Caltech-101 dataset [13]: (a) source image, (b) target image, (c) source mask, (d) SIFT [34], (e) DASC [25], (f) MatchN. [19], (g) LIFT [50], (h) FCSS, and (i) target mask. Methods IoU PCK \u03b1 = 0.05 \u03b1 = 0.1 FlowWeb [24] 0.43 0.26 Zhou et al. [53] 0.24 Proposal Flow [18] 0.41 0.17 0.36 UCN [7] 0.26 0.44 FCSS w/SF [31] 0.44 0.28 0.47 FCSS w/PF [18] 0.46 0.29 0.46 Table 5. Matching accuracy on the PASCAL-VOC part dataset [6]. The PCK values were measured for various feature descriptors with SF optimization \ufb01xed in Table 3, and for different correspondence techniques in Table 4. Fig. 7 shows qualitative results for dense \ufb02ow estimation. Our FCSS descriptor with SF optimization shows competitive performance compared to recent state-of-the-art correspondence methods. When combined with PF optimization instead, our method signi\ufb01cantly outperforms the existing state-ofthe-art descriptors and correspondence techniques. PASCAL-VOC Part Dataset [6] Our evaluations also include the dataset provided by [54], where the images are sampled from the PASCAL part dataset [6]. With humanannotated part segments, we measured part matching accuracy using the weighted intersection over union (IoU) score between transferred segments and ground truths, with weights determined by the pixel area of each part. To evaluate alignment accuracy, we measured the PCK metric using keypoint annotations for the 12 rigid PASCAL classes [48]. Table 5 summarizes the matching accuracy compared to state-of-the-art correspondence methods. Fig. 8 visualizes estimated dense \ufb02ow with color-coded part segments. From the results, our FCSS descriptor is found to yield the highest matching accuracy. Methods LT-ACC IoU LOC-ERR DSP [24] 0.77 0.47 0.35 SIFT Flow [31] 0.75 0.48 0.32 Proposal Flow [18] 0.78 0.50 0.25 VGG [42] w/SF [31] 0.78 0.51 0.25 FCSS w/SF [31] 0.80 0.50 0.21 FCSS w/PF [31] 0.83 0.52 0.22 Table 6. Matching accuracy on the Caltech-101 dataset [13]. Caltech-101 Dataset [13] Lastly, we evaluated our FCSS descriptor on the Caltech-101 dataset [13]. Following the experimental protocol in [24], we randomly selected 15 pairs of images for each object class, and evaluated matching accuracy with three metrics: label transfer accuracy (LTACC) [30], the IoU metric, and the localization error (LOCERR) of corresponding pixel positions. Table 6 summarizes the matching accuracy compared to state-of-the-art correspondence methods. Fig. 9 visualizes estimated dense \ufb02ow \ufb01elds with mask transfer. For the results, our FCSS descriptor clearly outperforms the comparison techniques. 5." + } + ], + "Junho Kim": [ + { + "url": "http://arxiv.org/abs/2403.13513v1", + "title": "What if...?: Counterfactual Inception to Mitigate Hallucination Effects in Large Multimodal Models", + "abstract": "This paper presents a way of enhancing the reliability of Large Multimodal\nModels (LMMs) in addressing hallucination effects, where models generate\nincorrect or unrelated responses. Without additional instruction tuning\nparadigm, we introduce Counterfactual Inception, a novel method that implants\ncounterfactual thoughts into LMMs using carefully chosen, misaligned\ncounterfactual keywords. This method is grounded in the concept of\ncounterfactual thinking, a cognitive process where humans consider alternative\nrealities and outcomes. 
By applying this human-like reasoning mechanism to\nLMMs, we aim to reduce hallucination effects and improve the models'\ntrustworthiness. We also propose Dual-modality Verification Process (DVP), a\nrigorous framework for selecting optimal counterfactual keywords to trigger\ncounterfactual thinking into LMMs, concurrently considering visual and\nlinguistic context. Our extensive experiments across various LMMs, including\nboth open-source and proprietary models, corroborate that our method\nsignificantly mitigates hallucination phenomena across different datasets.", + "authors": "Junho Kim, Yeon Ju Kim, Yong Man Ro", + "published": "2024-03-20", + "updated": "2024-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL" + ], + "main_content": "Introduction After witnessing the great success of Large Language Models (LLMs)-based products, such as OpenAI ChatGPT [40] and Google Gemini [15], the emergence of Large Multi-modal Models (LMMs) naturally followed as the next step towards a unified, general-purpose AI system [37, 65, 1]. In the vision research community, various works [26, 25, 69] have actively resorted LMMs into the vision models due to their remarkable capability of providing off-the-shelf or zero-shot performance. Especially when it comes to in-context learning [6, 1], prompt engineering [68, 7, 16], and chainof-thought [59, 22, 66, 8], vision models can exploit LLMs\u2019 capabilities into the more generalized multi-modal applications such as visual understanding and reasoning under user-assistant interactive systems [63, 18]. Albeit the recent breakthroughs of multi-modal instruction tuning approaches [12, 33] that unlock enhanced visual proficiency by aligning model responses with human-specific instructions, it is no longer secret that they still struggle with inevitable hallucination effects in their responses [30, 67]. The hallucination effects in LMMs involve false premises, where the models generate incorrect, nonsensical, or unrelated responses associated with the vision inputs. As exemplified in the middle of fig. 1, LMMs occasionally generate contents that are either factually incorrect or not grounded in the image they receive. These confusing responses not only fail to capture the true intentions of the users, but more importantly, they can lead to an over-reliance problem. This is especially concerning in scenarios with information asymmetry, where issues of trustworthiness and applicability are crucial such as in safety-critical areas [2, 58, 48]. To alleviate hallucination effects in LMMs, recent studies have been proposed in the context of curated instruction-tuning [30, 55], or cue-augmented \u2217Equal contribution. \u2020 Corresponding author. Preprint. Under review. arXiv:2403.13513v1 [cs.CV] 20 Mar 2024 \fThe Hallucination in LMMs & Concept of Counterfactual Thinking Describe this photo in detail. A man standing on the back of a yellow SUV, ironing clothes ... The SUV is driving down a street, surrounded by other cars... There are several other cars in the scene, including one on the left side of the SUV, another behind it, and a few more on the right side. A traffic light can be seen in the background, indicating that the SUV is driving through an intersection. A man standing on the back of a yellow van, ironing a blue shirt. The van is parked on the street, and the man appears to be multitasking by doing ironing while also standing on the vehicle. The scene is quite unusual and captures the attention of the viewer. 
Assistant Response: User Question Baselines: Hallucination Assistant Response: Ours: Counterfactual Thinking Woman Bus Sewing What if ...? Cooking Evening Figure 1: Overall concept of implanting counterfactual thoughts into LMMs. By considering what if...? scenarios using counterfactuals (e.g., woman, bus, cooking, etc.) that are imaginable from the visual input, LMMs can mitigate hallucination effects without additional instruction-tuning, as indicated by the green markings. methods [56, 61, 67, 10] using external solvers. However, they require additional computation for the training on tailored instruction or labor-intensive resources and studies to fine-tune the models [50, 64]. To step out such limitations and mitigate hallucinations in a training-free manner, we present a novel way of eliciting the reasoning capabilities of exceptionality inhered within off-the-shelf LMMs by utilizing counterfactual thinking. In our daily life, we ponder possibilities of what if...? scenarios at least once in awhile; these sorts of thoughts can be termed as counterfactual that is contrary to what actually happened [39, 13]. By focusing on how events might have unfolded differently if we had taken alternative actions, we can identify better self-understand of the perceived reality and exert more control on the current. In addition, these conscious thoughts can potentially give us a chance to make different choices in the future without making mistakes [45]. Motivated by such human tendency, we aim to delve into the following question: \"Can we elicit rational counterfactual thinking under what-if scenarios and mitigate hallucination effects in LMMs\u2019 responses?\". To address the question, our study begins with comprehensive investigation on the role of counterfactual thinking in LMMs\u2019 responses. We first hypothesize that deliberately deviated language keywords place additional information in the context, during inference. We demonstrate that this counterfactual approach is indeed helpful to mitigate hallucination effects, similar to human perception (see section 3). A concise overview of our concept is illustrated in fig. 1 (right). As in the figure, when presented with imaginative yet rational counterfactual keywords, LMMs show better alignment with factual scenarios for the given visual inputs and user instruction. This process enables LMMs to refine their hallucinatory responses by considering the differences between the factual and counterfactual scenarios in a contrastive manner. Based on our exploration for the role of counterfactual thinking, we propose Counterfactual Inception, a novel method of implanting counterfactual thoughts into LMMs using inconsistent counterfactual keywords, against given visual context. Our approach can enhance reliability of LMMs in their responses by exposing them towards exceptional reality within a contrastive environment. To consistently prompt LMMs to engage in counterfactual thinking, our strategy focuses on the competent selection of optimal counterfactual keywords for triggering the thought process. Accordingly, we present Dual-modality Verification Process (DVP), a solid framework to filter out the suboptimal keywords within rigorous vision-language context verification. Through extensive analyses on recent LMMs including open-source [31, 57, 4] and proprietary models [15, 42], we corroborate that Counterfactual Inception helps to alleviate hallucination phenomena in general across various dataset [33, 28, 50, 51]. 
Our approach demonstrates the potential of counterfactual thinking as not only a cognitive process but also as a practical approach for developing more trustworthy AI systems. Our contributions can be summarized into three folds: (i) we introduce Counterfactual Inception, a novel method that embeds counterfactual thinking into LMMs using deliberately deviated language keywords to mitigate hallucination effects, (ii) we present Dual-modality Verification Process (DVP), a rigorous framework designed to refine the selection of counterfactual keywords, ensuring the optimal trigger of counterfactual thoughts in LMMs, and (iii) Through extensive experiments and 2 \fanalyses on various LMMs, including both open-source and proprietary models, we provide empirical evidence that Counterfactual Inception effectively reduces the hallucination phenomena across different datasets. 2 Related Work Vision-Language Large Multimodal Models. The release of open-sourced LLMs [52, 11] has spurred active research towards more generalized domain integration, especially vision-language (VL) modalities (Large Multimodal Models \u2014 LMMs). By using the language models as linguistic channels, LMMs can integrate visual information into broader range of VL understanding tasks [17, 60, 38]. After the surge of VL learning [27, 26, 63] facilitated cross-modal alignment, recent prevalent approach for advanced LMMs is on to adopt visual instruction-tuning [12, 33] on various datasets. LLaVA series [33, 31, 32] has paved the way for building a general-purpose AI model capable of interacting freely and adapting to users\u2019 instructions. Along with such paradigm, a wide range of advanced architectures and adaptations to specific domains [29, 24] have actively emerged. Additionally, numerous proprietary LMMs are expanding their capabilities into vision tasks, as evidenced by the release of products such as Qwen-VL [4] from Alibaba Group, Gemini Vision [15] from Google, and GPT-4V [42] from OpenAI, which allow users to interact with the models for the analysis of image inputs. Hallucination Effects in Large Multimodal Models. Despite the remarkable advancements of LMMs, the major issue of hallucination still persists in their responses. Hallucination refers to the phenomenon where generated textual expressions are inconsistent with the accompanying image, one of the long-standing challenges in image captioning [46]. When it comes to LMM, this problem is exacerbated due to their use of the expressive capabilities of LLMs, which enable more detailed and rich descriptions [20]. As their representation becomes abundant, the complexity of hallucinations also increases, leading to a multifaceted issue. This includes challenges such as: (i) the scarcity of large-scale image-text instruction pairs [30], and (ii) the entropic gap between visual and textual data [50], which becomes evident during the process of aligning modalities. In the context of fine-tuning LMMs, recent works have explored various methods, including finetuning existing LMMs with robustly generated instructions [30, 55], implementing multi-step LMMaided reasoning [56, 61, 67, 10], and utilizing Reinforcement Learning from Human Feedback (RLHF) [50, 64] for providing human feedback instructions. We encourage readers to refer a recent hallucination survey compilation [34] for more up-to-date works. 
Additionally, several works have facilitated contrastive approaches not only in the context of LMMs [23, 19], but also in LLMs [21, 14], to self-refine models' responses. Our counterfactual-based method aligns with these contrastive approaches; however, we emphasize that it significantly mitigates hallucination effects without additional training or human-annotated instructions.

3 The Role of Counterfactual Thinking

The concept of counterfactual thinking in LMMs revolves around the generation of what-if scenarios, a critical aspect for enhancing the model's understanding and its ability to anticipate exceptional outcomes. We first delve into the rationale of counterfactual thinking and its effectiveness in alleviating hallucination, particularly in relation to visual understanding and response generation.

3.1 Implanting Counterfactual Thinking via Misaligned Keywords

What would be a feasible approach for implanting counterfactual thoughts into LMMs so that they entertain counterfactual alternatives while reasoning? Our answer is to harness counterfactual keywords, which act as an ideal catalyst for eliciting nuanced and diverse counterfactual reasoning [54]. The keywords serve as minimal information, ensuring that the models are not constrained by tight predictive bounds. This approach allows LMMs to fully employ their predictive powers, enabling them to explore a broad spectrum of potential outcomes. In other words, for the given visual inputs, the concise form of counterfactual keywords introduces mismatched elements that induce the model to consider alternative scenarios, which is crucial for maintaining simplicity and precision in complex counterfactual analyses. A single keyword can
This advantage to generate diverse scenarios from a single cue augments the model\u2019s predictive prowess and its understanding into various contexts. Furthermore, by combining these keywords, they can be seamlessly integrated within visual context, rendering them versatile and adaptable to the scenarios. As an initial approach of counterfactual thinking, we instruct GPT-4V [42] to identify factual keywords that accurately represent the current reality depicted in the given image as a starting point. While using factual keywords directly seems straightforward, it is important to acknowledge the limitations of such method. Relying solely on factual keywords can result in a limited range of responses and potentially lead to error accumulation in LMMs, as analyzed in section 5.4. Instead, we add directives to extract the corresponding counterfactual keywords, which embody a visual proximity yet inconsistency with the model\u2019s perceived reality (see Appendix. A for detailed instruction). fig. 2 illustrates the process of counterfactual thinking, which involves generating conditional responses for the targeted LMMs, facilitated by GPT-4V. This is achieved through the use of generated counterfactual keywords, importantly in a training-free manner. 3.2 Assessing Efficacy of Counterfactual Keywords for Hallucination As illustrated in fig. 2, it is evident that counterfactual thinking can lessen the hallucination effects, compared to the baseline LMM model [31]. To further assess the efficacy of counterfactual thinking for the nonsensical phenomena, we conduct a human evaluation study comparing the baseline model with its counterfactual thinking variation using LLaVA-QA90 dataset [33], which includes 90 questions for the 30 images sampled from COCO dataset [9]. The dataset categorizes the questions into three types: conversation, detail description, and complex reasoning. Among these, we randomly selected 15 images from detail description category, where the questions require elaborate explanation for the visual contents in the images, such as \"Describe the following image.\" or \"What is this photo about?\", which is an optimal evaluation to measure existence of hallucination in LMMs beyond closed question forms (e.g., yes/no questions). We have recruited 10 participants from the crowdsourcing platform Prolific and asked them to respond to two consecutive questions for the two different response variations from LLaVA-1.5 (13B Vicuna [11]) \u2014 i.e., the baseline and its counterfactual thinking version. For the initial question, we asked participants to identify and write down any inconsistent elements observed in the model\u2019s description as keywords, which requires a careful comparison for the given image-response pairs. Subsequently, we requested them to assess the degree of consistency between the model\u2019s description and the image (i.e., consistency score), assigning a score on a scale from 1 to 5 (survey details in Appendix. B). The graphical results in fig. 3(a) show that participants have rated the counterfactual thinking version with an average consistency score of 4.13 out of 5, which significantly exceeds that of the baseline model [31] scored 3.3. 
Moreover, when we calculated the average number of hallucinatory keywords 4 \fper image, as requested in the first question, we can observe that, in contrast to the baseline\u2019s average of 1.7 per image, the counterfactual variation effectively reduces hallucination effects to 0.81 per image without the need for additional instruction training. Notably, for the original evaluation of detailed description (GPT-aided evaluation [33]), the implantation of counterfactual keywords also enhanced the results from 74.9 to 77.6 compared with the baseline. 3.3 The Counterfactual Selection is The Matter After validating the effectiveness of counterfactual thinking in the inference phase of LMMs, we realize the necessity of exploration for the significance of selecting counterfactual keywords. Importantly, beyond using a proprietary model for counterfactual extraction, we investigate whether the current dominant open-/close-source LLMs possess the potential for self-generating counterfactual keywords. To assess such potential in a more challenging setting, we conduct an analysis using the in-the-wild dataset, LLaVA-Bench (In-the-Wild) [33]. Baseline C. Thinking CF. Keywords: Consistency Score : Human Evaluation Study: Analyses & Results: 0.0% 31.3% 36.4% 1.0% \u266f3.8 47.1 67.5 53.2 73.0 \u266f6.8 \u266f4.6 \u266f3.5 74.9 77.6 4.1 3.3 \u266f1.7 \u266f0.8 Step 1 Step 2 (a) Prolific Eval. (b)Keyword Select Eval. Figure 3: Evaluation on counterfactual thinking In our comparative study illustrated in fig. 3(b), a significant variance emerges in the self-generation of counterfactual keywords among different models [31, 15, 42], GPT-4V [42] shows a significant level of proficiency, and show almost zero-overlap ratio (overlapped keyword ratio between the factual and counterfactual keywords that model generates) in their counterfactual keyword responses, akin to that of the human result. While the specifics of its training is confidential, it is likely attributed to its diverse training that includes a wide array of hypothetical and counterfactual scenarios found in various texts like literature and speculative fiction. Whereas, LLaVA1.5 [31], and Gemini Pro exhibit substantially weaker capabilities and high overlapped ratio between factual and counterfactual keywords. This difference might be attributed to their training methodologies and limited data diversity. For instance, LLaVA-1.5, being optimized for specific tasks and limited instructions, which may not have been exposed to a broad range of counterfactual contents. Similarly, Gemini Pro [15], despite its advances, may lack sufficient training in speculative contexts necessary for effective counterfactual keyword generation \u2014 as a hypothesis, the size of the models could be a factor2 influencing this capability. Another discussion point is that humans seem to lack of competence in generating counterfactual keywords in this constrained study. For instance, we have observed that human evaluators, after viewing the Mona Lisa, noted down counterfactual keywords such as another famous masterwork, Girl with a Pearl Earring, or Leonardo DiCaprio, instead of Leonardo da Vinci as a word pun. Additionally, due to limited prior knowledge on diverse topics, humans show inconsistency in performing this specific task. Accordingly, in the following section, we posit GPT-4V as ultimate knowledge pool to derive competent counterfactuals and propose a new method of implanting the counterfactual thoughts into LMMs. 
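The keyword-driven procedure of this section can be summarized in code. The sketch below first asks a model for factual and counterfactual keyword lists (the prompt follows the wording shown in Fig. 2) and then conditions the answer on the counterfactual keywords (the inception prompt shown in Fig. 4). The helper lmm_generate(image, prompt) and the simple line-based parsing are hypothetical placeholders for whatever LMM API and output format are actually used.

# Hypothetical sketch of keyword-conditioned counterfactual thinking at inference.
# `lmm_generate(image, prompt) -> str` is an assumed wrapper around an LMM API.

KEYWORD_PROMPT = (
    "Point out the key elements you can see in the image. Then, come up with a list "
    "of alternative counterfactuals that could logically fit in the scene and are "
    "visually plausible. Answer with a 'Factual:' line and a 'Counterfactual:' line."
)

def extract_keywords(image, lmm_generate):
    reply = lmm_generate(image, KEYWORD_PROMPT)
    # Assumed output format: one comma-separated keyword list per labelled line.
    factual_line, counterfactual_line = reply.strip().splitlines()[:2]
    factual = [k.strip() for k in factual_line.split(":", 1)[-1].split(",")]
    counterfactual = [k.strip() for k in counterfactual_line.split(":", 1)[-1].split(",")]
    return factual, counterfactual

def counterfactual_inception(image, question, lmm_generate, keyword_generate=None):
    # Keywords may come from a stronger model (e.g., GPT-4V) or the target LMM itself.
    _, cf_keywords = extract_keywords(image, keyword_generate or lmm_generate)
    prompt = (f"Please use {', '.join(cf_keywords)} that are different from the facts "
              f"as a guide to understand the image well. Then, {question}")
    return lmm_generate(image, prompt)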
4 Optimal Counterfactual Keywords and Inception Preliminary. We have identified that the extracted counterfactual keywords considerably affect the model responses. Before the details of our method, we formulate the earlier counterfactual thinking 2We are unable to directly compare these results due to the lack of API access for Gemini Ultra [15] and budget constraints. We will address this further in section 6 5 \fDual-modality Verification Process (DVP) Prompt: Generate Factual Key. / Counterfactual Key. \ud835\udc41\ud835\udc41 times. Factual: Dog, Painting, Parody, Renaissance, Costume, Landscape BG CF#1: Cat, Drawing, Original, Modern, Casual Clothing, Cityscape BG CF#2: Fox, Sketch, Homage, Baroque, Uniform, Seaside BG CF#3: Wolf, Print, Satire, Medieval, Formal Wear, Forest BG CF#4: ... Step 1. Counterfactual Keywords Extraction Step 2. Visual Verification Step 3. Linguistic Verification Contradiction Score CLIP Score CF. Keywords : Suboptimal CF. Keys : Optimal CF. Keys : Image & F. Keys Counterfactual Inception . Prompt: Please use {CF. Keywords} that are different from the facts as a guide to understand the image well. Then, {describe this photo in detail.} Answer: The image features a painting of a dog, possibly a dachshund, dressed up as the famous painting of the Mona Lisa. The dog is wearing a bandana and is positioned in a way that resembles the famous portrait. The painting is set against a backdrop of mountains, adding to the artistic composition. The dog's attire and the painting's subject matter create a unique and amusing representation of the famous artwork. : Placeholder for CF. Keywords / Question. {placeholder} Analysis on Visual / Linguistic Verification F. Keywords CF. Keywords CLIP Score Contradiction Figure 4: Overall process of DVP to assort competent counterfactual keyword and subsequent Counterfactual Inception for triggering counterfactual thoughts. for better understanding. For the given LMMs M\u03b8, parameterized with \u03b8, our objective is generating output sequences y \u03d5+ for the given anchor t, we designate them as positive concept feature t+. Similarly, if they meet the condition DM[idq, :] < \u03d5\u2212, we categorize them as negative concept feature t\u2212. Here, \u03d5+ and \u03d5\u2212represent the hyper-parameters for positive and negative relaxation, which are both set to 0.3 and 0.1, respectively. Note that, we opt for soft relaxation when selecting positive concept features because the main purpose of our unsupervised setup is to group subdivided concept prototypes into the targeted broader-level categories. In this context, a soft positive bound is advantageous as it facilitates a smoother consolidation process. While, we set tight negative relaxation for selecting negative concept features, which aligns with findings in various studies (Khosla et al., 2020; Kalantidis et al., 2020; Robinson et al., 2021; Wang et al., 2021a) emphasizing that hard negative mining is crucial to advance self-supervised learning. In the end, after choosing in-batch positive and negative concept features t+ and t\u2212for the given anchor t, we sample positive segmentation features y+ and negative segmentation features y\u2212from the concept-matched Y = {y \u2208Rr}hw within the same spatial location as the selected concept features. Through the concept-wise self-supervised learning in Eq. (4), we can then guide the segmentation head S to enhance the likelihood of semantic groups Y . 
We re-emphasize that for the given anchor feature (head), our goal of USS is the feature consolidation corresponding to positive concept features (torso, hand, leg, etc.), and the separation corresponding to negative concept features (sky, water, board, etc.), in order to achieve the targeted broader-level semantic groups (person). Concept Bank: Out-batch Accumulation. Unlike image-level self-supervised learning, unsupervised dense prediction requires more intricate pixel-wise comparisons, as discussed in Zhang et al. (2021). To facilitate this, we establish a concept bank, similar to He et al. (2020) but notably at a pixel-level scale, to accumulate out-batch concept features for additional comparison pairs. Following the same selection criterion as described above, we dynamically sample in-batch features in each training iteration and accumulate them into the concept bank Ybank \u2208Rk\u00d7b\u00d7r for continuously utilizing other informative feature from out-batches, where b represents the maximum number of feature points saved for each concept in M \u2208Rk\u00d7c. We incorporate these additional positive and negative concept features into the sets of y+ and y\u2212for the concept-wise self-supervised learning. Here, creating a concept bank can be seen as incorporating global views into the pixellevel self-supervised learning beyond local views, which also corresponds to considering all feature representations T \u2032 \u2208Rn\u00d7hw\u00d7c (n: total number of images in dataset) for frontdoor adjustment. As a concept bank update strategy, we implement random removal of 50% of the bank\u2019s patch features for each concept prototype, followed by random sampling of 50% new patch features into the concept bank at every training iteration. In addition, to perform stable self-supervised learning, we employ: (i) using log-probability not to converge to near-zero value due to numerous multiplication of p(m|t)=0. Then, enhancing Et\u2208T [p(Y |do(t))] for our main purpose to accomplish unsuperivsed dense prediction can be simplified with increasing Et\u2208T [p(Y |t\u2032, m=q)p(t\u2032)]. When p(t\u2032) is assumed to be uniform distribution, it satisfies Et\u2208T [p(Y |do(t))] \u2191\u221dEt\u2208T [p(Y |t\u2032, m=q)] \u2191so that enhancing the likelihood of semantic groups Y directly leads to increasing causal effect between T and Y even under the presence of U. 6 \fTable 1: Comparing quantitative results and applicability to other self-supervised methods on CAUSE. (a) Experimental results on COCO-Stuff. 
Method (C = 27) Backbone mIoU pAcc IIC (Ji et al., 2019) ResNet18 6.7 21.8 PiCIE (Cho et al., 2021) ResNet18 14.4 50.0 SegDiscover (Huang et al., 2022) ResNet50 14.3 40.1 SlotCon (Wen et al., 2022) ResNet50 18.3 42.4 HSG (Ke et al., 2022) ResNet50 23.8 57.6 ReCo+ (Shin et al., 2022) DeiT-B/8 32.6 54.1 DINO (Caron et al., 2021) ViT-S/16 8.0 22.0 + STEGO (Hamilton et al., 2022) ViT-S/16 23.7 52.5 + HP (Seong et al., 2023) ViT-S/16 24.3 54.5 + CAUSE-MLP ViT-S/16 25.9 66.3 + CAUSE-TR ViT-S/16 33.1 70.4 DINO (Caron et al., 2021) ViT-S/8 11.3 28.7 + ACSeg (Li et al., 2023) ViT-S/8 16.4 + TranFGU (Yin et al., 2022) ViT-S/8 17.5 52.7 + STEGO (Hamilton et al., 2022) ViT-S/8 24.5 48.3 + HP (Seong et al., 2023) ViT-S/8 24.6 57.2 + CAUSE-MLP ViT-S/8 27.9 66.8 + CAUSE-TR ViT-S/8 32.4 69.6 DINO (Caron et al., 2021) ViT-B/8 13.0 42.4 + DINOSAUR (Seitzer et al., 2023) ViT-B/8 24.0 + STEGO (Hamilton et al., 2022) ViT-B/8 28.2 56.9 + CAUSE-MLP ViT-B/8 34.3 72.8 + CAUSE-TR ViT-B/8 41.9 74.9 (c) Self-supervised methods with CAUSE-TR. Dataset Self-Supervised Methods Backbone mIoU pAcc COCO-Stuff DINOv2 (Oquab et al., 2023) ViT-B/14 45.3 78.0 Cityscapes 29.9 89.8 Pascal VOC 53.2 91.5 COCO-Stuff iBOT (Zhou et al., 2022) ViT-B/16 39.5 73.8 Cityscapes 23.0 89.1 Pascal VOC 53.4 89.6 COCO-Stuff MSN (Assran et al., 2022) ViT-S/16 34.1 72.1 Cityscapes 21.2 89.1 Pascal VOC 30.2 84.2 COCO-Stuff MAE (He et al., 2022) ViT-B/16 21.5 59.1 Cityscapes 12.5 82.0 Pascal VOC 25.8 83.7 (b) Experimental results on Cityscapes. Method (C = 27) Backbone mIoU pAcc IIC (Ji et al., 2019) ResNet18 6.4 47.9 PiCIE (Cho et al., 2021) ResNet18 10.3 43.0 SegSort (Hwang et al., 2019) ResNet101 12.3 65.5 SegDiscover (Huang et al., 2022) ResNet50 24.6 81.9 HSG (Ke et al., 2022) ResNet50 32.5 86.0 ReCo+ (Shin et al., 2022) DeiT-B/8 24.2 83.7 DINO (Caron et al., 2021) ViT-S/8 10.9 34.5 + TransFGU (Yin et al., 2022) ViT-S/8 16.8 77.9 + HP (Seong et al., 2023) ViT-S/8 18.4 80.1 + CAUSE-MLP ViT-S/8 21.7 87.7 + CAUSE-TR ViT-S/8 24.6 89.4 DINO (Caron et al., 2021) ViT-B/8 15.2 52.6 + STEGO (Hamilton et al., 2022) ViT-B/8 21.0 73.2 + HP (Seong et al., 2023) ViT-B/8 18.4 79.5 + CAUSE-MLP ViT-B/8 25.7 90.3 + CAUSE-TR ViT-B/8 28.0 90.8 (d) Experimental results on Pascal VOC 2012. Method (C = 21) Backbone mIoU IIC (Ji et al., 2019) ResNet18 9.8 SegSort (Hwang et al., 2019) ResNet101 11.7 DenseCL (Wang et al., 2021b) ResNet50 35.1 HSG (Ke et al., 2022) ResNet50 41.9 MaskContrast (Van Gansbeke et al., 2021) ResNet50 35.0 MaskDistill (Van Gansbeke et al., 2022) ResNet50 48.9 DINO (Caron et al., 2021) ViT-S/8 +TransFGU (Yin et al., 2022) ViT-S/8 37.2 +ACSeg (Li et al., 2023) ViT-S/8 47.1 +CAUSE-MLP ViT-S/8 46.0 +CAUSE-TR ViT-S/8 50.0 DINO (Caron et al., 2021) ViT-B/8 +DeepSpectral (Melas-Kyriazi et al., 2022) ViT-B/8 37.2 +DINOSAUR (Seitzer et al., 2023) ViT-B/8 37.2 +Leopart (Ziegler & Asano, 2022) ViT-B/8 41.7 +COMUS (Zadaianchuk et al., 2023) ViT-B/8 50.0 +CAUSE-MLP ViT-B/8 47.9 +CAUSE-TR ViT-B/8 53.3 probabilities: 1 |Y | log p(Y |t\u2032, m)= 1 |Y | log Q y\u2208Y p(y|t\u2032, m)=Ey\u2208Y [log p(y|t\u2032, m)], and (ii) utilizing exponential moving average (EMA) on teacher-student structure, all of which have been widely used by recent self-supervised learning frameworks such as Grill et al. (2020); Chen et al. (2021); Caron et al. (2021); Zhou et al. (2022); Assran et al. (2022). Please see complete details of Step 2 procedure in Algorithm 2 and Appendix B. 4 EXPERIMENTS 4.1 EXPERIMENTAL DETAILS Inference. 
In inference phase for USS, STEGO (Hamilton et al., 2022) and HP (Seong et al., 2023) equally perform the following six steps: (a) learning C cluster centroids (Caron et al., 2018) from the trained segmentation head output where C denotes the number of categories in dataset, (b) upsampling segmentation head output to the image resolution, (c) finding the most closest centroid indices to the upsampled output, (d) refining the predicted indices through Fully-connected Conditional Random Field (CRF) (Kr\u00e4henb\u00fchl & Koltun, 2011) with 10 steps, (e) Hungarian Matching (Kuhn, 1955) for alignment with CRF indices and true labels, and (f) evaluating mean of intersection over union (mIoU) and pixel accuracy (pAcc). We follow the equal six steps with Sema of CAUSE. Implementation. Following recent works, we adopt DINO as an encoder baseline and freeze it, where the feature dimension c of T depends on the size of ViT: small (c = 384) or base (c = 768). For hyper-parameter in the clusterbook, the number of concept k in M is set to 2048 to encompass concept prototypes from pre-trained features as much as possible. During the self-supervised learning, the number of feature accumulation b in concept bank is set to 100. In addition, output dimension r of segmentation head is set to 90 based on the dimension analysis (Koenig et al., 2023). For the segmentation head, we use two variations: (i) CAUSE-MLP with simple MLP layers as in Hamilton et al. (2022) and (ii) CAUSE-TR with a single layer transformer. Please see details in Appendix B. 7 \fTable 2: Comparing linear probing performance. COCO-Stuff Cityscapes Method Baseline mIoU pAcc mIoU pAcc DINO (Caron et al., 2021) ViT-S/8 33.9 68.6 22.8 84.6 +HP (Seong et al., 2023) ViT-S/8 42.7 75.6 30.6 91.2 +CAUSE-MLP ViT-S/8 46.4 77.3 35.2 92.1 +CAUSE-TR ViT-S/8 47.2 78.8 37.2 93.5 DINO (Caron et al., 2021) ViT-B/8 29.4 66.8 23.0 84.2 +STEGO (Hamilton et al., 2022) ViT-B/8 41.0 76.1 26.8 90.3 +CAUSE-MLP ViT-B/8 48.3 79.8 38.2 93.4 +CAUSE-TR ViT-B/8 52.3 80.1 40.2 94.5 Table 3: Results of CAUSE with larger categories. Method Backbone mIoU pAcc COCO-81 MaskContrast (Van Gansbeke et al., 2021) ResNet50 3.7 8.8 TransFGU (Yin et al., 2022) ViT-S/8 12.7 64.3 CAUSE-MLP ViT-S/8 19.1 78.8 CAUSE-TR ViT-S/8 21.2 75.2 COCO-171 IIC (Ji et al., 2019) ResNet50 2.2 15.7 PiCIE (Cho et al., 2021) ResNet50 5.6 29.8 TransFGU (Yin et al., 2022) ViT-S/8 12.0 34.3 CAUSE-MLP ViT-S/8 10.6 44.9 CAUSE-TR ViT-S/8 15.2 46.6 Image Label STEGO HP CAUSE ReCo+ sidewalk parking car bus vegetation sky terrain road building pole person bicycle rider traffic sign traffic light truck wall ignored Figure 4: Qualitative comparison of unsupervised semantic segmentation for Cityscapes dataset. Datasets. We mainly benchmark CAUSE with three datasets: COCO-Stuff (Caesar et al., 2018), Cityscapes (Cordts et al., 2016), and Pascal VOC (Everingham et al., 2010). COCO-Stuff is a scene texture segmentation dataset as subset of MS-COCO 2017 (Lin et al., 2014) with full pixel annotations of common Stuff and Thing categories. Cityscapes is an urban street scene parsing dataset with annotations. Following Ji et al. (2019); Cho et al. (2021), we use the curated 27 mid-level categories from label hierarchy for COCO-Stuff and Cityscapes. As an object-centric USS, we follow Van Gansbeke et al. (2022) and report the results of total 21 classes for PASCAL VOC. 4.2 VALIDATING CAUSE Quantitative & Qualitative Results. We validate CAUSE by comparing with recent USS frameworks using mIoU and pAcc on various datasets. 
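The six evaluation steps listed in Sec. 4.1 (cluster centroids, upsampling, nearest-centroid assignment, CRF refinement, Hungarian matching, and mIoU/pAcc) can be sketched as below; the CRF refinement is omitted for brevity, and the flattened-array interface is our own simplification rather than the released evaluation code.

# Sketch of the unsupervised-segmentation evaluation protocol (CRF step omitted).
# pred_ids: (N,) nearest-centroid cluster index per pixel after upsampling.
# gt_ids:   (N,) ground-truth label per pixel; both range over `num_classes`.
import numpy as np
from scipy.optimize import linear_sum_assignment

def evaluate_uss(pred_ids, gt_ids, num_classes):
    # Confusion matrix between predicted clusters (rows) and true labels (cols).
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (pred_ids, gt_ids), 1)

    # Hungarian matching: permute cluster indices to best align with true labels.
    rows, cols = linear_sum_assignment(conf, maximize=True)
    mapping = dict(zip(rows.tolist(), cols.tolist()))
    aligned = np.vectorize(mapping.get)(pred_ids)

    pacc = float((aligned == gt_ids).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(aligned == c, gt_ids == c).sum()
        union = np.logical_or(aligned == c, gt_ids == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)), pacc  # (mIoU, pixel accuracy)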
Table 1 (a) and (b) show CAUSE generally outperforms HSG (Ke et al., 2022), TransFGU (Yin et al., 2022), STEGO (Hamilton et al., 2022), HP (Seong et al., 2023), and ReCo+ (Shin et al., 2022), and our method achieves state-of-the-art results without any external information. Table 2 shows another superior quantitative results of CAUSE for linear probing than baselines, which indicates competitive dense representation quality learned in unsupervised manners. Furthermore, Fig. 1 and Fig. 4 illustrate CAUSE effectively assembles different level of granularity (head, torso, hand, leg, etc.), into one semantically-alike group (person). Please see additional qualitative results, analyses, and failure cases in Appendix C. Applicability to Object-centric Semantic Segmentation. Preceding works, rooted in objectcentric semantic segmentation models (Van Gansbeke et al., 2021; Yin et al., 2022; Zadaianchuk et al., 2023), initially generate pseudo-labels that differentiate between foreground (objects) and background. This process is typically accomplished by using Mask R-CNN (He et al., 2017) and DeepLabv3 (Chen et al., 2017), or saliency maps from DeepUSPS (Nguyen et al., 2019). In contrast, STEGO and HP abstains from relying on any external information beyond self-supervised knowledge. Therefore, they inherently lack the capability to segment an image into two broad categories: objects and a single background category, making them unsuitable for direct application to object-centric semantic segmentation. However, we highlight that simply adjusting smoother positive relaxation in CAUSE enables to discern background from foreground without any external information. The results of Pascal VOC 2012 is shown in Table 1(d), and its figures are illustrated in Appendix C. 8 \f(a) Log scale of IoU results for each categories in COCO-Stuff (Black: Thing / Gray: Stuff) mIoU pAcc coco city mIoU pAcc coco city mIoU pAcc coco city (b) Positive Relaxation \u03d5+ (c) Negative Relaxation \u03d5\u2212 (d) Concept number k in M Figure 5: Additional experimental for in-depth analysis and ablation studies of CAUSE-TR. Table 4: Quantitative ablation results by controlling the other three factors of CAUSE-TR on ViT-B/8. (%) CAUSE-MLP CAUSE-TR COCO-Stuff Cityscapes COCO-Stuff Cityscapes Method of Concept Discretization Bank CRF mIoU pAcc mIoU pAcc mIoU pAcc mIoU pAcc Maximizing Modularity (Newman, 2006) \u2717 \u2717 24.9 54.1 15.8 75.6 27.8 57.3 17.3 79.2 \u2713 \u2717 31.3 69.0 25.3 89.5 39.5 72.5 28.8 90.7 \u2717 \u2713 27.5 57.9 17.3 78.8 30.3 60.1 19.6 82.1 \u2713 \u2713 34.3 72.8 25.7 90.3 41.9 74.9 28.0 90.8 K-Means++ (Arthur & Vassilvitskii, 2007) \u2713 \u2713 27.8 64.7 18.9 81.3 33.7 62.7 20.4 83.2 Spectral Clustering (Von Luxburg, 2007) \u2713 \u2713 30.7 65.1 20.8 83.5 35.9 66.7 22.8 84.1 Agglomerative Clustering (M\u00fcllner, 2011) \u2713 \u2713 31.4 67.9 22.2 84.0 37.7 68.1 24.5 86.3 Ward-Hierarchical Clustering (Murtagh & Legendre, 2014) \u2713 \u2713 31.8 67.5 22.9 84.7 37.5 68.2 24.7 87.0 Generalization Capability We first incorporate alternative self-supervised methods as our baseline, replacing DINO (Caron et al., 2021). In Table 1(c), we present an overview of adaptability in CAUSE across DINOv2 (Oquab et al., 2023), iBOT (Zhou et al., 2022), MSN (Assran et al., 2022), and MAE (He et al., 2022). 
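As a reference point for the relaxation ablations reported next (Fig. 5(b)-(c)), the threshold-based selection of positive and negative concept features described earlier (DM[id_q, :] > phi+ for positives, DM[id_q, :] < phi- for negatives, with phi+ also controlling how aggressively foreground is merged in the object-centric setting) can be sketched as follows; treating DM as a [k x k] concept-affinity matrix and the flattened tensor shapes are our assumptions.

# Sketch of relaxation-threshold concept selection (shapes are assumptions).
# dm:          (k, k) affinity between discretized concept prototypes in M.
# concept_ids: (B*H*W,) concept index assigned to every patch feature.
# feats:       (B*H*W, r) segmentation-head features Y.
import torch

def select_pos_neg(dm, concept_ids, feats, anchor_idx, phi_pos=0.3, phi_neg=0.1):
    anchor_concept = concept_ids[anchor_idx]   # id_q of the anchor patch
    affinity = dm[anchor_concept]              # row DM[id_q, :]

    # Soft positive / tight negative relaxation around the anchor concept.
    pos_concepts = (affinity > phi_pos).nonzero(as_tuple=True)[0]
    neg_concepts = (affinity < phi_neg).nonzero(as_tuple=True)[0]

    pos_mask = torch.isin(concept_ids, pos_concepts)
    neg_mask = torch.isin(concept_ids, neg_concepts)
    return feats[pos_mask], feats[neg_mask]    # y+, y- for the anchor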
Furthermore, we extend the number of clusters in CAUSE by utilizing MSCOCO 2017 (Lin et al., 2014), which comprises 80 object categories and one background category: (object-centric) COCO-81, and 171 categories encompassing both Stuff and Thing categories: COCO171. Note that, positive \u03d5+ relaxation is set to 0.4 and 0.55 respectively. Table 3 highlights CAUSE retains superior performances for USS even with larger categories. Especially, TransFGU (Yin et al., 2022) used Grad-CAM (Selvaraju et al., 2017) for generating category-specific pseudo-labels, thereby keeping consistent mIoU performance compared with COCO-81 and COCO-171. Nonetheless, CAUSE has a great advantage to pAcc especially in COCO-171 without any external information. Categorical Analysis. To demonstrate that CAUSE can effectively address the targeted level of semantic grouping, we closely examine IoU results for each category. By validating the IoU results on a logarithmic scale in Fig. 5(a), we can observe that STEGO and HP struggle with segmenting Thing categories in COCO-Stuff, which demands fine-grained discrimination among concepts within complex scenes. In contrast, CAUSE consistently exhibits superior capability in segmenting concepts across most categories. These results are largely attributed to the causal design aspects, including the construction of the concept clusterbook and concept-wise self-supervised learning among concept prototypes. Beyond the quantitative results, it is important to highlight again that CAUSE exhibits significantly improved visual quality in achieving targeted level of semantic groupings than baselines as in Fig. 1 and Fig. 4. We include further discussions and limitations in Appendix D. Ablation Studies. We conduct ablation studies on six factors of CAUSE to identify where the effectiveness comes from as in Fig. 5 and Table 4: (i) positive \u03d5+ and (ii) negative relaxation \u03d5\u2212, (iii) the number of concepts k in M, (iv) the effects of concept bank Ybank and (v) fully-connected CRF, and (vi) discretizing methods for concept clusterbook M. Through the empirical results, we first observe the appropriate relaxation parameter plays a crucial role in determining the quality of self-supervised learning. Furthermore, unlike semantic representation-level pre-training (Bao et al., 2022), we find that the number of discretized concepts saturates after reaching 2048 for clustering. We also highlight the effects of concept bank, CRF, and modularity maximization for effective USS. 9 \f5" + }, + { + "url": "http://arxiv.org/abs/2308.14005v2", + "title": "Calibrating Panoramic Depth Estimation for Practical Localization and Mapping", + "abstract": "The absolute depth values of surrounding environments provide crucial cues\nfor various assistive technologies, such as localization, navigation, and 3D\nstructure estimation. We propose that accurate depth estimated from panoramic\nimages can serve as a powerful and light-weight input for a wide range of\ndownstream tasks requiring 3D information. While panoramic images can easily\ncapture the surrounding context from commodity devices, the estimated depth\nshares the limitations of conventional image-based depth estimation; the\nperformance deteriorates under large domain shifts and the absolute values are\nstill ambiguous to infer from 2D observations. By taking advantage of the\nholistic view, we mitigate such effects in a self-supervised way and fine-tune\nthe network with geometric consistency during the test phase. 
Specifically, we\nconstruct a 3D point cloud from the current depth prediction and project the\npoint cloud at various viewpoints or apply stretches on the current input image\nto generate synthetic panoramas. Then we minimize the discrepancy of the 3D\nstructure estimated from synthetic images without collecting additional data.\nWe empirically evaluate our method in robot navigation and map-free\nlocalization where our method shows large performance enhancements. Our\ncalibration method can therefore widen the applicability under various external\nconditions, serving as a key component for practical panorama-based machine\nvision systems. Code is available through the following link:\n\\url{https://github.com/82magnolia/panoramic-depth-calibration}.", + "authors": "Junho Kim, Eun Sun Lee, Young Min Kim", + "published": "2023-08-27", + "updated": "2024-02-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Acquiring depth maps of the surrounding environment is a crucial step for AR/VR and robotics applications, as the depth maps serve as building blocks for mapping and localization. While dense LiDAR or RGB-D scanning [1, 2, 3, 4, 5] has been widely used for depth acFigure 1: Motivation and overview of our approach. Panoramic perception enables efficient navigation due to the large field of view (top). Nevertheless, the performance drops due to the gaps between the training dataset with upright cameras in mediumsized rooms and the deployment scenarios with limited data and various domain shifts. The proposed solution suggests test-time training using geometric consistency to mitigate the gap (bottom). quisition, the methods are often computationally expensive or require costly hardware. Panoramic depth estimation [6, 7, 8, 9, 10, 11, 12, 13], on the other hand, enables quick and cost-effective depth computation. It outputs a dense depth map from a single neural network inference arXiv:2308.14005v2 [cs.CV] 2 Feb 2024 \fgiven only 360\u25e6camera input, which is becoming more widely accessible [14, 15]. Further, the large field of view of panoramic depth maps can model the comprehensive 3D context from a single image capture. The holistic view provides ample visual cues for robust localization, and allows efficient 3D mapping. An illustrative example is shown in Figure 1a, where a robot navigation agent equipped with panorama view observes larger areas and builds more comprehensive grid map than the agent with perspective view when deployed for the same trajectory. While existing panoramic depth estimation methods can estimate highly accurate depth maps in trained environments [11, 8, 7, 6], their performances often deteriorate when deployed in unseen environments with large domain gaps. For example, as shown in Figure 1b, depth estimation networks trained on upright panorama images in mediumsized rooms perform poorly in images containing large camera rotation or captured in large rooms. Such scenarios are highly common in AR/VR or robotics applications, yet it is infeasible to collect large amounts of densely annotated ground-truth data for panorama images or perform data augmentations to realistically and thoroughly cover all the possible adversaries. Further, while numerous unsupervised domain adaptation methods have been proposed for depth estimation [16, 17, 18, 19], most of them mainly consider sim-to-real gap minimization and require the labelled training dataset for adaptation which is infeasible for memory-limited applications. 
In this paper, we propose a quick and effective calibration method for panoramic depth estimation in challenging environments with large domain shifts. Given a pretrained depth estimation network, our method applies testtime adaptation [20, 21, 22] on the network solely using objective functions derived from test data. Conceptually, we are treating depth estimation networks as sensors that output depth maps from images, which then makes the process similar to \u2018calibration\u2019 in depth or LiDAR sensing literature for accurate measurements. Our resulting scheme is flexibly applicable in either online or offline manner adaptation. As shown in Figure 1c, the light-weight training calibrates the network towards making more accurate predictions in the new environment. Our calibration scheme consists of two key components that effectively utilize the holistic spatial context uniquely provided by panoramas. First of all, our method operates using training objectives that impose geometric consistencies from novel view synthesis and panorama stretching. To elaborate, as shown in Figure 2, we leverage the fullsurround 3D structure available from panoramic depth estimation and generate synthetic panoramas. The training objectives then minimize the geometric discrepancy between depth estimations from the synthesized panoramas and the original view. Second, we propose light-weight data augmentation to cope with offline scenarios where only a limited amount of test-time training data is available. Specifically, we augment the test data by applying arbitrary pose shifts or synthetic stretches, similar to the techniques used for the training objectives. Since our calibration method aims at adapting the network during the test phase using geometric consistencies, it is compute and memory efficient while being able to handle a wide variety of domain shifts. Our method does not require the computational demands of additional network pre-training [21, 23], or memory to store the original training dataset during adaptation [16, 18, 17, 24]. Nevertheless, our method shows large amounts of performance enhancements when tested in challenging domain shifts such as low lighting or room-scale change. Further, due to the lightweight formulation, our method could easily be applied to numerous downstream tasks in localization and mapping. We experimentally verify that our calibration scheme effectively improves performance in two exemplary tasks, namely map-free localization and robot navigation. To summarize, our key contributions are as follows: (i) a novel testtime adaptation method for calibrating panoramic depth estimation, (ii) a data augmentation technique to handle lowresource adaptation scenarios, and (iii) an effective application of our calibration method on downstream mapping and localization tasks. 2. Related Work Monocular Depth Estimation Following the pioneering work of Eigen et al. [25], many existing works focus on developing neural network models that output depth maps from image input [26, 27, 28, 29, 30, 31]. Recent approaches such as MiDAS [26] or DPT [27] can make highly accurate depth predictions from images due to extensive training on large depth-annotated datasets [32, 33, 34]. As a result, there have been numerous applications in localization and mapping that leverage monocular depth estimation. 
For example, map-free visual localization [35] localizes the camera position using maps built from monocular depth estimation, which is highly efficient compared to building a 3D map by Structure-from-Motion. Another example is robot navigation methods that directly estimate occupancy grid maps from input images [36, 37, 38], which can be implicitly regarded as monocular depth estimation. Compared to perspective images, monocular depth estimation using panorama images has been relatively understudied due to the limited amount of available data. While recent works [11, 7, 9, 6, 12, 13] have demonstrated accurate depth estimation in trained environments, their performance is known to deteriorate when tested on new datasets with varying lighting or depth distributions [10]. Such performance discrepancies are more noticeable for panoramic images since there are fewer depth-annotated images available compared to perspective images.
We extensively evaluate our method against existing domain adaptation techniques in Section 5, where our method outperforms the tested baselines in various depth estimation scenarios. Figure 2: Description of the proposed test-time training objectives. 3. Method Given a panoramic depth estimation network $F_\Theta(\cdot)$ trained on the source domain $S$, the objective of our calibration scheme is to adapt the network to a new, unseen target domain $T$ during the test phase. Our method can perform adaptation in both an online and an offline manner: in the online case, the network is simultaneously optimized and evaluated, whereas in the offline case the network is first optimized using samples from the target domain and then evaluated with another set of target domain samples. As shown in Figure 2, our method leverages training objectives that impose geometric consistencies between the synthesized views generated from the full-surround depth predictions (Section 3.1). To further cope with offline adaptation scenarios where only a small number of images are available for training, we propose to apply data augmentation based on panorama synthesis (Section 3.2). 3.1. Test-Time Training Objectives Given a panorama image $I \in \mathbb{R}^{H \times W \times 3}$, the depth estimation network outputs a depth map $\hat{D} = F_\Theta(I) \in \mathbb{R}^{H \times W \times 1}$. The test-time adaptation enforces consistencies between depth estimations of additional input images synthesized from the current predictions, and eventually achieves stable prediction under various environment setups. The test-time training objective is given as $L = L_S + L_C + L_N$, (1) where $L_S$ is the stretch loss, $L_C$ is the Chamfer loss, and $L_N$ is the normal loss. Stretch Loss Stretch loss aims to tackle the depth distribution shifts that commonly occur in panoramic depth estimation by imposing consistencies between depth predictions made at different panorama stretches. Panoramic depth estimation models make large errors when confronted with images captured in scenes with drastic depth distribution changes [10]. The key intuition for stretch loss is to make the depth estimation network behave as if it were predicting in a room with depth distributions similar to the trained source domain, through the panoramic stretching shown in Figure 2a. The panorama stretching operation [55, 56] warps the input panorama $I$ to a panorama captured from the same 3D scene but stretched along the x, y axes by a factor of $k$. For a panorama image $I$ this can be expressed as $S^k_{\text{img}}(I)[u, v] = I[u, \frac{H}{\pi} \arctan(\frac{1}{k} \tan(\frac{\pi v}{H}))]$, (2) where $I[u, v]$ is the color value at coordinate $(u, v)$ and $S^k_{\text{img}}(\cdot)$ is the $k$-times stretching function for images. A similar operation can be defined for depth maps, namely $S^k_{\text{dpt}}(D)[u, v] = \kappa(v) D[u, \frac{H}{\pi} \arctan(\frac{1}{k} \tan(\frac{\pi v}{H}))]$, (3) where $\kappa(v) = \sqrt{k^2 \sin^2(\pi v / H) + \cos^2(\pi v / H)}$ is the correction term that accounts for the depth value changes due to stretching. Stretch loss enforces depth predictions made at large scenes to follow predictions made at contracted scenes (using $k < 1$), and predictions at small scenes to follow predictions made at enlarged scenes (using $k > 1$). The distinction between large and small scenes is made by thresholding the average depth value using thresholds $\delta_1, \delta_2$.
Formally, this can be expressed as
$$L_S = \begin{cases} \sum_{k \in K_s} \| \hat{D} - S^{1/k}_{\text{dpt}}(F_\Theta(S^k_{\text{img}}(I))) \|^2 & \text{if } \mathrm{avg}(\hat{D}) < \delta_1 \\ \sum_{k \in K_l} \| \hat{D} - S^{1/k}_{\text{dpt}}(F_\Theta(S^k_{\text{img}}(I))) \|^2 & \text{if } \mathrm{avg}(\hat{D}) > \delta_2 \\ 0 & \text{otherwise,} \end{cases} \quad (4)$$
where $\mathrm{avg}(\hat{D})$ is the pixel-wise average of the depth map $\hat{D} = F_\Theta(I)$, and $K_l = \{\sigma, \sigma^2\}$, $K_s = \{1/\sigma, 1/\sigma^2\}$ are the stretch factors used for contracting and enlarging panoramas. In our implementation we set $\delta_1 = 1$, $\delta_2 = 2.5$, $\sigma = 0.8$. Chamfer and Normal Loss While the stretch loss guides depth predictions to have a coherent scale, the Chamfer and normal losses encourage depth predictions to be geometrically consistent at a finer level. The loss functions operate by generating synthetic views at small random pose perturbations from the original viewpoint, and minimizing discrepancies between depth predictions made at the synthetic views and the original view. First, the Chamfer loss minimizes the Chamfer distance between depth predictions made at different poses. Given a panoramic depth map $D$, let $B(D): \mathbb{R}^{H \times W \times 1} \rightarrow \mathbb{R}^{HW \times 3}$ denote the back-projection function that maps each pixel's depth value $D[u, v]$ to a point in 3D space $D[u, v] \cdot S[u, v]$, where $S[u, v] \in \mathbb{R}^3$ is the point on the unit sphere corresponding to the panorama image coordinate $(u, v)$. Further, let $W(I, D; R, t)$ denote the warping function that outputs an image rendered at an arbitrary pose $R, t$, as shown in Figure 2b. Then, the Chamfer loss is given as $L_C = \sum_{x \in B(\hat{D})} \min_{y \in B(D_{\text{warp}})} \| \tilde{R}x + \tilde{t} - y \|_2^2$, (5) where $D_{\text{warp}} = F_\Theta(W(I, \hat{D}; \tilde{R}, \tilde{t}))$ is the depth prediction made from the warped image at a randomly chosen pose $\tilde{R}, \tilde{t}$ near the origin. We choose $\tilde{R}$ to be a random rotation around the z-axis and $\tilde{t}$ as a random translation sampled from $[-0.5, 0.5]^3$. Second, the normal loss imbues an additional layer of geometric consistency by aligning the normal vectors of the depth maps. Let $N(x): \mathbb{R}^3 \rightarrow \mathbb{R}^3$ denote the normal estimation function that uses ball queries around the point $x$ to compute the normal vector [57, 58]. The normal loss is then given as $L_N = \sum_{x \in B(\hat{D})} \big( \tilde{R}N(x) \cdot ( \tilde{R}x + \tilde{t} - \operatorname{argmin}_{y \in B(D_{\text{warp}})} \| \tilde{R}x + \tilde{t} - y \|_2 ) \big)^2$. (6) Conceptually, the normal loss minimizes the distance between planes spanned by points in the original depth map $\hat{D}$ and the nearest points in the warped image's depth map $D_{\text{warp}}$. Note that this is similar in spirit to the loss functions used in point-to-plane ICP [59]. We further verify the effectiveness of each loss function in Section 5. 3.2. Data Augmentation We propose a data augmentation scheme that increases the amount of test-time training data for offline adaptation scenarios where only a small number of target domain samples are available in a real-world deployment. For example, a robot agent may need to quickly adapt to a new environment after observing a few samples, or AR/VR applications may want to quickly build an accurate 3D map of the environment from a small set of images. Key to our augmentation scheme is panorama synthesis from stretching and novel view generation.
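To make the panorama synthesis primitives above concrete, the following is a minimal NumPy sketch of the stretching operators of Eqs. (2)-(3), assuming a standard equirectangular layout in which the vertical image coordinate maps linearly to the polar angle. The function names and the nearest-neighbor row resampling are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def stretch_panorama(img: np.ndarray, k: float) -> np.ndarray:
    """Equirectangular stretch S^k_img (Eq. 2): resample rows so the scene
    appears stretched by a factor k along the horizontal axes. img is (H, W, C) or (H, W)."""
    H = img.shape[0]
    v = np.arange(H) + 0.5                         # output row centers
    theta = np.pi * v / H                          # polar angle in (0, pi)
    # arctan((1/k) * tan(theta)) with correct quadrant handling
    theta_src = np.arctan2(np.sin(theta), k * np.cos(theta))
    v_src = np.clip((H * theta_src / np.pi - 0.5).round().astype(int), 0, H - 1)
    return img[v_src]                              # nearest-neighbor row lookup

def stretch_depth(depth: np.ndarray, k: float) -> np.ndarray:
    """Depth-map stretch S^k_dpt (Eq. 3): same row resampling plus the
    per-row correction factor kappa(v). depth is (H, W)."""
    H = depth.shape[0]
    theta = np.pi * (np.arange(H) + 0.5) / H
    kappa = np.sqrt(k**2 * np.sin(theta)**2 + np.cos(theta)**2)
    return kappa[:, None] * stretch_panorama(depth, k)
```

The same routines can be applied either to the RGB panorama or to its predicted depth map when forming the stretch loss of Eq. (4).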
Given a single panorama image $I$ and its associated depth prediction $\hat{D} = F_\Theta(I)$, the augmentation scheme $A(I, \hat{D})$ for generating a new panorama is given as
$$A(I, \hat{D}) = \begin{cases} W(I, \hat{D}; \tilde{R}, \tilde{t}) & \text{if } \mathrm{avg}(\hat{D}) \in [\delta_1, \delta_2] \\ S^k_{\text{img}}(I) & \text{otherwise,} \end{cases} \quad (7)$$
where $\tilde{R}, \tilde{t}$ are random poses sampled near the origin and $k$ is randomly sampled from $U(\sigma^2, \sigma)$ if $\mathrm{avg}(\hat{D}) > \delta_2$ and from $U(1/\sigma, 1/\sigma^2)$ if $\mathrm{avg}(\hat{D}) < \delta_1$. The values for $\delta_1, \delta_2, \sigma$ are identical to those used in Section 3.1. Figure 3: Robot agent with panoramic perception (top) and application of panoramic depth calibration to the robot navigation task (bottom); panels: (a) robot agent setup, (b) test-time adaptation for robot navigation. Conceptually, our augmentation scheme generates novel views at random poses if the average depth value is within the range $[\delta_1, \delta_2]$ and applies stretching otherwise, where the scene size determines the stretch factor. Despite the simple formulation, our augmentation scheme enables test-time adaptation using only a small number of images (in the extreme case, even a single training sample), and we further demonstrate its effectiveness by illustrating its applications in Section 4. 4. Applications In this section, we show applications of our panoramic depth calibration on two downstream tasks: robot navigation and map-free localization. 4.1. Robot Navigation Navigation Agent Setup We assume a navigation agent equipped with a panorama camera and a noisy odometry sensor, similar to the setup of recently proposed navigation agents [36, 37, 38]. As shown in Figure 3a, at each time step $t$ the navigation agent first creates a local 2D occupancy grid map $\hat{L}_t$ based on the depth estimation results from the panorama, namely $F_\Theta(I_t)$. Then, the pose estimation network observes the previous and current local maps $(\hat{L}_{t-1}, \hat{L}_t)$ along with the noisy odometry sensor reading $o_t$ to produce a pose estimate $\hat{p}_t = C_\Phi(\hat{L}_{t-1}, \hat{L}_t, o_t)$. The pose estimate is further used to stitch the local map $\hat{L}_t$ onto the previous global map $G_{t-1}$ to form an updated global map $G_t$. Finally, the policy network takes the global grid map and the current image observation as input to output an action, namely $a_t = P_\Psi(G_t, I_t)$, where the possible actions are to move forward by 0.25m or turn left or right by 10°. Depth Calibration for Robot Navigation We begin each navigation episode by applying our test-time training to calibrate the panoramic depth estimates from a small number of collected visual observations. As shown in Figure 3b, the agent caches the first $N_{\text{fwd}}$ panoramic views seen after it makes a forward action. Then, applying the data augmentation from Section 3.2 $N_{\text{aug}}$ times for each cached image, the agent performs test-time training with the resulting set of $N_{\text{fwd}} \times N_{\text{aug}}$ images. Once the calibration is completed, the agent uses the updated depth estimation network to create the global map and compute the policy for the remaining steps of the episode. Note that the calibration process for navigation terminates very quickly, with the total number of training steps for each episode being smaller than 300.
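As a rough illustration of the augmentation scheme in Eq. (7), the sketch below selects between novel-view warping and stretching based on the average predicted depth. The helpers warp_to_pose and stretch_panorama are assumed to implement the W(.) and S^k_img(.) operators described above, and the pose sampling ranges (a yaw rotation and a translation in [-0.5, 0.5]^3, mirroring the training objectives) are assumptions rather than the paper's exact settings.

```python
import numpy as np

def augment_panorama(img, depth_pred, warp_to_pose, stretch_panorama,
                     delta1=1.0, delta2=2.5, sigma=0.8):
    """One draw from the augmentation scheme A(I, D_hat) of Eq. (7)."""
    avg_depth = float(depth_pred.mean())
    if delta1 <= avg_depth <= delta2:
        # medium-sized scene: synthesize a novel view at a small random pose
        angle = np.random.uniform(-np.pi, np.pi)          # assumed yaw range
        R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                      [np.sin(angle),  np.cos(angle), 0.0],
                      [0.0,            0.0,           1.0]])  # rotation about z
        t = np.random.uniform(-0.5, 0.5, size=3)          # small translation
        return warp_to_pose(img, depth_pred, R, t)
    # large scene -> contract (k < 1), small scene -> enlarge (k > 1)
    if avg_depth > delta2:
        k = np.random.uniform(sigma**2, sigma)
    else:
        k = np.random.uniform(1.0 / sigma, 1.0 / sigma**2)
    return stretch_panorama(img, k)
```

In the navigation setting, each of the N_fwd cached views would be passed through such a routine N_aug times to form the small test-time training set.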
Nevertheless, the quick calibration results in significant performance improvements for various downstream navigation tasks, which is further verified in Section 5. 4.2. Map-Free Visual Localization Localization Process Overview First introduced by Arnold et al. [35], map-free visual localization aims at finding the camera pose with respect to a 3D scene where the conventional Structure-from-Motion (SfM) mapping process is omitted, hence the name 'map-free'. Instead, the 3D scene is represented using a 3D point cloud obtained from monocular depth estimation, which in turn greatly reduces the computational burden required for obtaining SfM maps. We adapt the original map-free localization framework designed for perspective cameras to panoramas, and validate our calibration scheme on the task. As shown in Figure 4, given a single reference image $I_{\text{ref}}$ and its associated depth prediction $\hat{D}_{\text{ref}} = F_\Theta(I_{\text{ref}})$, map-free localization begins by generating a 3D map from the depth map, namely $B(\hat{D}_{\text{ref}})$. Then we generate synthetic panoramas for $N_t \times N_r$ poses $\{(R_i, t_i)\}$ and extract global/local feature descriptors, where the $N_t$ translations and $N_r$ rotations are uniformly sampled from the bounding box of $B(\hat{D}_{\text{ref}})$. During localization, global and local descriptors are first similarly extracted for the query image $I_q$. Then, the top-$K$ poses are chosen from the pool of $N_t \times N_r$ poses whose global descriptors are closest in Euclidean distance to that of the query image, $f_q$. The selected poses are further ranked with local feature matching using SuperGlue [60], where the candidate pose with the largest number of matches is refined for the final prediction. Here, for each local feature match between the query image and the synthetic view we retrieve the corresponding 3D point from the point cloud $B(\hat{D}_{\text{ref}})$ and apply PnP-RANSAC, as shown in Figure 4. Figure 4: Description of the map-free localization task (top) and its test-time adaptation pipeline (bottom). Figure 5: Visualization of the nine domain changes: (a) low lighting, (b) white balance, (c) gamma, (d) speckle noise, (e) Gaussian noise, (f) salt & pepper noise, (g) large scene, (h) small scene, (i) rotation. Depth Calibration for Map-Free Localization For each 3D scene, we assume only a small handful of images (between 1 and 5) are available for adaptation, to reflect AR/VR application scenarios where the user wants to quickly localize in a new environment. Depth calibration is then applied to fine-tune the depth estimator, where we increase the number of training samples using data augmentation similar to robot navigation. After calibration, the modified network is applied to create a 3D map from an arbitrary reference image captured in the same environment, which can then be used for localizing new query images. 5. Experimental Results We first evaluate how our calibration scheme enhances depth prediction (Section 5.1). We then validate its effect on the aforementioned applications, namely robot navigation (Section 5.2) and map-free visual localization (Section 5.3). Implementation Details We implement our method using PyTorch [61], and use the pre-trained UNet from Albanis et al. [10] as the original network for adaptation. The network is trained using the depth-annotated panorama images from the Matterport3D dataset [62]. For test-time training, we optimize the loss function from Equation 1 using Adam [63] for 1 epoch, with a learning rate of $10^{-4}$ and a batch size of 4.
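A minimal sketch of the offline test-time calibration loop implied by these settings is given below; loss_fn is assumed to evaluate the combined objective of Eq. (1) (internally running the network on stretched and warped variants of each panorama) and is not part of any released code, so treat the structure as illustrative only.

```python
import torch

def calibrate(model, panoramas, loss_fn, lr=1e-4, batch_size=4, epochs=1):
    """Offline test-time calibration with the reported settings
    (Adam, 1 epoch, learning rate 1e-4, batch size 4).
    `panoramas` is a list of (3, H, W) tensors from the target domain."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for i in range(0, len(panoramas), batch_size):
            batch = torch.stack(panoramas[i:i + batch_size])  # (B, 3, H, W)
            loss = loss_fn(model, batch)   # L = L_S + L_C + L_N of Eq. (1)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    model.eval()
    return model
```

Online adaptation follows the same update rule, interleaving optimization steps with evaluation on incoming frames.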
In all our experiments, we use the RTX 2080Ti GPU for acceleration. Additional details about the implementation are deferred to the supplementary material. Datasets Unlike the common practice of panoramic depth estimation [11, 7, 8], where the train/test splits are created from the same dataset, we consider entirely different datasets from the training dataset for evaluation. Specifically, we use the Stanford 2D-3D-S dataset [64] and the OmniScenes [65] dataset for the depth estimation and map-free localization experiments, and the Gibson [66] dataset equipped with the Habitat simulator [67] for the robot navigation experiments. Both the Stanford 2D-3D-S and OmniScenes datasets contain a diverse set of 3D scenes, with 1413 panoramas captured from 272 rooms for the Stanford 2D-3D-S dataset and 7614 panoramas captured from 18 rooms for OmniScenes. The Gibson dataset contains 14 scenes for the validation split, which is used on top of the Habitat simulator [67] to evaluate various robot navigation tasks. Baselines As our task has not been studied in previous works, we adapt existing test-time adaptation and unsupervised domain adaptation methods to panoramic depth estimation and implement six baselines. The four test-time adaptation baselines only use the test data for adaptation. Tent [20] only updates the batch normalization layers during adaptation, where we implement a variant that minimizes the loss function from Equation 1. The flip consistency-based approach (FL), inspired by Li et al. [68], enforces the depth predictions of the original and flipped image to be similar. The mask consistency-based approach (MA), inspired by Mate [44], enforces depth consistency against a randomly masked panorama image. Pseudo Labeling (PS) [69] imposes losses against a pseudo ground-truth depth map obtained by averaging predictions made from multiple rotated panoramas. The two unsupervised domain adaptation methods additionally use the labeled source domain dataset for adaptation, where we use AdaIN [51] to perform style transfer between the source and target domain images. Vanilla T2Net minimizes the discrepancy between the depth predictions of the source domain image transferred to the target domain and the ground truth. CrDoCo [16] additionally encourages the target domain predictions to follow the predictions of the target-to-source transferred images. We provide detailed expositions of each baseline in the supplementary material. Figure 6: Plot of the adaptation results (MAE); our method compared to the baselines (TENT, Flip, Mask, Pseudo Label, Vanilla T2Net, CrDoCo, No Adaptation) under various domain changes and image noises, along with the offline settings using 5% and 10% of the training data. Figure 7: Qualitative results of depth maps (top) and grid maps from the navigation task (bottom); panels: (a) depth maps generated amidst Gaussian noise, (b) grid maps generated from fixed-trajectory robot navigation. 5.1. Depth Estimation Online Adaptation As shown in Figure 5, we evaluate our method on 10 target domains: a generic dataset change, 3 global lighting changes (image gamma, white balance, average intensity), 3 image noises (Gaussian, speckle, salt & pepper), and 3 geometric changes (scene scale change to large/small scenes, camera rotation).
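For the image-level domains listed above, a hypothetical sketch of how such corruptions can be synthesized with the scikit-image library (which, as noted next, is what the evaluation relies on) is given here. All parameter values are arbitrary illustrations rather than the settings used in the paper, and the scene-scale and rotation domains are geometric changes rather than image-level corruptions.

```python
import numpy as np
from skimage import exposure, util

def apply_domain_shift(img, shift):
    """Apply one image-level domain shift to a float panorama in [0, 1]."""
    if shift == "gamma":
        return exposure.adjust_gamma(img, gamma=2.0)
    if shift == "intensity":                      # low lighting
        return np.clip(img * 0.4, 0.0, 1.0)
    if shift == "white_balance":                  # per-channel color gains
        gains = np.array([1.2, 1.0, 0.8])
        return np.clip(img * gains, 0.0, 1.0)
    if shift == "gaussian":
        return util.random_noise(img, mode="gaussian")
    if shift == "speckle":
        return util.random_noise(img, mode="speckle")
    if shift == "salt_pepper":
        return util.random_noise(img, mode="s&p")
    raise ValueError(f"unknown shift: {shift}")
```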
For the scene scale change we use large rooms manually selected from the evaluation datasets, and for the other domains we use the scikit-image library [70] to generate the image-level changes. We provide additional implementation details about the domain setups in the supplementary material. Figure 6 summarizes the mean absolute error (MAE) of the various adaptation methods aggregated over the Stanford 2D-3D-S [64] and OmniScenes [65] datasets. We report the full evaluation results in the supplementary material. Our method outperforms the baselines across all tested domain shifts, with more than a 10cm decrease in MAE in most shifts. The loss functions presented in Section 3.1 thus enable effective depth network calibration. For large scene adaptation, the tested baselines fail to make sufficient performance improvements, whereas our method can largely reduce the error via the stretch loss. In addition, note that our method can perform adaptation even under photometric domain shifts such as speckle noise or white balance change, despite the geometry-centric formulation. The multi-view consistency imposed by the normal and Chamfer losses helps the network make more robust depth predictions amidst these adversaries. A few exemplary depth visualizations are shown in Figure 7, where our online calibration results in depth maps with more accurate depth scales and better detail preservation. We report the full results for other depth estimation metrics in the supplementary material. Offline Adaptation We additionally experiment with offline adaptation scenarios, where the depth network is first trained on a small set of images and then tested on a held-out set. To cope with the data scarcity during training, we apply data augmentation for all the tested methods with $N_{\text{aug}} = 10$. For evaluation, we apply our calibration method separately for each room in the Stanford 2D-3D-S [64] and OmniScenes [65] datasets, where the panoramas captured for each room are split for training and testing. Figure 6 shows the adaptation results, where the results are reported after using 5% or 10% of the panoramas in each room for training. In both evaluations, our method incurs a large performance enhancement while outperforming all the tested baselines. Ablation Study To further validate the effectiveness of the various components in our calibration scheme, we perform an ablation study on the offline adaptation setup. We use the OmniScenes [65] dataset for evaluation and use 10% of the panoramas in each room for training and the rest for testing. As shown in Table 1, omitting any one of the loss functions leads to suboptimal performance. In addition, the data augmentation scheme incurs a large performance boost, which indicates that despite its simplicity, data augmentation plays a crucial role in data-scarce offline adaptation scenarios.
Table 1: Ablation study of the key components of our calibration scheme. 'Abs. Rel.' and 'Sq. Rel.' denote the absolute and squared relative error from Eigen et al. [25]. Columns: MAE / Abs. Rel. / RMSE / Sq. Rel.
No Adaptation: 0.4343 / 0.1949 / 0.6025 / 0.1428
Ours w/o Stretch Loss: 0.3650 / 0.1500 / 0.5450 / 0.1001
Ours w/o Chamfer Loss: 0.3260 / 0.1468 / 0.4819 / 0.1115
Ours w/o Normal Loss: 0.3373 / 0.1512 / 0.5022 / 0.0953
Ours w/o Augmentation: 0.3972 / 0.1790 / 0.5566 / 0.1247
Ours: 0.3192 / 0.1432 / 0.4683 / 0.0906
5.2. Robot Navigation We consider three tasks for evaluating robot navigation using panoramic depth estimation, following prior works [36, 71, 72]: point goal navigation, exploration, and simultaneous localization and mapping (SLAM) from a fixed robot trajectory. First, point goal navigation aims to navigate the robot agent towards a goal specified relative to the agent's starting location, e.g. 'move to the location 5m forward and 10m right from the origin'. Second, the objective of exploration is to explore the given 3D scene as much as possible within a fixed number of action steps. Finally, the SLAM task evaluates the accuracy of the occupancy grid map and pose estimates under a fixed robot trajectory. We use 4 random starting points in each of the 14 scenes of the Gibson [66] dataset, totaling 56 episodes per task, and set the maximum number of action steps to 500.
Table 2: Robot navigation evaluation against existing methods.
(a) Exploration and Point Goal Navigation; columns: Exploration Exp. Ratio / Coll. Rate, then Point Goal Success Rate / Coll. Rate.
No Adaptation: 0.8835 / 0.2793, 0.4107 / 0.3864
Flip Consistency: 0.9027 / 0.2305, 0.5000 / 0.3516
Mask Consistency: 0.8758 / 0.2677, 0.4643 / 0.3460
Pseudo Labeling: 0.8701 / 0.8701, 0.3393 / 0.4058
Ours: 0.9288 / 0.2352, 0.4643 / 0.3135
(b) Localization and Mapping under Fixed Trajectory; columns: Localization t-Error (m) / R-Error (°), then Mapping Cmf. Dist. / MAE.
No Adaptation: 0.1450 / 10.2731, 0.1959 / 0.4963
Flip Consistency: 0.1329 / 10.1872, 0.1687 / 0.4743
Mask Consistency: 0.1336 / 10.1356, 0.1585 / 0.4637
Pseudo Labeling: 0.1347 / 10.3247, 0.1735 / 0.4711
Ours: 0.1177 / 10.2044, 0.1558 / 0.4453
Table 2 compares the robot navigation tasks against three baselines (Flip Consistency, Mask Consistency, and Pseudo Labeling), where our method outperforms the baselines on most metrics. For exploration, our calibration scheme results in the largest exploration ratio while attaining a small collision rate, which is the total number of collisions divided by the total number of action steps. A similar trend is present for point goal navigation, where our agent attains the highest success rate with the smallest number of collisions. Note that the success rate is computed as the ratio of navigation episodes where the robot reached within 0.2m of the designated point goal. Finally, for fixed-trajectory SLAM, our method exhibits higher localization and mapping accuracy than its competitors. The translation error for localization drops largely after adaptation, while the rotation error is similar across all the baselines, which is due to the 360° field-of-view that makes rotation estimation fairly accurate even prior to adaptation. On the mapping side, our method attains the smallest 2D Chamfer distance and image error (MAE) measured between the estimated global map and the ground truth. In addition, as shown in Figure 7, the grid maps resulting from our method best align with the ground truth when compared against the maps from the baselines. Thus, the training objectives along with the light-weight augmentation enable quick and effective adaptation for various navigation tasks.
Table 3: Map-free visual localization compared against the baselines. The translation and rotation error thresholds for calculating accuracy are denoted as (d m, θ°). Columns: t-error (m) / R-error (°) / Accuracy (0.1m, 5°) / Accuracy (0.2m, 10°).
No Adaptation: 0.16 / 0.91 / 0.32 / 0.62
Flip Consistency: 0.14 / 0.79 / 0.36 / 0.67
Mask Consistency: 0.12 / 0.88 / 0.41 / 0.77
CrDoCo: 0.11 / 1.00 / 0.46 / 0.78
Ours: 0.09 / 0.87 / 0.52 / 0.86
5.3.
Map-Free Visual Localization Similar to the offline evaluation explained in Section 5.1, for each room in the OmniScenes [65] dataset we select 5% of the panorama images for test-time training and the rest for evaluating localization. Then, we treat each evaluation image as the reference image Iref from Section 4.2 and generating a 3D map via depth estimation. To finally evaluate localization we query 10 images that are captured within 2m of each reference image, where we use the dataset\u2019s 6DoF pose annotations to determine the criterion. Table 3 shows the localization performance compared against three baselines (Flip consistency, Mask consistency, and CrDoCo [16]). Following prior works in visual localization [73, 74, 75], we report the median translation and rotation errors along with accuracy where a prediction is considered correct if its translation and rotation error is below a designated threshold. Our method outperforms the baselines in both tested datasets, with almost a 20% increase in accuracy. The geometry correction of our method as shown in Figure 7 leads to more accurate PnP-RANSAC solutions, which in turn results in enhanced localization performance. 6." + }, + { + "url": "http://arxiv.org/abs/2308.13989v1", + "title": "LDL: Line Distance Functions for Panoramic Localization", + "abstract": "We introduce LDL, a fast and robust algorithm that localizes a panorama to a\n3D map using line segments. LDL focuses on the sparse structural information of\nlines in the scene, which is robust to illumination changes and can potentially\nenable efficient computation. While previous line-based localization approaches\ntend to sacrifice accuracy or computation time, our method effectively observes\nthe holistic distribution of lines within panoramic images and 3D maps.\nSpecifically, LDL matches the distribution of lines with 2D and 3D line\ndistance functions, which are further decomposed along principal directions of\nlines to increase the expressiveness. The distance functions provide coarse\npose estimates by comparing the distributional information, where the poses are\nfurther optimized using conventional local feature matching. As our pipeline\nsolely leverages line geometry and local features, it does not require costly\nadditional training of line-specific features or correspondence matching.\nNevertheless, our method demonstrates robust performance on challenging\nscenarios including object layout changes, illumination shifts, and large-scale\nscenes, while exhibiting fast pose search terminating within a matter of\nmilliseconds. We thus expect our method to serve as a practical solution for\nline-based localization, and complement the well-established point-based\nparadigm. The code for LDL is available through the following link:\nhttps://github.com/82magnolia/panoramic-localization.", + "authors": "Junho Kim, Changwoon Choi, Hojun Jang, Young Min Kim", + "published": "2023-08-27", + "updated": "2023-08-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Estimating the location of a mobile device or agent with respect to a 3D map, widely referred to as visual localization, has vast applications in robotics and AR/VR. Compared to perspective images, which are more widely used for localization, panorama images provide a 360\u25e6field of view that contains ample visual evidence from the holistic scene context. In this light, there have been recent advances in visual localization using panoramic images [7, 8, 26, 27] Figure 1. 
Overview of our approach. LDL assumes a 3D map equipped with lines and local features, and similarly preprocesses the 2D panorama prior to localization. LDL then selects candidate poses by matching 2D, 3D line distance functions through decomposition along principal directions that effectively represent the sparse geometry of lines. Finally, the selected poses are refined via local feature matching [44] and PnP-RANSAC [15,29]. arXiv:2308.13989v1 [cs.CV] 27 Aug 2023 \fthat demonstrate reasonably stable localization, with stateof-the-art methods leveraging a two-step process of candidate pose selection and refinement [27, 43]. Nevertheless, many existing methods for this task have limitations in computational efficiency and robustness, mainly stemming from the costly or unstable pose selection process. As global feature descriptors [3, 23] or a large number of colored points [26, 27] are the main components for this step, the pipelines can be memory and compute intensive or fragile to large illumination changes [26,27]. To overcome such limitations, we explore the alternative direction of using lines as the major cue for panoramic localization. Lines have a number of desirable properties compared to commonly used raw color, semantic labels or learned global features [8, 26, 43]. First, due to the longstanding work in line segment extraction [18, 19, 55, 59], it is cheap and stable to extract line segments even amidst dramatic changes in illumination or moderate motion blur. Second, lines are sparse representations of a scene and can potentially lead to small memory consumption and computation. Nevertheless, line segments alone are visually ambiguous compared to other localization cues (color, global features, etc.), which makes them harder to tailor for successful localization. While there exist prior works in linebased visual localization [16, 33, 57], many focus on using lines for pose refinement after finding coarse poses from conventional global feature comparisons [16, 57] or exhibit unstable performance compared to conventional pointbased methods [33]. Further, prior works often involve expensive line-specific feature extraction to distinguish contexts and establish one-to-one line correspondences [57]. LDL is a fast and robust localization method that leverages the holistic context from lines in panoramas and 3D maps to effectively find the camera pose. In contrast to previous works [16, 57], we retain our focus on using line segments for pose search based on the observation that conventional point-based matching [12, 44] performs stably once given a good initial pose. As shown in Figure 1, given a panoramic image of an unknown location, we utilize the distribution of extracted line segments and compare it against those in the pre-captured 3D map. First, the candidate pose selection step rapidly evaluates an immense set of poses within a matter of milliseconds and selects the coarse poses to further optimize. Here LDL compares the distribution of lines in 2D and 3D evaluated on their spherical projections using distance functions, as shown in Figure 1. The distance function imbues relative spatial context even in featureless regions and quickly matches poses without establishing explicit correspondences between detected lines. We further enhance the discriminative power of distance functions by decomposition, and separately evaluate lines aligned with each principal directions. 
Once a small set of initial poses are found, LDL refines them with PnPRANSAC [15, 29], where we leverage powerful local features from recent works [12, 44] to establish good 2D-3D correspondences. We evaluate LDL in various indoor scenes where it performs competitively against all tested baselines while demonstrating robust performance in scenes with object changes or large illumination shifts. Further, LDL exhibits an order-of-magnitude faster runtime compared to global feature comparison [3, 17, 23] due to the efficient formulation. By only using the geometric information of lines and pre-trained visual features, we expect LDL to serve as a practical localization algorithm that could enhance and complement existing visual localization techniques. 2. Related Work Line-Based Localization Inspired by abundant straightlines and rectangular structures in man-made objects, many works attempt visual localization with line segments [2, 16, 33, 52, 57, 58]. Micusik et al. [33] utilize the line segments extracted from the 3D model to directly match line segments in images by comparing the Chamfer distance in 2D and 3D. However, lines, even when perfectly matched, are inherently subject to ambiguity along the line direction. Yoon et al. [57] suggest removing such ambiguities by treating points on a line segment as verbal tokens in natural language processing, where line features are learned using Transformers [53]. Such learning-based approaches are trained with a database of pose-annotated images or require additional computation [16, 57, 58]. Further, these approaches only use lines for pose refinement, assuming a coarse pose estimate to be given via global feature comparisons [3, 17]. LDL takes a different approach and focuses on robust pose selection based on lines. We compare LDL against existing approaches for line-based localization, where LDL performs competitively against these methods while balancing robustness and efficiency. Point-Based Localization Most visual localization algorithms follow a point-based paradigm, focusing on sparse feature point correspondences [10,24,30,36,41\u201343,45\u201348], dense matching via coordinate regression of scene points [5, 30], or minimizing color discrepancies of dense 3D points via gradient descent [26,27]. Conventional approaches using a perspective camera input take a two-step approach, where coarse poses are first estimated using global feature descriptors [3,17] and refined with PnP-RANAC from local feature matches [12, 31, 44] or dense matches from scene coordinate regression [30, 48]. Recent panoramic localization methods [7, 8, 26, 27] also follow a similar two-step approach, where exemplary methods find candidate poses via color distribution matching and refine them using gradient descent optimization [26,27]. While these algorithms can robustly handle a modest range of scene changes due to \fFigure 2. Motivation for (a) utilizing and (b) decomposing line distance functions. (a) Line distance functions disambiguate regions with dense lines. Given two candidate poses close (A) and far (B) from ground truth, Chamfer distance falsely favors B near dense lines, whereas distance functions correctly rank the poses. (b) Decomposition further reduces ambiguities from rotation by separately considering line segments with varying directions. Given an original view close to the ground truth (green) and a rotated view (red), the decomposition better distinguishes the two views by correctly selecting the original view over the rotated view. 
the holistic view from panoramas, the algorithms can still fail with significant changes in illumination. We compare LDL against exemplary point-based methods and demonstrate that line segments could be effectively utilized for accurate and robust localization even without the costly calculation of global features or color matching. 3. Method LDL aims at finding the pose at which the query image I is taken with respect to a 3D scene, where Figure 1 depicts the localization steps taken by LDL. We first represent the 3D scene using a line map equipped with local feature descriptors for keypoint locations, and similarly acquire line segments and local descriptors for the query image prior to localization (Section 3.1). We then estimate the three principal directions for 2D and 3D by voting, from which we can deduce a set of rotations considering the sign and permutation ambiguity (Section 3.2). Given the fixed set of candidate rotations, we construct an initial set of possible poses incorporating translations. We generate the decomposed line distance functions at each pose and choose the promising poses by comparing the distance functions with a robust loss function (Section 3.3). As the final step, the selected poses are refined by performing PnP-RANSAC [15] using feature matches [44] with the query image (Section 3.4). 3.1. Localization Input Preparation Map Building LDL operates using a 3D map consisting of line segments and local features. We build such a map starting from a colored point cloud P = {X,C}. To obtain the 3D line segments we use the line extraction method from Xiaohu et al. [54], which can quickly process point clouds containing millions of points within a few seconds. We further remove short, noisy line segments from the raw detection with a simple filtering step: given the point cloud bounding box of size bx \u00d7 by \u00d7 bz, we filter out 3D line segments shorter than \u03bb(bx + by + bz)/3 with \u03bb = 0.1 in all our experiments. The 2D line segments are then filtered with an adaptive length threshold to match the filtering rate of 3D line segments. Specifically, we choose the threshold value such that the ratio of lines filtered in 2D equals that in 3D. To obtain local features embedded in the 3D map, we first render synthetic views at various locations using the point cloud color values. Specifically, we project the input point cloud P={X, C} at a virtual camera and assign the measured color Y (u, v)=Cn at the projected location of the corresponding 3D coordinate (u, v)=\u03a0(RXn + t) to create the synthetic view Y . We then extract local features for each synthetic view Y using SuperPoint [12], and backproject the local features to their 3D locations, which in turn results in keypoint descriptors embedded in 3D space. Note that while we illustrate map building using a colored point cloud, our setup can also work with line-based SfM maps [32,39,40] since the input to our pipeline is lines and associated local features. Panorama Pre-processing Similar to map building, we extract line segments and local features from the query panorama image. We use LSD [18] to acquire line segments, which is a robust line detection algorithm that can stably extract lines even under motion blur or lighting changes. To remove noisy line detections as in the 3D case, we filter 2D line segments with an adaptive length threshold to match the filtering rate of 3D line segments. 
Specifically, for each scene we choose the threshold value such that the ratio of lines filtered in 2D equals that in 3D. Then, we extract local feature descriptors using SuperPoint [12], where the results will later be used for pose refinement in Section 3.4. \f3.2. Candidate Rotation Estimation Given the detected line segments, LDL first estimates a set of feasible rotations by extracting principal directions, which we define as the most common line directions in 2D and 3D. Let L2D = {l} denote the line segments in 2D, where l = (s, e) is a tuple of start point s \u2208S2 and end point e \u2208S2. Note that we operate on the spherical projection space and treat lines and points on panoramas as arcs and points on the unit sphere S2 respectively. Similarly, let L3D = {\u02dc l} denote the line segments in 3D, with \u02dc l = (\u02dc s, \u02dc e) being a tuple containing start and end points \u02dc s, \u02dc e \u2208R3. LDL estimates the vanishing point and votes for the principal directions in 2D and 3D. In 2D we first extract vanishing points by finding the points of intersection of extended 2D line segments. The 2D principal directions P2D={p} are defined as the top k2D vanishing points containing the most incident lines, where p \u2208R3 is a unit norm vector denoting the vanishing point location in the sphere. Similarly, the 3D principal directions P3D={\u02dc p} are defined as the top k3D most common line directions from 3D line segments obtained via voting. Note that the 3D direction \u02dc p \u2208R3 is also normalized. LDL estimates the feasible candidate rotations up to uncertainty in the combinatorial ambiguities when matching the principal directions in 2D and 3D. Specifically, we select triplets of directions from P2D and P3D, yielding a total of \u0000k2D 3 \u0001 \u00d7 \u0000k3D 3 \u0001 \u00d73!\u00d723 possible combinations, additionally considering the sign and permutation ambiguity. For each pair of triplets, we apply the Kabsch algorithm [25] to find the optimal rotation that aligns the 2D directions to 3D directions. Discarding infeasible rotations that have large mean squared error, we obtain Nr rotations. The possible rotations are further filtered using line distance function presented in the next section. 3.3. Line Distance Functions for Pose Selection We propose line distance functions to efficiently evaluate a large pool of poses and select promising candidate poses. The initial pool of poses is the combination of possible translations with the rotations found in the previous section. To this end, Nt translations are chosen within grid partitions of the 3D point cloud, where details are explained in the supplementary material. The resulting Nt \u00d7Nr poses are ranked using line distance functions. Distance Function Definition Distance functions are designed to compare the holistic spatial context captured from the large field of view in panorama images. They are defined for every point including void regions without any lines and can quickly rank poses. Compared to Chamfer distance or learned line embeddings used in prior work [33, 57], LDL does not attempt pairwise matching between lines, which is often costly and can incur failure modes. For example, it is ambiguous to correctly match between densely packed lines as shown in Figure 2a. A line distance function is a dense field of distance values to detect lines in the 2D query image or the spherical projection at an arbitrary pose in 3D. 
For a point $x$ on the unit sphere $S^2$, the 2D line distance function is given as $f_{2D}(x; L_{2D}) = \min_{l \in L_{2D}} D(x, l)$. (1) Here $D(x, l)$ is the spherical distance from $x$ to the line segment $l = (s, e)$, namely
$$D(x, l) = \begin{cases} \sin^{-1} \left| \left\langle x, \frac{s \times e}{\| s \times e \|} \right\rangle \right| & \text{if } x \in Q(s, e) \\ \min\left( \cos^{-1} \langle x, e \rangle, \cos^{-1} \langle x, s \rangle \right) & \text{otherwise,} \end{cases} \quad (2)$$
where $Q(s, e)$ is the spherical quadrilateral formed from $\{s, e, \pm (s \times e) / \| s \times e \|\}$. Similarly, the 3D line distance function is defined for each candidate rotation $R \in SO(3)$ and translation $t \in \mathbb{R}^3$. Using the spherical projection function $\Pi(\cdot): \mathbb{R}^3 \rightarrow S^2$ that maps a point in 3D to a point on the unit sphere, the 3D line segment $\tilde{l} = (\tilde{s}, \tilde{e})$ is projected to 2D under the candidate transformation as $l = (\Pi(R\tilde{s} + t), \Pi(R\tilde{e} + t))$. For simplicity, let $\Pi_L(\tilde{l}; R, t)$ denote the projection of a line segment in 3D to the spherical surface. Then the 3D line distance function is defined as $f_{3D}(x; L_{3D}, R, t) = \min_{\tilde{l} \in L_{3D}} D(x, \Pi_L(\tilde{l}; R, t))$. (3) As shown in Figure 3, one can expect poses closer to the ground truth to have similar 2D and 3D line distance functions. Therefore, we evaluate the $N_t \times N_r$ poses according to the similarity of the line distance functions. We apply a robust loss function that measures inlier counts to quantify the affinity of the line distance functions. For each candidate pose $\{R, t\}$ we count the number of points whose distance functions differ by less than a threshold $\tau$, $L(R, t) = -\sum_{q \in Q} \mathbb{1}\{ | f_{2D}(q; L_{2D}) - f_{3D}(q; L_{3D}, R, t) | < \tau \}$, (4) where $\mathbb{1}\{\cdot\}$ is the indicator function and $Q \subset S^2$ is a set of query points uniformly sampled from a sphere. The loss function only considers inlier counts, and thus is robust to outliers from scene changes or line misdetections. We validate the efficacy of the robust loss function in Section 4.2. Distance Function Decomposition To further enhance pose search using line distance functions, we propose to decompose the distance functions along three principal directions. While line distance functions provide useful evidence for line-based localization, they lack a sense of direction, as in Figure 2b, where the distance functions alone cannot effectively distinguish rotated views at a fixed translation. Figure 3. Line distance function visualization and decomposition at the ground truth pose $R^*, t^*$; LDL decomposes distance functions using principal directions and enhances their expressiveness. We split the line segments along the principal directions used for rotation estimation and define separate line distance functions for each group of lines, as shown in Figure 3. Recall from Section 3.2 that each candidate rotation $R$ is obtained from a pair of triplets of 2D and 3D principal directions, denoted as $\hat{P}^R_{2D} = \{p_1, p_2, p_3\}$ and $\hat{P}^R_{3D} = \{\tilde{p}_1, \tilde{p}_2, \tilde{p}_3\}$. We associate line segments that are parallel to the directions in $\hat{P}^R_{2D}, \hat{P}^R_{3D}$, leading to three groups of line segments $L^R_{2D} = \{L^1_{2D}, L^2_{2D}, L^3_{2D}\}$ and $L^R_{3D} = \{L^1_{3D}, L^2_{3D}, L^3_{3D}\}$ in 2D and 3D, respectively. We separately define line distance functions for the three groups using Equation 2, namely $f_{2D}(x; L^i_{2D})$ and $f_{3D}(x; L^i_{3D}, R, t)$ for $i = 1, 2, 3$.
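The following is a minimal NumPy sketch of the line distance function comparison described above. The function names are illustrative, the inside-Q(s, e) test assumes arcs shorter than a half circle, and query points, 2D lines, and projected 3D lines are all represented as unit vectors on the sphere.

```python
import numpy as np

def spherical_line_distance(x, s, e):
    """Spherical distance (Eq. 2) from unit vectors x (N, 3) to the arc with
    unit endpoints s, e. Points inside Q(s, e) use the great-circle distance;
    others use the distance to the closer endpoint."""
    n = np.cross(s, e)
    n = n / np.linalg.norm(n)
    to_arc = np.abs(np.arcsin(np.clip(x @ n, -1.0, 1.0)))
    to_ends = np.minimum(np.arccos(np.clip(x @ s, -1.0, 1.0)),
                         np.arccos(np.clip(x @ e, -1.0, 1.0)))
    # x lies in Q(s, e) if its projection onto the great circle falls between s and e
    inside = (np.cross(s, x) @ n > 0) & (np.cross(x, e) @ n > 0)
    return np.where(inside, to_arc, to_ends)

def line_distance_function(query_pts, lines):
    """f(x; L) = min over segments of the spherical distance (Eqs. 1 and 3);
    `lines` is an iterable of (s, e) unit-vector pairs."""
    dists = np.stack([spherical_line_distance(query_pts, s, e) for s, e in lines])
    return dists.min(axis=0)

def robust_pose_score(f2d, f3d, tau=0.1):
    """Negative inlier count of Eq. (4); lower is better."""
    return -int(np.sum(np.abs(f2d - f3d) < tau))
```

Evaluating robust_pose_score per principal-direction group and summing the results yields the decomposed score used to rank the candidate poses, as formalized next.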
Then the robust loss function in Equation 4 can be modified to accommodate the decomposed distance functions, $L(R, t) = -\sum_{i=1}^{3} \sum_{q \in Q} \mathbb{1}\{ | f_{2D}(q; L^i_{2D}) - f_{3D}(q; L^i_{3D}, R, t) | < \tau \}$. (5) We validate the importance of distance function decomposition in Section 4.2. 3.4. Candidate Pose Refinement After we select the top $K$ poses from the pool of $N_t \times N_r$ poses with the loss function values from Equation 5, we refine them using local feature matching as shown in Figure 1. Here we utilize the cached local features from Section 3.1. Specifically, for each selected pose we first retrieve the set of visible 3D keypoints at that pose and perform local feature matching against the 2D keypoints in the query image. In this process we use SuperGlue [44] for feature matching and select the candidate pose with the most matches. Finally, we apply PnP-RANSAC [15, 21, 29] on the matched 2D and 3D keypoint coordinates to obtain a refined pose estimate. Backed by local feature matching that operates stably given decent coarse estimates from the line distance functions, LDL can robustly function as an effective localization method, which we further verify in Section 4. 4. Experiments We evaluate LDL in various localization scenarios and analyze its performance. Our method is mainly implemented using PyTorch [35], and is accelerated with a single RTX 2080 GPU. In all our experiments we set the number of principal directions as $k_{2D} = 20$, $k_{3D} = 3$, the inlier threshold $\tau = 0.1$, and the number of query points as $|Q| = 42$. We report the full hyperparameter setup in the supplementary material. Following prior works [7, 8, 26], we report the median translation and rotation errors along with the localization accuracy, where a prediction is considered correct if the translation error is below 0.1m and the rotation error is below 5°. Datasets We evaluate LDL on two indoor localization datasets: Stanford 2D-3D-S [4] and OmniScenes [26]. Stanford 2D-3D-S [4] contains 1413 panorama images from 272 rooms subdivided into six areas. Each area has diverse indoor scenes such as offices, hallways, and auditoriums where repetitive structure and featureless regions are present. OmniScenes contains 4121 panorama images from seven 3D scans, where the panorama images are captured with cameras either handheld or robot mounted, and at different times of day including large changes in furniture configurations. The dataset has three splits (Robot, Handheld, Extreme) that are recorded in scenes with/without changes, where images in the Extreme split are captured under large camera motion. Baselines We compare LDL against three point-based baselines (PICCOLO, CPO, structure-based) and two line-based baselines (Chamfer distance-based, Line Transformer [57]). PICCOLO (PC) [26] and its follow-up work CPO [27] are optimization-based algorithms that find the pose by minimizing the color discrepancy between the point cloud and the query image. The structure-based approach [43, 51] (SB) is one of the most prominent methods for visual localization using perspective cameras. We implement a variant for panorama images, where candidate poses are retrieved from an image database using a global feature extractor [17] and further refined using SuperGlue [44] matches.
For fair comparison, we undistort the panorama image into cubemaps and perform feature matching, where the results are then fed to PnP-RANSAC for refinement. In addition, we construct the database of pose-annotated images by rendering synthetic views at various locations in the colored point cloud. The Chamfer distance-based approach (CD), inspired by Micusik et al. [33], ranks candidate poses by comparing the spherical Chamfer distance of line segments in the synthetic views against the query image. Line Transformer by Yoon et al. [57] (LT) ranks candidate poses using Transformer-based [53] matching learned for each line segment. As this baseline also requires a pose-annotated database, we construct a synthetic database similar to the structure-based approach, and apply the undistortion process for fair comparison. We provide additional details about the baselines in the supplementary material.
Table 1. Localization performance evaluation in Stanford 2D-3D-S [4], compared against PICCOLO (PC) [26], the structure-based approach (SB), the Chamfer distance-based approach (CD), Line Transformer (LT) [57], and CPO [27]. Each metric lists PC / SB / CD / LT / CPO / LDL.
Area 1: t-error (m): 0.02 / 0.02 / 0.12 / 0.02 / 0.01 / 0.02; R-error (°): 0.46 / 0.62 / 1.14 / 0.62 / 0.25 / 0.54; Accuracy: 0.66 / 0.89 / 0.50 / 0.90 / 0.90 / 0.86
Area 2: t-error (m): 0.76 / 0.04 / 1.16 / 0.04 / 0.01 / 0.02; R-error (°): 2.25 / 0.72 / 11.54 / 0.72 / 0.27 / 0.66; Accuracy: 0.45 / 0.76 / 0.35 / 0.74 / 0.81 / 0.77
Area 3: t-error (m): 0.02 / 0.03 / 0.79 / 0.02 / 0.01 / 0.02; R-error (°): 0.49 / 0.57 / 4.54 / 0.55 / 0.24 / 0.54; Accuracy: 0.57 / 0.92 / 0.36 / 0.88 / 0.78 / 0.89
Area 4: t-error (m): 0.18 / 0.02 / 0.33 / 0.02 / 0.01 / 0.02; R-error (°): 4.17 / 0.57 / 1.97 / 0.56 / 0.28 / 0.48; Accuracy: 0.49 / 0.91 / 0.46 / 0.91 / 0.83 / 0.88
Area 5: t-error (m): 0.50 / 0.03 / 0.95 / 0.03 / 0.01 / 0.02; R-error (°): 14.64 / 0.69 / 41.84 / 0.65 / 0.27 / 0.54; Accuracy: 0.44 / 0.80 / 0.36 / 0.79 / 0.74 / 0.81
Area 6: t-error (m): 0.01 / 0.02 / 0.50 / 0.02 / 0.01 / 0.02; R-error (°): 0.31 / 0.63 / 1.20 / 0.60 / 0.18 / 0.50; Accuracy: 0.69 / 0.88 / 0.47 / 0.87 / 0.90 / 0.83
Total: t-error (m): 0.03 / 0.03 / 0.73 / 0.02 / 0.01 / 0.02; R-error (°): 0.63 / 0.63 / 2.30 / 0.63 / 0.24 / 0.53; Accuracy: 0.54 / 0.85 / 0.39 / 0.84 / 0.83 / 0.83
Table 2. Localization accuracy on synthetic color variations applied to Room 3 in the Extreme split from OmniScenes [26]. Columns: PC / SB / CD / LT / CPO / LDL.
Original: 0.45 / 0.69 / 0.21 / 0.68 / 0.72 / 0.89
Gamma: 0.00 / 0.63 / 0.47 / 0.59 / 0.00 / 0.82
Intensity: 0.00 / 0.56 / 0.40 / 0.58 / 0.80 / 0.76
White Balance: 0.00 / 0.62 / 0.32 / 0.67 / 0.74 / 0.91
4.1. Localization Evaluation Stanford 2D-3D-S We first assess the localization performance of LDL against the baselines in the Stanford 2D-3D-S dataset, as shown in Table 1. LDL performs competitively against the strong baselines (structure-based and Line Transformer) that apply powerful neural networks for candidate pose search. While the dataset contains hallways and auditoriums with large featureless regions or repetitive structure, LDL leverages the holistic distribution of lines using distance functions and shows stable performance without resorting to costly neural network computations. Further, LDL shows superior performance when compared against the Chamfer distance-based method, which indicates that solely focusing on line matches for ranking candidate poses can lead to suboptimal performance. Figure 4. Color variations for evaluating illumination robustness. OmniScenes We additionally compare LDL against the baselines in the OmniScenes dataset, as shown in Table 3. Unlike the Stanford 2D-3D-S dataset, all images exhibit blur from camera motion and approximately half of the images contain changes in object layout.
In splits not containing changes, LDL performs competitively against the baselines, which supports our claim that line distance functions enable effective pose search without using neural networks. Further, LDL attains the highest accuracy in splits containing scene changes and notably in the extreme split that contains the largest amount of motion blur. This is due to the stable line extraction [18, 19, 55, 59] that enables resilience against motion blur, and the robust distance function comparison (Equation 4) that rejects outliers for handling scene changes. We further verify the importance of each components in LDL in Section 4.2. Illumination Robustness Evaluation To validate the illumination robustness of LDL, we measure localization performance after applying synthetic color variations. We select Room 3 from the Extreme split in OmniScenes for evaluation. As shown in Figure 4, the image gamma, white balance, and average intensity are modified to an arbitrary value, where further details are deferred to the supplementary material. We report the results of LDL along with the baselines in Table 2. CPO, PICCOLO, and the structurebased baseline all suffer from performance degradation, as the color values are directly utilized for finding initial poses. Notably, Yoon et al. [57] also shows performance drop, as Transformer line features are affected by the illumination changes of the image. As LDL relies on the spatial structure of line segments for candidate pose search, it is robust to illumination variations, leading to stable performance across all color variations. Further, note that while all the methods excluding PICCOLO [26] and CPO [27] use local feature \ft-error (m) R-error (\u25e6) Accuracy Split Change PC SB CD LT CPO LDL PC SB CD LT CPO LDL PC SB CD LT CPO LDL Robot \u2717 0.02 0.03 1.74 0.03 0.01 0.02 0.27 0.58 89.23 0.59 0.12 0.49 0.69 0.99 0.31 0.99 0.89 0.98 Hand \u2717 0.01 0.03 2.10 0.03 0.01 0.03 0.23 0.63 89.02 0.64 0.13 0.54 0.81 0.95 0.29 0.95 0.80 0.97 Robot \u2713 1.07 0.04 1.78 0.04 0.02 0.03 21.03 0.64 89.27 0.65 1.46 0.58 0.41 0.93 0.30 0.94 0.59 0.95 Hand \u2713 0.53 0.04 1.70 0.04 0.02 0.03 7.54 0.71 88.50 0.70 0.37 0.64 0.47 0.92 0.30 0.90 0.60 0.92 Extreme \u2713 1.24 0.04 1.55 0.04 0.03 0.03 23.71 0.83 88.54 0.84 0.37 0.72 0.41 0.89 0.29 0.88 0.59 0.92 Table 3. Localization performance evaluation in OmniScenes [26], considering both scenes with and without object layout changes. Figure 5. Pose error recall and runtime comparison between candidate pose search using LDL and NetVLAD [3]. Method t-error R-error Acc. (m) (\u25e6) SB (K=10) 0.06 1.18 0.63 SB (K=20) 0.05 1.07 0.71 LDL (K=10) 0.07 1.36 0.63 LDL (K=20) 0.07 1.27 0.69 (a) Multi-Room Localization Component CPU GPU Line Segment Extraction 0.141 0.141 Rotation Estimation 1.124 0.009 Distance Function Computation 0.052 0.001 Candidate Pose Refinement 5.573 0.587 Total Runtime (sec) 6.890 0.738 (b) Runtime on CPU and GPU Table 4. Multi-room localization compared against StructureBased method (SB) with various number of candidate poses (K) and runtime analysis of LDL. matching for pose refinement, there is a large performance gap between LDL and the other methods. This validates our focus on designing a stable candidate pose selection method, as modern feature descriptors and matching algorithms [12, 13, 43, 44] are fairly robust against adversaries such as illumination changes. 4.2. 
Performance Analysis Candidate Pose Search Evaluation To evaluate the efficacy of line distance functions for candidate pose search, we compare the retrieval accuracy of LDL against NetVLAD [3], which is a widely used global feature extractor [23, 43, 57]. Note that NetVLAD is used as the candidate pose selection module in the structure-based baseline. We use the Extreme split from OmniScenes for evaluation, where the translation and rotation error recall curve along with the runtime for processing a single candidate pose is reported in Figure 5. For fair comparison we use the identical pool of translations for both methods as Nt = 50 and assign a large number of candidate rotations for NetVLAD with Nr = 216. Additional setup details are reported in the supplementary material. While neural network-based pose search methods can perform city-scale search [3, 17, 20], the line distance functions in LDL exhibit competitive performance to NetVLAD in indoor environments. The distance functions provide highly discriminative spatial context, which enables effective pose search. Furthermore, the runtime for pose search in LDL is much shorter than NetVLAD, due to the highly efficient computation of distance functions only conducted on sparse sphere points. This is in contrast to NetVLAD where visual features are computed with a neural network for each view. The line distance functions enable quick and effective pose initialization, which in turn allow LDL to be usable in various practical localization scenarios. Runtime Analysis We analyze the runtime of LDL in Table 4b where we decompose the runtime for localizing a single query image from OmniScenes [26]. We assume that 3D scanning along with map building is done offline and only consider the computation time for online operations, namely 2D line segment extraction, candidate pose selection and refinement. Overall, the pose selection process including rotation estimation and distance function computation exhibits a small runtime for both CPU and GPU, which validates the efficiency of our proposed line-based pose search. Nevertheless, the pose refinement exhibits a relatively larger runtime, which is mainly due to the large number of features in panoramas compared to normal images with a smaller field of view. While we attained our focus in pose search and used the off-the-shelf local feature matching algorithms for pose refinement [12, 44], devising highly efficient feature matching algorithms tailored specifically for panoramas is left as future work. Scalability Analysis We assess the scalability of LDL to large-scale indoor scenes using the OmniScenes [26] dataset. While the previous set of experiments assume room-scale localization scenarios, here we test LDL us\fing the entire OmniScenes dataset as the 3D map. Table 4a shows the localization results, where LDL is compared against the structure-based method at various number of candidate poses (K). LDL exhibits performance on a par with the structure-based method, which indicates that line distance functions can scalably handle large scenes consisting of multiple rooms. Nevertheless, scaling LDL to even larger scale scenes (e.g. building-scale scenes as in InLoc [51]) is left as future work. Privacy Preservation Analysis While the main goal of LDL is to offer fast and robust localization based on lines, we find that with a small modification our method can offer light-weight privacy protection in client-server localization scenarios [6,11,14,49,50]. 
Following prior works [34,50], we consider the case where a client using an edge device wants to localize oneself against a 3D line map stored in the cloud. Privacy breaches occur if the service provider maliciously tries to view the visual data captured by the client. This is possible even when only the local feature descriptors are shared between the client and server, by using feature inversion methods [37] that reconstruct the original image from a sparse set of local features as shown in Figure 6. By changing LDL to only exploit local features near lines during refinement, we can prevent privacy breaches including feature inversion attacks without largely sacrificing localization performance. First, as LDL uses line segments for candidate pose selection the clients only need to share the extracted line segments with the service providers for initial pose search, instead of the entire view that would be needed for global feature-based methods. Second, as local features near line segments are shared with the service provider for pose refinement, feature inversion methods cannot faithfully recover the original visual content. We validate this claim with a small set of experiments performed in the Stanford 2D-3D-S dataset [4], where we filter descriptors whose spherical distances to the nearest line segment are over 0.05 rad. As shown in Figure 6, this linebased filtering degrades the quality of feature inversion attacks by hiding objects that potentially contain sensitive information while only incurring small drops in localization accuracy. We report additional details and results regarding the potential of LDL for privacy preservation in the supplementary material. 4.3. Ablation Study We ablate the distance function decomposition, number of query points, and robust loss function, which are key components of LDL in the OmniScenes Extreme split. In Table 5a, LDL is first compared against the baseline that does not apply decomposition and use the loss function in Equation 4. Decomposition leads to a large performance gain, as the distance functions are further disambiguated Method t-error R-error Acc. (m) (\u25e6) w/o Decomposition 1.00 3.97 0.37 w/ |Q| = 10 0.04 0.85 0.77 w/ |Q| = 21 0.04 0.71 0.88 w/ |Q| = 84 0.03 0.66 0.95 Ours (|Q| = 42) 0.03 0.72 0.92 (a) Decomposition & Query Points Method t-error R-error Acc. (m) (\u25e6) w/ L1 Loss 0.08 1.38 0.55 w/ L2 Loss 0.17 1.48 0.34 w/ Huber Loss 0.11 1.39 0.50 w/ Median Loss 0.08 1.22 0.55 Ours 0.07 1.22 0.68 (b) Choice of Loss Function Table 5. Ablation study of various components of LDL. Figure 6. Visualization of feature inversion attacks on panoramic inputs along with the localization accuracy before and after linebased feature filtering and split into each principal direction. We further test the effect of the number of query points |Q| on evaluating the robust loss function. While increasing the number of query points enhances performance, the improvement is not as significant and incurs additional computation. Conversely, using a smaller number of query points lead to ambiguities in distance function matching, exhibiting poor performance. The number of query points |Q| = 42 balances both the computational efficiency and localization accuracy of LDL. We finally validate the robust loss function in Equation 5 by comparing LDL against variants using other loss functions: L1, L2, Huber, and Median loss. Here we report results from the Wedding Hall scene, as this scene contains drastic scene changes with large amounts of outliers. 
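As an illustration of the line-based filtering used for privacy preservation above, the following hedged sketch drops descriptors whose spherical distance to the nearest 2D line segment exceeds 0.05 rad; the point-to-segment distance is approximated by sampling points along each segment, and all names are placeholders rather than the authors' implementation.

```python
import numpy as np

def filter_descriptors_near_lines(kp_dirs, descs, segments,
                                  max_dist_rad=0.05, samples_per_segment=32):
    """Keep only keypoints within max_dist_rad of some 2D line segment.

    kp_dirs:  (N, 3) unit bearing vectors of detected keypoints
    segments: list of (p0, p1) unit-vector endpoints of 2D line segments
    """
    # Densely sample each segment on the sphere (simple normalized-chord approximation).
    pts = []
    for p0, p1 in segments:
        ts = np.linspace(0.0, 1.0, samples_per_segment)[:, None]
        chord = (1 - ts) * p0 + ts * p1
        pts.append(chord / np.linalg.norm(chord, axis=1, keepdims=True))
    pts = np.concatenate(pts, axis=0)                        # (M, 3)

    # Angular distance of every keypoint to its nearest sampled line point.
    cos = np.clip(kp_dirs @ pts.T, -1.0, 1.0)                # (N, M)
    min_dist = np.arccos(cos.max(axis=1))
    keep = min_dist <= max_dist_rad
    return kp_dirs[keep], descs[keep]
```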
As shown in Table 5b, inlier counting proposed in Equation 5 attenuates outliers and exhibits optimal performance, demonstrating the effectiveness of the robust loss function. 5." + }, + { + "url": "http://arxiv.org/abs/2303.01052v1", + "title": "Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression", + "abstract": "The origin of adversarial examples is still inexplicable in research fields,\nand it arouses arguments from various viewpoints, albeit comprehensive\ninvestigations. In this paper, we propose a way of delving into the unexpected\nvulnerability in adversarially trained networks from a causal perspective,\nnamely adversarial instrumental variable (IV) regression. By deploying it, we\nestimate the causal relation of adversarial prediction under an unbiased\nenvironment dissociated from unknown confounders. Our approach aims to\ndemystify inherent causal features on adversarial examples by leveraging a\nzero-sum optimization game between a casual feature estimator (i.e., hypothesis\nmodel) and worst-case counterfactuals (i.e., test function) disturbing to find\ncausal features. Through extensive analyses, we demonstrate that the estimated\ncausal features are highly related to the correct prediction for adversarial\nrobustness, and the counterfactuals exhibit extreme features significantly\ndeviating from the correct prediction. In addition, we present how to\neffectively inoculate CAusal FEatures (CAFE) into defense networks for\nimproving adversarial robustness.", + "authors": "Junho Kim, Byung-Kwan Lee, Yong Man Ro", + "published": "2023-03-02", + "updated": "2023-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ME" + ], + "main_content": "Introduction Adversarial examples, which are indistinguishable to human observers but maliciously fooling Deep Neural Networks (DNNs), have drawn great attention in research \ufb01elds due to their security threats used to compromise machine learning systems. In real-world environments, such potential risks evoke weak reliability of the decision-making process for DNNs and pose a question of adopting DNNs in safety-critical areas [4,57,65]. To understand the origin of adversarial examples, seminal works have widely investigated the adversarial vulnerability through numerous viewpoints such as excessive linearity in a hyperplane [25], aberration of statistical \ufb02uctuations [58, 62], and phenomenon induced from frequency *Equal contribution. \u2020 Corresponding author. \ud835\udc4d\ud835\udc4d \ud835\udc4c\ud835\udc4c \ud835\udc48\ud835\udc48 \ud835\udc47\ud835\udc47 \u210e Confounder Outcome Treatment Instrument \ud835\udc4b\ud835\udc4badv Feature Variation \ud835\udc87\ud835\udc87 Causal Estimator \u210e \ud835\udc4d\ud835\udc4d \ud835\udc87\ud835\udc87\ud835\udc4b\ud835\udc4badv Unobserved Confounders Causal Features Figure 1. Data generating process (DGP) with IV. By deploying Z, it can estimate causal relation between treatment T and outcome Y under exogenous condition for unknown confounders U. information [72]. Recently, several works [33, 34] have revealed the existence and pervasiveness of robust and non-robust features in adversarially trained networks and pointed out that the non-robust features on adversarial examples can provoke unexpected misclassi\ufb01cations. 
Nonetheless, there still exists a lack of common consensus [21] on underlying causes of adversarial examples, albeit comprehensive endeavors [31,63]. It is because that the earlier works have focused on analyzing associations between adversarial examples and target labels in the learning scheme of adversarial training [41, 53, 66, 71, 76], which is canonical supervised learning. Such analyses easily induce spurious correlation (i.e., statistical bias) in the learned associations, thereby cannot interpret the genuine origin of adversarial vulnerability under the existence of possibly biased viewpoints (e.g., excessive linearity, statistical \ufb02uctuations, frequency information, and non-robust features). In order to explicate where the adversarial vulnerability comes from in a causal perspective and deduce true adversarial causality, we need to employ an intervention-oriented approach (i.e., causal inference) that brings in estimating causal relations beyond analyzing merely associations for the given data population of adversarial examples. One of the ef\ufb01cient tools for causal inference is instrumental variable (IV) regression when randomized controlled trials (A/B experiments) or full controls of unknown confounders are not feasible options. It is a popular approach used to identify causality in econometrics [13, 15, 46], and it provides an unbiased environment from unknown confounders that raise the endogeneity of causal inference [54]. In IV regression, the instrument is utilized 1 arXiv:2303.01052v1 [cs.LG] 2 Mar 2023 \fto eliminate a backdoor path derived from unknown confounders by separating exogenous portions of treatments. For better understanding, we can instantiate a case of \ufb01nding causal relations [9] between education T and earnings Y as illustrated in Fig. 1. Solely measuring correlation between the two variables does not imply causation, since there may exist unknown confounders U (e.g., individual ability, family background, etc.). Ideally, conditioning on U is the best way to identify causal relation, but it is impossible to control the unobserved variables. David Card [9] has considered IV as the college proximity Z, which is directly linked with education T but intuitively not related with earnings Y . By assigning exogenous portion to Z, it can provide an unbiased environment dissociated from U for identifying true causal relation between T and Y . Speci\ufb01cally, once regarding data generating process (DGP) [52] for causal inference as in Fig. 1, the existence of unknown confounders U could create spurious correlation generating a backdoor path that hinders causal estimator h (i.e., hypothesis model) from estimating causality between treatment T and outcome Y (T \u2190U \u2192Y ). By adopting an instrument Z, we can acquire the estimand of true causality from h in an unbiased state (Z \u2192T \u2192Y ). Bringing such DGP into adversarial settings, the aforementioned controversial perspectives (e.g., excessive linearity, statistical \ufb02uctuations, frequency information, and non-robust features) can be regarded as possible candidates of unknown confounders U to reveal adversarial origins. In most observational studies, everything is endogenous in practice so that we cannot explicitly specify all confounders and conduct full controls of them in adversarial settings. Accordingly, we introduce IV regression as a powerful causal approach to uncover adversarial origins, due to its capability of causal inference although unknown confounders remain. 
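The bias-removal role of the instrument can be seen in a small synthetic example (not from the paper): with a hidden confounder U, a naive regression of Y on T is inflated, whereas the IV (Wald/2SLS) estimate recovers the true effect.

```python
# Toy numerical illustration of instrumental variable regression.
import numpy as np

rng = np.random.default_rng(0)
n, true_effect = 100_000, 2.0
U = rng.normal(size=n)                        # unobserved confounder
Z = rng.normal(size=n)                        # instrument: affects T, not Y directly
T = 1.5 * Z + 1.0 * U + rng.normal(size=n)    # treatment (endogenous)
Y = true_effect * T + 1.0 * U + rng.normal(size=n)

ols = np.cov(T, Y)[0, 1] / np.var(T)            # biased: picks up T <- U -> Y
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, T)[0, 1]    # Wald / 2SLS estimate
print(f"OLS ~ {ols:.2f}, IV ~ {iv:.2f} (true effect = {true_effect})")
```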
Here, unknown confounders U in adversarial settings easily induce ambiguous interpretation for the adversarial origin producing spurious correlation between adversarial examples and their target labels. In order to uncover the adversarial causality, we \ufb01rst need to intervene on the intermediate feature representation derived from a network f and focus on what truly affects adversarial robustness irrespective of unknown confounders U, instead of model prediction. To do that, we de\ufb01ne the instrument Z as feature variation in the feature space of DNNs between adversarial examples and natural examples, where the variation Z is originated from the adversarial perturbation in the image domain such that Z derives adversarial features T for the given natural features. Note that regarding Z as instrument is reasonable choice, since the feature variation alone does not serve as relevant information for adversarial prediction without natural features. Next, once we \ufb01nd causality-related feature representations on adversarial examples, then we name them as causal features Y that can encourage robustness of predicting target labels despite the existence of adversarial perturbation as in Fig. 1. In this paper, we propose adversarial instrumental variable (IV) regression to identify causal features on adversarial examples concerning the causal relation of adversarial prediction. Our approach builds an unbiased environment for unknown confounders U in adversarial settings and estimates inherent causal features on adversarial examples by employing generalized method of moments (GMM) [27] which is a \ufb02exible estimation for non-parametric IV regression. Similar to the nature of adversarial learning [5, 24], we deploy a zero-sum optimization game [19, 40] between a hypothesis model and test function, where the former tries to unveil causal relation between treatment and outcome, while the latter disturbs the hypothesis model from estimating the relation. In adversarial settings, we regard the hypothesis model as a causal feature estimator which extracts causal features in the adversarial features to be highly related to the correct prediction for the adversarial robustness, while the test function makes worst-case counterfactuals (i.e., extreme features) compelling the estimand of causal features to signi\ufb01cantly deviate from correct prediction. Consequently, it can further strengthen the hypothesis model to demystify causal features on adversarial examples. Through extensive analyses, we corroborate that the estimated causal features on adversarial examples are highly related to correct prediction for adversarial robustness, and the test function represents the worst-case counterfactuals on adversarial examples. By utilizing feature visualization [42, 49], we interpret the causal features on adversarial examples in a human-recognizable way. Furthermore, we introduce an inversion of the estimated causal features to handle them on the possible feature bound and present a way of ef\ufb01ciently injecting these CAusal FEatures (CAFE) into defense networks for improving adversarial robustness. 2. Related Work In the long history of causal inference, there have been a variety of works [23, 26, 35] to discover how the causal knowledge affects decision-making process. 
Among various causal approaches, especially in economics, IV regression [54] provides a way of identifying the causal relation between the treatment and outcome of interests despite the existence of unknown confounders, where IV makes the exogenous condition of treatments thus provides an unbiased environment for the causal inference. Earlier works of IV regression [2, 3] have limited the relation for causal variables by formalizing it with linear function, which is known as 2SLS estimator [70]. With progressive developments of machine learning methods, researchers and data scientists desire to deploy them for nonparametric learning [12,13,15,46] and want to overcome the linear constraints in the functional relation among the vari2 \fables. As extensions of 2SLS, DeepIV [28], KernelIV [60], and Dual IV [44] have combined DNNs as non-parametric estimator and proposed effective ways of exploiting them to perform IV regression. More recently, generalized method of moments (GMM) [7, 19, 40] has been cleverly proposed a solution for dealing with the non-parametric hypothesis model on the high-dimensional treatments through a zerosum optimization, thereby successfully achieving the nonparametric IV regression. In parallel with the various causal approaches utilizing IV, uncovering the origin of adversarial examples is one of the open research problems that arouse controversial issues. In the beginning, [25] have argued that the excessive linearity in the networks\u2019 hyperplane can induce adversarial vulnerability. Several works [58,62] have theoretically analyzed such origin as a consequence of statistical \ufb02uctuation of data population, or the behavior of frequency information in the inputs [72]. Recently, the existence of non-robust features in DNNs [33,34] is contemplated as a major cause of adversarial examples, but it still remains inexplicable [21]. Motivated by IV regression, we propose a way of estimating inherent causal features in adversarial features easily provoking the vulnerability of DNNs. To do that, we deploy the zero-sum optimization based on GMM between a hypothesis model and test function [7,19,40]. Here, we assign the role of causal feature estimator to hypothesis model and that of generating worst-case counterfactuals to test function disturbing to \ufb01nd causal features. This strategy results in learning causal features to overcome all trials and tribulations regarded as various types of adversarial perturbation. 3. Adversarial IV Regression Our major goal is estimating inherent causal features on adversarial examples highly related to the correct prediction for adversarial robustness by deploying IV regression. Before identifying causal features, we \ufb01rst specify problem setup of IV regression and revisit non-parametric IV regression with generalized method of moments (GMM). Problem Setup. We start from conditional moment restriction (CMR) [1, 11] bringing in an asymptotically ef\ufb01cient estimation with IV, which reduces spurious correlation (i.e., statistical bias) between treatment T and outcome of interest Y caused by unknown confounders U [50] (see their relationships in Fig. 1). Here, the formulation of CMR can be written with a hypothesis model h, so-called a causal estimator on the hypothesis space H as follows: ET [\u03c8T (h) | Z] = 0, (1) where \u03c8T : H \u2192Rd denotes a generalized residual function [13] on treatment T, such that it represents \u03c8T (h) = Y \u2212h(T) considered as an outcome error for regression task. 
Note that 0 \u2208Rd describes zero vector and d indicates the dimension for the outcome of interest Y , and it is also equal to that for the output vector of the hypothesis model h. The treatment is controlled for being exogenous [48] by the instrument. In addition, for the given instrument Z, minimizing the magnitude of the generalized residual function \u03c8 implies asymptotically restricting the hypothesis model h not to deviate from Y , thereby eliminating the internal spurious correlation on h from the backdoor path induced by confounders U. 3.1. Revisiting Non-parametric IV regression Once we \ufb01nd a hypothesis model h satisfying CMR with instrument Z, we can perform IV regression to endeavor causal inference using h under the following formulation: ET [h(T) | Z] = R t\u2208T h(t)dP(T = t | Z), where P indicates a conditional density measure. In fact, two-stage least squares (2SLS) [2,3,70] is a well-known solver to expand IV regression, but it cannot be directly applied to more complex model such as non-linear model, since 2SLS is designed to work on linear hypothesis model [51]. Later, [28] and [60] have introduced a generalized 2SLS for non-linear model by using a conditional mean embedding and a mixture of Gaussian, respectively. Nonetheless, they still raise an ill-posed problem yielding biased estimates [7,19,44,77] with the non-parametric hypothesis model h on the high dimensional treatment T, such as DNNs. It stems from the curse nature of two-stage methods, known as forbidden regression [3] according to Vapnik\u2019s principle [16]: \u201cdo not solve a more general problem as an intermediate step\u201d. To address it, recent studies [7, 19, 40] have employed generalized method of moments (GMM) to develop IV regression and achieved successful one-stage regression alleviating biased estimates. Once we choose a moment to represent a generic outcome error with respect to the hypothesis model and its counterfactuals, GMM uses the moment to deliver in\ufb01nite moment restrictions to the hypothesis model, beyond the simple constraint of CMR. Expanding Eq. (1), the formulation of GMM can be written with a moment, denoted by m : H \u00d7 G \u2192R as follows (see Appendix A): m(h, g) = EZ,T [\u03c8T (h) \u00b7 g(Z)] = EZ[ET [\u03c8T (h) | Z] | {z } CMR \u00b7g(Z)] = 0, (2) where the operator \u00b7 speci\ufb01es inner product, and g \u2208G denotes test function that plays a role in generating in\ufb01nite moment restrictions on test function space G, such that its output has the dimension of Rd. The in\ufb01nite number of test functions expressed by arbitrary vector-valued functions {g1, g2, \u00b7 \u00b7 \u00b7 } \u2208G cues potential moment restrictions (i.e., empirical counterfactuals) [8] violating Eq. (2). In other words, they make it easy to capture the worst part of IV which easily stimulates the biased estimates for hypothesis model h, thereby helping to obtain more genuine causal 3 \frelation from h by considering all of the possible counterfactual cases g for generalization. However, it has an analogue limitation that we cannot deal with in\ufb01nite moments because we only handle observable \ufb01nite number of test functions. Hence, recent studies construct maximum moment restriction [19, 43, 77] to ef\ufb01ciently tackle the in\ufb01nite moments by focusing only on the extreme part of IV, denoted as supg\u2208G m(h, g) in a closedform expression. 
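In practice the expectation in Eq. (2) is replaced by a sample average, and the supremum over test functions is approximated by gradient ascent on a parameterized g; a minimal sketch of the empirical moment, with h and g as arbitrary PyTorch modules, could look as follows (the function signature is an assumption for illustration).

```python
import torch

def empirical_moment(h, g, T, Z, Y):
    """Monte-Carlo estimate of the moment m(h, g) = E[(Y - h(T)) . g(Z)]."""
    psi = Y - h(T)                               # generalized residual psi_T(h)
    return (psi * g(Z)).flatten(1).sum(1).mean() # inner product, then expectation
```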
By doing so, we can concurrently minimize the moments for the hypothesis model to fully satisfy the worst-case generalization performance over test functions. Thereby, GMM can be re-written with min-max optimization thought of as a zero-sum game between the hypothesis model h and test function g: min h\u2208H sup g\u2208G m(h, g) \u2248min h\u2208H max g\u2208G EZ,T [\u03c8T (h) \u00b7 g(Z)], (3) where the in\ufb01nite number of test functions can be replaced with the non-parametric test function in the form of DNNs. Next, we bridge GMM of Eq. (3) to adversarial settings and unveil the adversarial origin by establishing adversarial IV regression with maximum moment restriction. 3.2. Demystifying Adversarial Causal Features To demystify inherent causal features on adversarial examples, we \ufb01rst de\ufb01ne feature variation Z as the instrument, which can be written with adversarially trained DNNs denoted by f as follows: Z = fl(X\u03f5) \u2212fl(X) = Fadv \u2212Fnatural, (4) where fl outputs a feature representation in lth intermediate layer, X represents natural inputs, and X\u03f5 indicates adversarial examples with adversarial perturbation \u03f5 such that X\u03f5 = X + \u03f5. In the sense that we have a desire to uncover how adversarial features Fadv truly estimate causal features Y which are outcomes of our interests, we set the treatment to T = Fadv and set counterfactual treatment with a test function to TCF = Fnatural + g(Z). Note that, if we na\u00a8 \u0131vely apply test function g to adversarial features T to make counterfactual treatment TCF such that TCF = g(T), then the outputs (i.e., causal features) of hypothesis model h(TCF) may not be possibly acquired features considering feature bound of DNNs f. In other words, if we do not keep natural features in estimating causal features, then the estimated causal features will be too exclusive features from natural ones. This results in non-applicable features considered as an imaginary feature we cannot handle, since the estimated causal features are signi\ufb01cantly manipulated ones only in a speci\ufb01c intermediate layer of DNNs. Thus, we set counterfactual treatment to TCF = Fnatural + g(Z). This is because above formation can preserve natural features, where we \ufb01rst subtract natural features from counterfactual treatment such that T \u2032 = TCF \u2212Fnatural = g(Z) and add the output Y \u2032 of hypothesis model to natural features for recovering causal features such that Y = Y \u2032 + Fnatural = h(T \u2032) + Fnatural. In brief, we intentionally translate causal features and counterfactual treatment not to deviate from possible feature bound. Now, we newly de\ufb01ne Adversarial Moment Restriction (AMR) including the counterfactuals computed by the test function for adversarial examples, as follows: ET \u2032[\u03c8T \u2032(h) | Z] = 0. Here, the generalized residual function \u03c8T \u2032|Z(h) = Y \u2032 \u2212h(T \u2032) in adversarial settings deploys the translated causal features Y \u2032. Bring them together, we re-formulate GMM with counterfactual treatment to \ufb01t adversarial IV regression, which can be written as (Note that h and g consist of a simple CNN structure): min h\u2208H max g\u2208G EZ[ET \u2032[\u03c8T \u2032(h) | Z] | {z } AMR g(Z)] = EZ[\u03c8T \u2032|Z(h)g(Z)], (5) where it satis\ufb01es ET \u2032[\u03c8T \u2032(h) | Z] = \u03c8T \u2032|Z(h) because Z corresponds to only one translated counterfactual treatment T \u2032 = g(Z). 
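A hedged PyTorch sketch of how the quantities in Eqs. (4)-(5) fit together is given below; `f_l` denotes the sub-network up to layer l, and `h`, `g` are the small CNN hypothesis model and test function. The function names are placeholders for illustration, not the authors' API.

```python
import torch

def adversarial_iv_terms(f_l, h, g, x_nat, x_adv):
    with torch.no_grad():
        F_nat = f_l(x_nat)                 # natural features
        F_adv = f_l(x_adv)                 # treatment T = F_adv
    Z = F_adv - F_nat                      # instrument: feature variation, Eq. (4)
    T_prime = g(Z)                         # translated counterfactual treatment T'
    Y_causal = h(T_prime) + F_nat          # causal features Y = h(T') + F_natural
    T_cf = F_nat + T_prime                 # counterfactual treatment T_CF
    return Z, T_prime, Y_causal, T_cf
```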
Here, we cannot directly compute the generalized residual function \u03c8T \u2032|Z(h) = Y \u2032 \u2212h(T \u2032) in AMR, since there are no observable labels for the translated causal features Y \u2032 on high-dimensional feature space. Instead, we make use of onehot vector-valued target label G \u2208RK (K : class number) corresponding to the natural input X in classi\ufb01cation task. To utilize it, we alter the domain of computing GMM from feature space to log-likelihood space of model prediction by using the log-likelihood function: \u2126(\u03c9) = log fl+(Fnatural + \u03c9), where fl+ describes the subsequent network returning classi\ufb01cation probability after lth intermediate layer. Accordingly, the meaning of our causal inference is further re\ufb01ned to \ufb01nd inherent causal features of correctly predicting target labels even under worst-case counterfactuals. To realize it, Eq. (5) is modi\ufb01ed with moments projected to the log-likelihood space as follows: min h\u2208H max g\u2208G EZ[\u03c8\u2126 T \u2032|Z(h) \u00b7 (\u2126\u25e6g)(Z)] = EZ[{Glog \u2212(\u2126\u25e6h)(T \u2032)} \u00b7 (\u2126\u25e6g)(Z)], (6) where \u03c8\u2126 T \u2032|Z(h) indicates the generalized residual function on the log-likelihood space, the operator \u25e6symbolizes function composition, and Glog is log-target label such that satis\ufb01es Glog = log G. Each element (k = 1, 2, \u00b7 \u00b7 \u00b7 , K) of log-target label has G(k) log = 0 when it is G(k) = 1 and has G(k) log = \u2212\u221ewhen it is G(k) = 0. To implement it, we just ignore the element G(k) log = \u2212\u221eand use another only. So far, we construct GMM based on AMR in Eq. (6), namely AMR-GMM, to behave adversarial IV regression. However, there is absence of explicitly regularizing the test function, thus there happens generalization gap between ideal and empirical moments (see Appendix B). 4 \fThereby, it violates possible feature bounds of the test function and brings in imbalanced predictions on causal inference (see Fig. 4). To become a rich test function, previous works [7, 19, 40, 67] have employed Rademacher complexity [6, 36, 73] that provides tight generalization bounds for a family of functions. It has a strong theoretical foundation to control a generalization gap, thus it is related to various regularizers used in DNNs such as weight decay, Lasso, Dropout, and Lipschitz [20, 64, 68, 75]. In AMRGMM, it plays a role in enabling the test functions to \ufb01nd out the worst-case counterfactuals within adversarial feature bound. Following Appendix B, we build a \ufb01nal objective of AMR-GMM with rich test function as follows: min h\u2208H max g\u2208G EZ[\u03c8\u2126 T \u2032|Z(h)\u00b7(\u2126\u25e6g)(Z)]\u2212|EZ[Z\u2212g(Z)]|2. (7) Please see more details of AMR-GMM algorithm attached in Appendix D due to page limits. 4. Analyzing Properties of Causal Features In this section, we \ufb01rst notate several conjunctions of feature representation from the result of adversarial IV regression with AMR-GMM as: (i) Adversarial Feature (Adv): Fnatural + Z, (ii) CounterFactual Feature (CF): Fnatural + g(Z), (iii) Counterfactual Causal Feature (CC): Fnatural + (h \u25e6g)(Z), and (iv) Adversarial Causal Feature (AC): Fnatural + h(Z). By using them, we estimate adversarial robustness computed by classi\ufb01cation accuracy for which the above feature conjunctions are propagated through fl+, where standard attacks generate feature variation Z and adversarial features T. 
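The alternating min-max update implied by Eq. (7) could be sketched as follows, assuming `f_l_plus` returns class probabilities and that only the target-class entry of Glog is used (the remaining entries are ignored, as described above); the optimizer handles, numerical constants, and update order are assumptions, and the authors' actual procedure is given in their Appendix D.

```python
import torch

def amr_gmm_step(f_l_plus, h, g, opt_h, opt_g, F_nat, Z, labels, eps=1e-12):
    """One alternating min-max update for the AMR-GMM objective of Eq. (7)."""
    def omega(delta):                          # Omega(w) = log f_l+(F_nat + w), target class
        logp = torch.log(f_l_plus(F_nat + delta) + eps)
        return logp.gather(1, labels[:, None]).squeeze(1)

    def moment():
        psi = 0.0 - omega(h(g(Z)))             # Glog[target] = 0 minus (Omega o h)(T')
        return (psi * omega(g(Z))).mean()

    # Test function step: maximize the moment minus the richness regularizer.
    reg = (Z - g(Z)).mean(dim=0).pow(2).sum()  # |E_Z[Z - g(Z)]|^2
    loss_g = -(moment() - reg)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Hypothesis model step: minimize the moment.
    loss_h = moment()
    opt_h.zero_grad(); loss_h.backward(); opt_h.step()
```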
Note that, implementation of all feature representations is treated at the last convolutional layer of DNNs f as in [34], since it mostly contains the high-level object concepts and has the unexpected vulnerability for adversarial perturbation due to high-order interactions [17]. Here, average treatment effects (ATE) [32], used for conventional validation of causal approach, is replaced with adversarial robustness of the conjunctions. 4.1. Validating Hypothesis Model and Test Function After optimizing hypothesis model and test function using AMR-GMM for adversarial IV regression, we can then control endogenous treatment (i.e., adversarial features) and separate exogenous portion from it, namely causal features, in adversarial settings. Here, the hypothesis model \ufb01nds causal features on adversarial examples, highly related to correct prediction for adversarial robustness even with the adversarial perturbation. On the other hand, the test function generates worst-case counterfactuals to disturb estimating causal features, thereby degrading capability of hypothesis model. These learning strategy enables hypothesis model to estimate inherent causal features overcoming all trials and tribulations from the counterfactuals. Therefore, VGG-16 ResNet-18 \u221e \u221e \u221e \u221e (a) CIFAR-10 (b) ImageNet Figure 2. Adversarial robustness of Adv, CF, CC, AC on VGG-16 and ResNet-18 under three attack modes: FGSM [25], PGD [41], CW\u221e[10] for CIFAR-10 [37] and ImageNet [18]. the \ufb01ndings of the causal features on adversarial examples has theoretical evidence by nature of AMR-GMM to overcome various types of adversarial perturbation. Note that, our IV setup posits homogeneity assumption [30], a more general version than monotonicity assumption [2], that adversarial robustness (i.e., average treatment effects) consistently retains high for all data samples despite varying natural features Fnatural depending on data samples. As illustrated in Fig. 2, we intensively examine the average treatment effects (i.e., adversarial robustness) for the hypothesis model and test function by measuring classi\ufb01cation accuracy of the feature conjunctions (i.e., Adv, CF, CC, AC) for all dataset samples. Here, we observe that the adversarial robustness of CF is inferior to that of CC, AC, and even Adv. Intuitively, it is an obvious result since the test function violating Eq. (7) forces feature representation to be the worst possible condition of extremely deviating from correct prediction. For the prediction results for CC and AC, they show impressive robustness performance than Adv with large margins. Since AC directly leverages the feature variation acquired from adversarial perturbation, they present better adversarial robustness than CC obtained from the test function outputting the worst-case counterfactuals on the feature variation. Intriguingly, we notice that both results from the hypothesis model generally show constant robustness even in a high-con\ufb01dence adversarial attack [10] fabricating unseen perturbation. Such robustness demonstrates the estimated causal features have ability to overcome various types of adversarial perturbation. 4.2. Interpreting Causal Effects and Visual Results We have reached the causal features in adversarial examples and analyzed their robustness. After that, our next question is \u201dCan the causal features per se have semantic information for target objects?\u201d. 
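The robustness comparison described here amounts to scoring each feature conjunction with the subsequent network f_l+; a minimal sketch, with placeholder handles for the networks, is given below.

```python
import torch

@torch.no_grad()
def conjunction_accuracy(f_l_plus, h, g, F_nat, Z, labels):
    conjunctions = {
        "Adv": F_nat + Z,          # adversarial features
        "CF":  F_nat + g(Z),       # counterfactual features
        "CC":  F_nat + h(g(Z)),    # counterfactual causal features
        "AC":  F_nat + h(Z),       # adversarial causal features
    }
    accs = {}
    for name, feat in conjunctions.items():
        pred = f_l_plus(feat).argmax(dim=1)
        accs[name] = (pred == labels).float().mean().item()
    return accs
```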
(Figure 3. Feature visualization results of representing natural features, Adv, AC, and CF; from the top row, CIFAR-10, SVHN, and ImageNet are sequentially used for the feature visual interpretation.) Recent works [22, 34, 39] have investigated the semantic meaning of feature representations in adversarial settings, and we likewise utilize the feature visualization method [42, 47, 49] on the input domain to interpret the feature conjunctions in a human-recognizable manner. As shown in Fig. 3, we can generally observe that the results of natural features represent the semantic meaning of the target objects. On the other hand, adversarial features (Adv) steer the feature representation toward the adversarially attacked target objects. As aforementioned, the test function distracts treatments into worst-case counterfactuals, which exacerbates the feature variation from the adversarial perturbation.
Thereby, the visualization of CF is remarkably shifted to the violated feature representation for target objects. For instance, as in ImageNet [18] examples, we can see that the visualization of CF displays Hen and Langur features, manipulated from Worm fence and Croquet ball, respectively. We note that red \ufb02owers in original images have changed into red cockscomb and patterns of hen feather, in addition, people either have changed into distinct characteristics of langur, which accelerates the disorientation of feature representation to the worst counterfactuals. Contrastively, the visualization of AC displays a prominent exhibition and semantic consistency for target objects, where we can recognize their semantic information by themselves and explicable to human observers. By investigating visual interpretations, we reveal that feature representations acquired from the hypothesis model and test function both have causally semantic information, and their roles are in line with the theoretical evidence of our causal approach. In brief, we validate semantic meaning of causal features immanent in high-dimensional space despite the counterfactuals. 4.3. Validating Conditions of IV Setup The instrumental variable needs to satisfy the following three valid conditions in order to successfully achieve nonVGG ResNet WRN CIFAR SVHN Tiny CIFAR SVHN Tiny CIFAR SVHN Tiny fl+(T) 44.8 52.1 21.5 46.5 55.4 24.2 48.7 56.7 25.5 fl+(Z) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 \u03c1 0.9 0.8 0.8 0.9 0.8 0.7 0.9 0.9 0.8 Table 1. Empirical validation for three conditions of our IV setup. fl+(T) and fl+(Z) indicates model performance (%) of adversarial robustness by propagating adversarial features T and feature variation Z with subsequent network, respectively. The last row represents Pearson correlation: \u03c1 = Cov(Z, T)/\u03c3Z\u03c3T . parametric IV regression based on previous works [28,44]: independent of the outcome error such that \u03c8 \u22a5Z (Unconfoundedness) where \u03c8 denotes outcome error, and do not directly affect outcomes such that Z \u22a5Y | T, \u03c8 (Exclusion Restriction) but only affect outcomes through a connection of treatments such that Cov(Z, T) \u0338= 0 (Relevance). For Unconfoundedness, various works [41,53,66,71,76] have proposed adversarial training robustifying DNNs f with adversarial examples inducing feature variation that we consider as IV to improve robustness. In other words, when we see them in a perspective of IV regression, we can regard them as the efforts satisfying CMR in DNNs f for the given feature variation Z. Aligned with our causal viewpoints, the \ufb01rst row in Tab. 1 shows the existence of adversarial robustness with adversarial features T. Therefore, we can say that our IV (i.e., feature variation) on adversarially trained models satis\ufb01es valid condition of Unconfoundedness, so that IV is independent of the outcome error. For Exclusion Restriction, feature variation Z itself cannot serve as enlightening information to model prediction without natural features, because only propagating the residual feature representation has no effect to model prediction by the learning nature of DNNs. Empirically, the second row in Tab. 1 demonstrates that Z cannot be helpful representation for prediction. Thereby, our IV is not encouraged to be correlated directly with the outcome, and it satis\ufb01es valid condition of Exclusion Restriction. 
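The empirical checks reported in Table 1 can be reproduced schematically as below: classification accuracy when propagating T and Z through f_l+, and the Pearson correlation between Z and T; the handles are placeholders for illustration.

```python
import torch

@torch.no_grad()
def iv_condition_checks(f_l_plus, F_nat, Z, labels):
    T = F_nat + Z                                                # adversarial features
    acc_T = (f_l_plus(T).argmax(1) == labels).float().mean()     # unconfoundedness proxy
    acc_Z = (f_l_plus(Z).argmax(1) == labels).float().mean()     # exclusion restriction proxy
    z, t = Z.flatten(), T.flatten()
    rho = ((z - z.mean()) * (t - t.mean())).mean() / (           # relevance: Pearson rho
        z.std(unbiased=False) * t.std(unbiased=False))
    return acc_T.item(), acc_Z.item(), rho.item()
```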
6 \fMethod CIFAR-10 SVHN Tiny-ImageNet Natural FGSM PGD CW\u221e AP DLR AA Natural FGSM PGD CW\u221e AP DLR AA Natural FGSM PGD CW\u221e AP DLR AA VGG ADV 78.5 49.8 44.8 42.6 43.2 42.9 40.7 91.9 64.8 52.1 48.9 48.0 48.5 45.2 53.2 25.3 21.5 21.0 20.2 20.8 19.6 ADVCAFE 78.4 52.2 47.9 44.1 46.4 44.5 42.7 91.5 67.0 55.3 50.0 51.3 49.6 46.1 52.6 26.0 22.8 22.1 21.8 22.0 21.0 TRADES 79.5 50.4 45.7 43.2 44.4 42.9 41.8 91.9 66.4 53.6 49.1 49.1 47.7 45.2 52.8 25.9 22.5 21.9 21.5 21.8 20.7 TRADESCAFE 77.0 51.6 47.9 44.0 47.0 43.9 42.7 90.3 67.8 56.1 50.0 53.6 49.1 47.5 52.1 26.5 23.6 22.6 22.5 22.6 21.6 MART 79.7 52.4 47.2 43.4 45.5 43.8 42.0 92.6 66.6 54.2 47.9 49.6 47.1 44.4 53.1 25.0 21.5 21.2 20.4 21.0 19.9 MARTCAFE 78.3 54.2 49.7 43.9 48.1 44.5 42.7 91.3 67.6 57.3 49.5 54.2 48.3 46.4 53.0 25.6 22.3 21.6 21.3 21.5 20.5 AWP 78.0 51.7 48.2 43.5 47.2 43.4 42.6 90.8 65.5 56.6 50.4 54.0 49.7 48.6 52.6 28.0 25.7 23.6 24.8 23.5 22.8 AWPCAFE 77.4 54.8 51.4 44.2 50.2 44.9 43.5 91.9 67.9 58.6 51.2 55.9 51.1 49.7 52.9 28.8 26.4 24.2 25.6 24.1 23.4 HELP 77.4 51.8 48.3 43.9 47.3 43.9 42.9 91.2 65.8 56.6 50.9 53.9 50.2 48.8 53.0 28.3 25.9 23.9 25.1 23.8 23.1 HELPCAFE 75.6 54.4 51.4 44.6 50.4 44.8 43.7 91.5 67.3 58.5 51.6 56.2 51.4 50.0 52.6 29.4 27.1 24.7 26.4 24.4 23.9 ResNet ADV 82.0 52.1 46.5 44.8 44.8 44.8 43.0 92.8 70.4 55.4 51.3 50.9 51.0 47.5 57.2 27.3 24.2 23.2 22.8 23.2 21.8 ADVCAFE 82.6 55.9 50.7 47.6 49.0 47.7 46.2 92.5 73.6 58.9 53.8 54.9 52.6 49.8 56.3 28.6 25.7 24.7 24.4 24.6 23.5 TRADES 83.0 55.0 49.8 47.5 48.3 47.3 46.1 93.2 72.8 57.7 52.6 53.0 51.5 48.9 56.5 28.4 25.3 24.4 24.2 24.3 23.2 TRADESCAFE 80.7 56.6 51.4 48.5 50.4 48.3 46.7 91.3 73.9 59.6 54.1 56.7 53.2 51.3 54.5 29.6 27.4 26.3 26.5 26.2 25.4 MART 83.5 56.1 50.1 47.1 48.3 47.0 45.5 93.7 74.2 58.3 51.7 53.2 50.8 47.8 57.1 27.4 24.2 23.2 22.9 23.2 22.2 MARTCAFE 82.1 57.3 51.9 48.1 50.2 48.0 46.2 92.2 74.9 61.0 53.4 57.3 51.8 49.7 55.9 28.6 25.9 24.6 24.7 24.5 23.5 AWP 81.2 55.3 51.6 48.0 50.5 47.8 46.9 92.2 71.1 59.8 54.3 56.8 53.6 52.0 56.2 30.5 28.5 26.2 27.6 26.2 25.5 AWPCAFE 81.5 57.8 54.2 49.4 52.9 49.0 47.8 93.4 74.0 60.9 55.0 57.8 54.8 52.7 56.6 31.4 29.2 27.1 28.4 27.0 26.5 HELP 80.5 55.8 52.1 48.4 51.1 48.5 47.4 92.6 72.0 59.8 54.4 56.6 53.9 52.0 56.1 31.0 28.6 26.3 27.7 26.3 25.7 HELPCAFE 80.6 57.8 54.5 49.4 53.1 49.5 48.5 92.9 73.9 61.3 55.3 58.8 54.6 52.8 55.4 32.0 29.7 27.4 29.2 27.8 27.3 WRN ADV 84.3 54.5 48.7 47.8 47.0 47.9 45.6 94.0 71.8 56.7 53.2 51.9 52.8 49.0 60.9 29.8 25.5 25.8 24.2 26.0 23.9 ADVCAFE 85.7 58.5 53.3 51.3 51.8 51.5 49.5 93.7 75.7 59.1 54.9 54.0 54.1 50.2 60.6 31.1 27.3 27.2 25.8 27.4 25.4 TRADES 86.3 57.1 52.1 50.8 50.6 50.7 49.0 93.8 74.0 58.1 53.9 53.0 53.4 49.9 60.8 30.5 26.4 26.7 25.0 26.8 24.6 TRADESCAFE 83.7 58.6 54.5 52.0 53.2 52.0 50.1 92.4 75.6 61.0 55.7 58.0 58.0 53.0 60.3 31.7 28.2 28.3 27.0 28.5 26.5 MART 86.5 58.5 52.6 50.0 50.7 49.9 48.0 94.2 75.0 58.0 53.1 52.8 52.8 48.9 60.7 29.9 25.6 25.9 24.0 25.5 23.6 MARTCAFE 85.7 59.8 54.6 51.4 52.7 50.9 49.3 93.0 76.5 61.9 54.9 57.2 53.8 50.7 60.4 31.2 27.5 26.8 25.5 27.0 25.1 AWP 83.7 58.0 54.7 51.3 53.7 51.2 50.1 93.2 73.4 60.8 55.9 57.5 55.5 53.6 61.9 35.5 32.8 31.0 31.6 31.1 29.6 AWPCAFE 84.6 60.6 56.9 52.4 55.5 52.3 51.1 94.2 76.9 62.7 57.5 59.2 57.1 54.6 61.4 36.6 34.2 32.3 33.2 32.5 30.8 HELP 83.8 58.6 54.9 51.6 53.8 51.6 50.3 93.5 73.4 60.8 56.5 57.6 56.1 54.0 61.8 35.9 33.0 31.3 31.8 31.3 29.8 HELPCAFE 83.1 60.5 57.1 52.7 56.0 52.6 51.3 94.0 76.6 62.6 57.7 58.8 57.2 55.0 61.1 37.0 34.7 32.6 33.8 32.8 31.2 Table 2. 
Comparison of adversarial robustness and improvement from CAFE on \ufb01ve defense baselines: ADV, TRADES, MART, AWP, HELP, trained with VGG-16, ResNet-18, WideResNet-34-10 for three datasets under six attacks: FGSM, PGD, CW\u221e, AP, DLR, AA. For Relevance, when taking a look at the estimation procedure of adversarial feature T such that T = Z + Fnatural, feature variation Z explicitly has a causal in\ufb02uence on T. This is because, in our IV setup, the treatment T is directly estimated by instrument Z given natural features Fnatural. By using all data samples, we empirically compute Pearson correlation coef\ufb01cient to prove existence of highly related connection between them as described in the last row of Tab. 1. Therefore, our IV satis\ufb01es Relevance condition. 5. Inoculating CAusal FEatures for Robustness Next, we explain how to ef\ufb01ciently implant the causal features into various defense networks for robust networks. To eliminate spurious correlation of networks derived from the adversary, the simplest approach that we can come up with is utilizing the hypothesis model to enhance the robustness. However, there is a realistic obstacle that it works only when we already identify what is natural inputs and their adversarial examples in inference phase. Therefore, it is not feasible approach to directly exploit the hypothesis model to improve the robustness. To address it, we introduce an inversion of causal features (i.e., causal inversion) re\ufb02ecting those features on input domain. It takes an advantage of well representing causal features within allowable feature bound regarding network parameters of the preceding sub-network fl for the given adversarial examples. In fact, causal features are manipulated on an intermediate layer by the hypothesis model h, thus they are not guaranteed to be on possible feature bound. The causal inversion then serves as a key in resolving it without harming causal prediction much, and its formulation can be written with causal perturbation using distance metric of KL divergence DKL as: \u03b4causal = arg min \u2225\u03b4\u2225\u221e\u2264\u03b3 DKL (fl+(FAC) || f(X\u03b4)) , (8) where FAC indicates adversarial causal features distilled by hypothesis model h, and \u03b4causal denotes causal perturbation to represent causal inversion Xcausal such that Xcausal = X+\u03b4causal. Note that, so as not to damage the information of natural input during generating the causal inversion Xcausal, we constraint the perturbation \u03b4 to l\u221ewithin \u03b3-ball, as known as perturbation budget, to be human-imperceptible one such that \u2225\u03b4\u2225\u221e\u2264\u03b3. Appendix C shows the statistical distance away from con\ufb01dence score for model prediction of causal features, compared with that of causal inversion, natural input, and adversarial examples. As long as being capable of handling causal features using the causal inversion such that \u02c6 FAC = fl(Xcausal), we can now develop how to inoculate CAusal FEatures (CAFE) to defense networks as a form of empirical risk minimization (ERM) with small population of perturbation \u03f5, as follows: min f\u2208F ES \u0014 max \u2225\u03f5\u2225\u221e\u2264\u03b3 LDefense + DKL(fl+( \u02c6 FAC) || fl+(Fadv)) \u0015 , (9) 7 \fwhere LDefense speci\ufb01es a pre-de\ufb01ned loss such as [41, 53, 66, 71, 76] for achieving a defense network f on network parameter space F, and S denotes data samples such that (X, G) \u223cS. 
The rest term represents a causal regularizer serving as causal inoculation to make adversarial features Fadv assimilate causal features FAC. Speci\ufb01cally, while LDefense robusti\ufb01es network parameters against adversarial examples, the regularizer helps to hold adversarial features not to stretch out from the possible bound of causal features, thereby providing networks to backdoor path-reduced features dissociated from unknown confounders. More details for training algorithm of CAFE are attached in Appendix E. 6. Experiments 6.1. Implementation and Experimental Details We conduct exhaustive experiments on three datasets and three networks to verify generalization in various conditions. For datasets, we take low-dimensional datasets: CIFAR-10 [37], SVHN [45], and a high-dimensional dataset: Tiny-ImageNet [38]. To train the three datasets, we adopt standard networks: VGG-16 [59], ResNet-18 [29], and an advanced large network: WideResNet-34-10 [74]. For attacks, we use perturbation budget 8/255 for CIFAR-10, SVHN and 4/255 for Tiny-ImageNet with two standard attacks: FGSM [25], PGD [41], and four strong attacks: CW\u221e[10], and AP (Auto-PGD: step size-free), DLR (Auto-DLR: shift and scaling invariant), AA (Auto-Attack: parameter-free) introduced by [14]. PGD, AP, DLR have 30 steps with random starts where PGD has step sizes 0.0023 and 0.0011 respectively, and AP, DLR have momentum coef\ufb01cient \u03c1 = 0.75. CW\u221euses gradient clamping for l\u221e with CW objective [10] on \u03ba = 0 in 100 iterations. For defenses, we adopt a standard defense baseline: ADV [41] and four strong defense baselines: TRADES [76], MART [66], AWP [71], HELP [53]. We generate adversarial examples using PGD [41] on perturbation budget 8/255 where we set 10 steps and 0.0072 step size in training. Especially, adversarially training for Tiny-ImageNet is a computational burden, so we employ fast adversarial training [69] with FGSM on the budget 4/255 and its 1.25 times step size. For all training, we use SGD [56] with 0.9 momentum and learning rate of 0.1 scheduled by Cyclic [61] in 120 epochs [55,69]. 6.2. Comparing Adversarial Robustness We align the above \ufb01ve defense baselines with our experiment setup to fairly validate adversarial robustness. From Eq. (8), we \ufb01rst acquire causal inversion to straightly deal with causal features. Subsequently, we employ the causal inversion to carry out causal inoculation to all networks by adding the causal regularizer to the pre-de\ufb01ned loss of the defense baselines from scratch, as described in Eq. (9). Tab. 2 demonstrates CAFE boosts the \ufb01ve de(a) Rademacher Distance (b) Imbalance Ratio Figure 4. Displaying box distribution statistics of Rademacher Distance and Imbalance ratio for prediction results, compared with w/ Regularizer and w/o Regularizer on two datasets for VGG-16. fense baselines and outperforms them even on the large network and large dataset, so that we verify injecting causal features works well in all networks. Appendix F shows ablation studies for CAFE without causal inversion to identify where the effectiveness comes from. 6.3. Ablation Studies on Rich Test Function To validate that the regularizer truely works in practice, we measure Rademacher Distance and display its box distribution as illustrated in Fig. 4 (a). Here, we can apparently observe the existence of the regularization ef\ufb01ciency through narrowed generalization gap. 
Concretely, both median and average of Rademacher Distance for the regularized test function are smaller than the non-regularized one. Next, in order to investigate how rich test function helps causal inference, we examine imbalance ratio of prediction results for the hypothesis model, which is calculated as # of minimum predicted classes divided by # of maximum predicted classes. If the counterfactual space deviates from possible feature bound much, the attainable space that hypothesis model can reach is only restricted areas. Hence, the hypothesis model may predict biased prediction results for the target objects. As our expectation, we can observe the ratio with the regularizer is largely improved than nonregularizer for both datasets as in Fig. 4 (b). Consequently, we can summarize that rich test function acquired from the localized Rademacher regularizer serves as a key in improving the generalized capacity of causal inference. 7." + }, + { + "url": "http://arxiv.org/abs/2212.03177v2", + "title": "Privacy-Preserving Visual Localization with Event Cameras", + "abstract": "We present a robust, privacy-preserving visual localization algorithm using\nevent cameras. While event cameras can potentially make robust localization due\nto high dynamic range and small motion blur, the sensors exhibit large domain\ngaps making it difficult to directly apply conventional image-based\nlocalization algorithms. To mitigate the gap, we propose applying\nevent-to-image conversion prior to localization which leads to stable\nlocalization. In the privacy perspective, event cameras capture only a fraction\nof visual information compared to normal cameras, and thus can naturally hide\nsensitive visual details. To further enhance the privacy protection in our\nevent-based pipeline, we introduce privacy protection at two levels, namely\nsensor and network level. Sensor level protection aims at hiding facial details\nwith lightweight filtering while network level protection targets hiding the\nentire user's view in private scene applications using a novel neural network\ninference pipeline. Both levels of protection involve light-weight computation\nand incur only a small performance loss. We thus project our method to serve as\na building block for practical location-based services using event cameras. The\ncode and dataset will be made public through the following link:\nhttps://github.com/82magnolia/event_localization.", + "authors": "Junho Kim, Young Min Kim, Yicheng Wu, Ramzi Zahreddine, Weston A. Welge, Gurunandan Krishnan, Sizhuo Ma, Jian Wang", + "published": "2022-12-04", + "updated": "2022-12-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Visual localization is a versatile localization method widely used in AR/VR that aims to \ufb01nd the camera pose only using image input. While recent visual localization methods successfully provide robust camera pose estimation in a variety of scenes [54,64,67,71], privacy concerns may arise due to the requirement of image capture for localization [10,12]. As shown in Figure 1, the localization service user may be concerned with sharing the current view *Work done during an internship at Snap Research \u2020Co-corresponding authors Figure 1. Overview of our approach. We target concerns in clientserver localization where the user with limited compute shares visual information with the service provider. 
Our privacy protection operates at sensor and network level, which enables hiding sensitive details for event-to-image conversion. The results are then used for our localization pipeline which involves converting events to images and applying image-based localization. with the service provider, which is inevitable in edge devices with a limited amount of compute (e.g. smartphones, AR glasses). Further concerns can arise in the observed arXiv:2212.03177v2 [cs.CV] 8 Dec 2022 \fperson side, who can be unknowingly captured in the localization process of another person\u2019s camera. This concern in particular has been widely addressed in mobile applications by notifying when camera capture is happening [1,4,5], but mere noti\ufb01cation is insuf\ufb01cient to fully alleviate the concerns of observed people. Event cameras, which are visual sensors that only record brightness changes [19,39], have the potential to provide robust, privacy-preserving visual localization. Unlike normal cameras that capture the absolute scene brightness, these cameras encode brightness changes as a stream of events. The sensors have a high temporal resolution and dynamic range, which is bene\ufb01cial for robust localization in challenging scenarios such as low lighting or fast camera motion. Further, as the power consumption of the sensor is far lower than normal cameras [19], performing machine vision using these sensors is amenable for applications in AR/VR. In a privacy perspective, since only a fraction of visual information is captured, the sensors can naturally hide \ufb01negrained visual details at the expense of relatively unstable visual features compared to normal cameras. We propose an event-based visual localization method that can perform robust localization while preserving privacy. Our proposed scenario assumes a light-weight capture and computation from the user side and heavy processing from the service provider side. For localization, we employ event-to-image conversion to adapt powerful image-based localization methods [24,51,53] on the data captured from an edge device equipped with an event camera. Here the service provider performs the computationally expensive conversion [49, 56, 57, 63, 68] and applies visual localization on the recovered image. Our resulting pipeline inherits the advantages of event cameras and state-of-the-art image features [14, 52]: we can perform stable and accurate localization in fast camera motion or low lighting, where visual localization using normal cameras typically fails. We additionally integrate privacy protection speci\ufb01cally tailored for event cameras in two levels, namely sensor level and network level as shown in Figure 1b. In the sensor level, we propose a novel \ufb01ltering method that blurs facial landmarks without explicitly detecting them, while preserving important landmarks for localization as shown in Figure 4. This process reduces people\u2019s concern about being recorded by edge device users, and is suf\ufb01ciently light-weight for implementation on sensor chips. In the network level, we propose to split neural network inference during localization so that the costly intermediate computation is performed on the service provider side, while simultaneously preventing the service provider from reconstructing images from events. The technique targets users willing to use location-based services in private spaces (e.g. apartment rooms), where hiding the entire user view may be solicited as shown in Figure 4. 
Both levels of privacy protection are light-weight and incur only a small drop in localization performance, which are further veri\ufb01ed in our experiments. We evaluate our method on a wide range of localization scenarios including scenes captured with moving people, low-lighting, or fast camera motion. Experiments show that our approach can robustly localize in such challenging scenarios and the privacy protection pipeline can effectively hide sensitive visual content while preserving localization performance. As our task is fairly new, we record new datasets called EvRooms and EvHumans which will be partially released in public to spur further research in event-based visual localization. To summarize, our key contributions are: (i) robust localization in challenging conditions using event cameras, (ii) sensor level privacy protection for relieving observed people\u2019s concerns, and (iii) network level privacy protection for mitigating user\u2019s concerns. Equipped with robust localization and privacy protection, we expect our method to offer a practical solution to camera pose estimation using event cameras. 2. Related Work Event-Based Mapping and Localization Due to the high dynamic range and small motion blur, event cameras are suitable for visual odometry (VO) or SLAM tasks involving sequential pose estimation and depth prediction. Existing works in this direction [9,20,23,27,31,32] propose various event aggregation and measurement update methods to effectively utilize event data with minimal latency. Based on the \ufb01ndings from event-based VO and SLAM literature, recent works leverage event cameras for novel view synthesis [26,35,50] where the hardware-level bene\ufb01ts enable view synthesis robust to low lighting or motion blur. However, re-localizing an event camera with respect to a pre-built 3D map, namely event-based visual localization, is a fairly understudied problem. Prior works perform direct camera pose estimation using neural networks [28,46]. While these methods can quickly localize with a single neural network inference, the networks should be separately trained for each test scene. Also, it is widely known in image-based localization that structure-based methods outperform direct methods [30, 55, 67]. These methods leverage correspondences in 2D and 3D by comparing feature descriptors [7, 14, 25, 51\u201353]. We employ the structurebased paradigm on event cameras, leading to stable localization while also bene\ufb01ting from the sensor-level strengths. Privacy-Preserving Machine Vision As many machine vision applications take the entire image view as input, privacy breaches could occur [10, 12]. Recent works propose to apply additional transformations on the input image data [16, 36, 70] to hide the user identity, or encrypt the visual data in the sensor level by incorporating specially designed optics [48, 65, 66, 72]. Many prior works \fin privacy-preserving visual localization follow the former approach, where existing methods suggest lifting the 2D, 3D keypoints to lines [21, 61, 62], or training a new set of feature descriptors hiding sensitive details [12,15,45]. Our method takes a hybrid approach, where we propose to use event cameras as privacy-preserving sensors for localization and apply dedicated transformations on the event data to hide sensitive visual details. 3. Event-Based Localization Pipeline Given a short stream of events recorded by an event camera, our method aims to \ufb01nd the 6DoF camera pose within a 3D map as shown in Figure 1. 
Event cameras are visual sensors that track brightness changes as a stream of events, E = {ei = (xi, yi, ti, pi)}, where ei indicates the brightness change of polarity pi \u2208{+1, \u22121} at pixel location (xi, yi) and timestamp ti. Our localization method utilizes images reconstructed from voxelized events. Given an input event stream E, let E denote the event voxel grid [49,57,75] obtained by taking weighted sums of event polarities within spatio-temporal bins. Event-to-image conversion methods [42,49,57,63,68] take the event voxels as input and produce images using neural networks, namely F\u0398(E) = I where \u0398 denotes the neural network parameters. Below we describe the steps taken by our method to perform event-based localization. Structure-Based Localization Figure 1c shows our localization process. Given event streams from a scene Se={E1, . . . , EN} with each stream spanning a short time, we \ufb01rst convert events into images Si={I1, . . . , IN}. Then we run the off-the-shelf structure-from-motion pipeline COLMAP [58] on Si. The result is a map containing 3D points and 6DoF pose-annotated images. As the next step, we use global features vectors from NetVLAD [7] for candidate pose selection. Given a captured query event stream Eq, we \ufb01rst reconstruct the image Iq and extract its NetVLAD feature vector fq \u2208R4096. Similarly, for each pose-annotated reference image Ii in the 3D map, we extract its feature vector fi. We compute the L2 distances between the query and reference image features and select the top-K nearest poses for further re\ufb01nement. Finally, for re\ufb01nement, we \ufb01rst perform local feature matching [14,52] between the query and selected reference images. We count the number of matches found for each query-reference pair and choose the reference view Ir with the largest number of matches. Then we obtain the re\ufb01ned 6DoF pose by retrieving the 3D points visible from Ir and performing PnP-RANSAC [18, 22, 37, 38] between the 2D points in Iq and retrieved 3D points. By leveraging event-to-image conversion, we can effectively deploy powerful image-based localization methods on events. Nevertheless, for high-quality image recovFigure 2. Sensor-level privacy protection. We attenuate temporally inconsistent regions via median \ufb01ltering and curvy regions via maximum re\ufb02ection \ufb01ltering. To reduce artifacts, the averaged voxels Eavg=(Emed+Eref)/2 are blended with the original voxels. ery the conversion solicits repetitive neural network inferences [49, 63], which can be costly for edge devices. This necessitates the transmission of visual information from edge devices to service providers, where we propose various techniques for preserving privacy in Section 4. 4. Privacy-Preserving Localization In this section, we describe the procedures for privacy preservation. As in Figure 1, we consider the case where the localization service user, equipped with an edge device, has limited computing power and shares the visual information with the service provider. We propose two levels of privacy protection to prevent possible breaches during information sharing. Sensor-level privacy protection focuses on hiding facial details and could be easily applied with small additional computations. Network-level privacy protection targets localization in private scenes, where the user would want to completely hide what they are looking at. 4.1. 
Sensor-Level Privacy Protection Sensor-level privacy protection removes temporally inconsistent or curvy regions and blends the result with the \foriginal voxel. This low-level operation preserves static structure while blurring out dynamic or facial information. Median Filtering We \ufb01rst \ufb01lter temporally inconsistent regions via median \ufb01ltering along the temporal axis as shown in Figure 2a. Given a voxel grid E\u2208RB\u00d7H\u00d7W where B denotes the number of temporal bins and H, W denote the height and width of the sensor resolution, we replace each voxel E(l, m, n) with the median value from E(l\u2212kt:l+kt, m, n) where kt is the temporal window size. Since dynamic entities, including human faces, may deform over time, the resulting voxel regions will show irregularities in the temporal domain. Median \ufb01ltering perturbs voxel entries with temporally inconsistent intensity or motion as shown in Figure 2b, where detailed expositions on this notion are given in the supplementary material. As a result, events from faces after \ufb01ltering lead to low quality image reconstructions. Maximum-Re\ufb02ection Filtering We propose maximumre\ufb02ection \ufb01ltering on the spatial domain to attenuate event accumulations from curvy regions. For each voxel E(l, m, n) we \ufb01rst \ufb01nd the location (l, m\u2217, n\u2217) that attains the maximum event count within the spatial neighborhood |E(l, m\u2212ks:m+ks, n\u2212ks:n+ks)|, where ks is the spatial window size. We then replace E(l, m, n) as the voxel value at the re\ufb02ected location with respect to (l, m\u2217, n\u2217), namely E(l, 2m\u2217\u2212m, 2n\u2217\u2212n). The maximum-re\ufb02ection \ufb01ltering preserves event accumulation near lines while replacing other regions with arbitrary values. As shown in Figure 2b, if the original intensities follow a step function event accumulations near lines are symmetrical with respect to the local maximum. Although lines from real-world scenes are not strictly a step function, we \ufb01nd that in practice, the maximum-re\ufb02ection \ufb01ltering can well-preserve events near lines while attenuating other regions including faces. Voxel Blending For voxel grid regions with an insuf\ufb01cient amount of accumulations, the \ufb01ltering process can incur artifacts as the signal-to-noise ratio is low. Therefore, we blend the \ufb01ltered voxels with the original event voxel using binary thresholding as depicted in Figure 2b. To elaborate, the binary mask U \u2208RB\u00d7H\u00d7W is de\ufb01ned as follows, U(l, m, n) = \u001a 1 if P i |E(i, m, n)| > \u00b5 + \u03c3 0 otherwise, (1) where \u00b5, \u03c3 is the mean and standard deviation of the temporally-summed event accumulations, P l |E(l, m, n)|. Then, the blended voxel is given as Eblend = U \u00b7 (Emed + Emax 2 ) + (1 \u2212U) \u00b7 E, (2) where Emed, Emax denote the median and maximumre\ufb02ection \ufb01ltered voxels respectively. Figure 3. Network level privacy protection targeting users in private scenes. To save compute while hiding sensitive visual information, the inference is split between the user and service provider where the users deploy a privately-trained reconstruction network F\u0398\u2032. The network is trained using special loss functions and noiseinfused event voxels to prevent possible attacks from the service provider to reconstruct images using shared information. 4.2. 
Network-Level Privacy Protection As shown in Figure 1, network level privacy protection completely hides the user\u2019s view from the service provider in private spaces while saving user-side compute. Here we suggest splitting the event-to-image conversion process between the service provider and user, where the inference is done with a privately re-trained reconstruction network F\u0398\u2032. We retain our focus on making the event-to-image conversion process privacy-preserving, as once the images are securely reconstructed one can apply existing privacypreserving visual localization methods [15,45,61,62] to \ufb01nd the camera pose. Splitted Inference and Possible Attacks The entire splitted inference process is summarized in Figure 3a. Prior to inference, the users re-train a private version of the conversion network F\u0398\u2032. The network learns to reconstruct im\fages from noise-infused voxel grids \u02dc E=E+Enoise where the noise Enoise is \ufb01xed for each private scene and unknown to the service provider. We describe the detailed training process in the preceding paragraphs. After training, the users divide the network to three parts F 1 \u0398\u2032, F 2 \u0398\u2032, F 3 \u0398\u2032, where F 2 \u0398\u2032 contains the majority of the inference computation and is the only shared part with the service provider. During inference, (i) the user performs inference on F 1 \u0398\u2032 using the noise-infused voxel grid \u02dc E, (ii) the result is sent to the service provider to perform F 2 \u0398\u2032, and (iii) the user retrieves the result to \ufb01nally perform F 3 \u0398\u2032. Since the frontal and rear inference is done on-device, we can make the conversion process privacy preserving if the service provider cannot \u2018eavesdrop\u2019 on the intermediate inference results. As shown in Figure 3b, we have identi\ufb01ed three possible attacks from the service provider: swapped layer inference, generic network re-training, and targeted network retraining. First, in swapped layer inference one takes the intermediate inference results made with \u0398\u2032 and runs the rest of the reconstruction using the original network parameters \u0398. The other two attacks involve re-training a new set of networks using large amounts of event data presumably available to the service provider. Generic network retraining trains a randomly initialized neural network using the same training objectives as in the private training. Targeted network re-training similarly trains a neural network using the same objectives, but initializes the intermediate parts of the network with the shared parameter values from F 2 \u0398\u2032. Using the re-trained networks, the service provider could try swapped layer inference as shown in Figure 3b. Private Network Training for Attack Prevention To prevent possible attacks, we propose a novel training process usable by any person within the private space to obtain a new set of event-to-image conversion network F\u0398\u2032. Here the training starts from random weight initialization and is quickly done using a small amount of event data captured within the private space.The trained network is then shared between the trusted parties in the private space as shown in Figure 3a. During training, we impose the new network to learn the reconstruction from the original network F\u0398 while taking a \ufb01xed noise Enoise as additional input. 
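To make the three-way split concrete, the sketch below outlines how the inference in steps (i)-(iii) could be organized, assuming FΘ′ can be expressed as an ordered list of layers; the transport between user and service provider is abstracted into an ordinary function call, and all names here are illustrative rather than the paper's implementation.

```python
import torch

def split_network(net_layers, n_front=2, n_rear=2):
    """Partition an ordered list of layers into on-device front/rear parts and a server-side middle."""
    front = torch.nn.Sequential(*net_layers[:n_front])
    middle = torch.nn.Sequential(*net_layers[n_front:-n_rear])
    rear = torch.nn.Sequential(*net_layers[-n_rear:])
    return front, middle, rear

def private_inference(front, middle_on_server, rear, event_voxel, noise):
    """Run F1 and F3 on the user device; only intermediate activations ever leave the device."""
    x = front(event_voxel + noise)   # (i) on-device, noise-infused voxel as input
    x = middle_on_server(x)          # (ii) heavy middle computation delegated to the service provider
    return rear(x)                   # (iii) final reconstruction back on the device
```

During private training, the same noise-infused voxel Ẽ = E + Enoise is fed to the network, as described above, with Enoise fixed per scene and never shared with the service provider.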
This obfuscates the network weights learned from training and prevents the service provider from reverse-engineering the trained results, namely generic/targeted network re-training attacks. For training we impose two losses L=Lrecon+Ladv, where Lrecon, Ladv are the reconstruction and adversarial losses respectively. Formally, the reconstruction loss is given as follows, Lrecon = d(F\u0398(E), F\u0398\u2032( \u02dc E)), (3) where d(\u00b7, \u00b7) is the LPIPS distance [74] and \u02dc E=E+Enoise is the noise-infused event voxel. The adversarial loss enforces the new network weights to deviate from the original network F\u0398, which in turn offers prevention against swapped layer inference attacks. We \ufb01rst split the neural network into three parts F 1 \u0398, F 2 \u0398, F 3 \u0398 where each part corresponds to the frontal, middle, and rear parts of the network. As shown in Figure 3a, the loss is de\ufb01ned as the sharpness of the image reconstructions made by swapping parts of the new network layers with the original weights, Ladv = s(F 3 \u0398 \u25e6F 2 \u0398 \u25e6F 1 \u0398\u2032( \u02dc E))+s(F 3 \u0398 \u25e6F 2 \u0398\u2032 \u25e6F 1 \u0398\u2032( \u02dc E)), (4) where s(\u00b7) is the average image sharpness calculated by applying Sobel \ufb01lters [29] on the reconstructions. 5. Experiments We \ufb01rst validate our choice of the event-based localization pipeline in Section 5.1 and further validate the two privacy protection methods in Section 5.2. Dataset and Implementation Details We use three datasets for evaluation: DAVIS240C [43], EvRooms, and EvHumans. DAVIS240C consists of scenes captured using the DAVIS camera [8] which simultaneously outputs events and frames. We use six scenes from the dataset that are suitable for localization. EvRooms is a newly collected dataset to evaluate the robustness of event-based localization algorithms amidst challenging external conditions. The dataset is captured in 20 scenes and divided into recordings containing fast camera motion (EvRoomsF) and low lighting (EvRoomsL). EvHumans is another newly collected dataset for evaluating privacy-preserving localization amidst moving people. The dataset is captured with 22 volunteers moving in 12 scenes. Both datasets are captured using the DAVIS346 [2] camera. Additional details on dataset preparation are deferred to the supplementary material. In all our experiments we use the RTX 2080 GPU and Intel Core i7-7500U CPU. For event-to-image conversion, we adopt E2VID [49], which is a conversion method widely used in event-based vision applications [17, 44]. Unless speci\ufb01ed otherwise, we use K=3 candidate poses for re\ufb01nement in our localization pipeline from Section 3. For results reporting accuracy, a prediction is considered correct if the translation error is below 0.1 m and the rotation error is below 5.0\u25e6. All translation and rotation values are median values, following [33,51,53]. 5.1. Localization Performance Analysis Event-Based Localization Comparison We use the DAVIS240C dataset [43] for evaluation, and consider six baselines: direct methods (PoseNet [30], SP-LSTM [46]), and structure-based methods taking various event representations as input (binary event image [11], event his\fMethod Description t-error (m) R-error (\u25e6) Acc. 
Direct PoseNet [30] 0.15 15.94 0.05 SP-LSTM [46] 0.19 20.30 0.03 Structure-Based Binary Event Image [11] 0.07 3.77 0.54 Event Histogram [41] 0.06 3.02 0.62 Timestamp Image [47] 0.06 3.18 0.58 Sorted Timestamp Image [6] 0.06 3.19 0.59 Ours Event-to-Image Conversion 0.05 2.06 0.72 (a) Event-Based Localization Comparison Dataset Split Method t-error (m) R-error (\u25e6) Acc. Normal Image-Based 0.04 1.77 0.72 Event-Based 0.05 2.00 0.73 Low Lighting Image-Based 0.26 10.90 0.26 Event-Based 0.05 2.53 0.68 Fast Motion Image-Based 0.18 6.25 0.26 Event-Based 0.05 1.82 0.72 (b) Image-Based Localization Comparison Table 1. Localization evaluation against existing methods. togram [41, 69], timestamp image [47], and sorted timestamp image [6]). We provide detailed explanations about the baselines in the supplementary material. Table 1a shows the localization results of our method and the baselines. All structure-based methods outperform direct methods, as the pose re\ufb01nement step using PnPRANSAC [18,38] enables accurate localization. Among the structure-based methods, our method outperforms the baselines by a large margin as the event-to-image conversion mitigates the domain gap and allows our method to fully leverage the robustness of image feature descriptors [7,14]. Image-Based Localization Comparison We implement an exemplary image-based localization method by replacing the input modality in our pipeline from events to images. We create three splits from DAVIS240C and EvRooms based on the target scenario: normal, low lighting, and fast motion. Normal consists of four scenes from DAVIS240C recorded in slow camera motion and average lighting. The other two splits are more challenging, with (i) low lighting containing two scenes from DAVIS240C and EvRoomsL recorded in low lighting, and (ii) fast motion containing EvRoomsF captured with fast motion. Table 1b compares localization results under various settings. The performance of the two methods are on par in the normal split, as image-based localization can con\ufb01dently extract good global/local features in benevolent conditions. However, the performance gap largely increases in the low lighting, and fast motion splits, as the motion blur and low exposure make feature extraction dif\ufb01cult. Due to the high dynamic range and temporal resolution of event cameras, our method can perform robust localization even in these challenging conditions. Figure 4. Qualitative results of privacy protection. 5.2. Privacy Preservation Evaluation 5.2.1 Sensor Level Privacy Protection We use the EvHumans dataset to assess how the sensor level protection can hide facial landmarks. In all experiments, we set the spatial/temporal window size as ks=23, kt=13. Other details on the sensor-level protection evaluation is reported in the supplementary material. Face Blurring Assessment We examine face blurring in terms of low-level image characteristics and high-level semantics. For evaluation, we generate 9,755 image reconstruction pairs from the event streams with/without sensor level protection. Also, we use the publicly available FaceNet [3] and DeepFace [59, 60] libraries for high-level evaluation. Table 2b reports the average sharpness of the faces detected from the reconstructed images. For a fair comparison, we \ufb01rst run face detection on the non-\ufb01ltered image reconstruction and use the detection results to crop both \ufb01ltered/non-\ufb01ltered versions. 
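The precise sharpness measure for the face crops is not specified in this section; one simple choice, consistent with the Sobel-based sharpness s(·) used in Eq. (4), is the mean Sobel gradient magnitude of a grayscale crop, sketched below (the grayscale conversion and normalization are our assumptions).

```python
import torch
import torch.nn.functional as F

def sobel_sharpness(gray_crop):
    """Mean Sobel gradient magnitude of a grayscale image crop in [0, 1], shape (H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    img = gray_crop[None, None]                     # add batch and channel dimensions
    gx = F.conv2d(img, kx[None, None], padding=1)   # horizontal gradients
    gy = F.conv2d(img, ky[None, None], padding=1)   # vertical gradients
    return torch.sqrt(gx ** 2 + gy ** 2).mean().item()
```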
The sharpness largely drops after \ufb01ltering, which indicates that our sensor level protection can effectively blur facial landmarks. Table 2c further supports this claim, where we measure the image similarity between the two image reconstructions separately for facial and background regions. The similarity metrics are much higher for background regions, meaning that our method can keep important localization cues ample in the background while blurring out faces. Some exemplary results are shown in Figure 4a, where the faces are blurred out from \ufb01ltering while the background features remain intact. For high-level analysis, Table 2b reports the face detec\fMethod t-error R-error Acc. # of Faces Sharpness Re-ID Acc. No Protection 0.04 0.99 0.84 1034 0.0956 0.9387 Protection 0.05 1.28 0.73 192 0.0475 0.5377 (a) Localization and Face Protection Evaluation Region PSNR (\u2193) SSIM (\u2193) MAE (\u2191) Face 18.1620 0.3572 0.1974 Background 21.1243 0.6677 0.1391 (b) Reconstruction Quality after Protection Table 2. Sensor Level Privacy Preservation Evaluation. Method t-error R-error Acc. # of Faces Sharpness Re-ID Acc. w/o Blending 0.06 1.61 0.64 106 0.0286 0.4670 w/o Max Re\ufb02ection 0.05 1.17 0.77 354 0.0483 0.5708 w/o Median Filtering 0.05 1.23 0.75 231 0.0461 0.5613 Ours 0.05 1.28 0.73 192 0.0475 0.5377 Table 3. Ablation study on sensor level protection. Figure 5. User study results. The insecurity scores range between 1 and 5. We make an initial measurement on how users feel about being captured using normal cameras in various scenarios. Then, we query about event cameras by sequentially showing raw events, event-to-image reconstructions, and privacy protection results. tion and grouped face re-identi\ufb01cation results. We apply a face detection algorithm [73] on the image reconstructions, where the number of detected faces largely decreases after \ufb01ltering. Note however that even without \ufb01ltering, the number of faces detected is fairly small (\u223c1, 000) given that on average two faces are present for the \u223c10, 000 test images. As event cameras only capture a fraction of visual data, it can naturally offer a minimal level of face blurring, which could also be veri\ufb01ed from the reconstructions in Figure 4. We further analyze how the \ufb01ltering obfuscates facial features with grouped face re-identi\ufb01cation. In this task, we \ufb01rst divide the faces of volunteers in EvHumans to disjoint groups and apply face re-identi\ufb01cation [13] on the detected faces to check whether it belongs to a certain group or not. Additional details regarding the task are explained in the supplementary material. Similar to face detection, reidenti\ufb01cation accuracy largely drops after \ufb01ltering, indicating the ef\ufb01cacy of our method to obfuscate facial semantics. User Study Along with the low-level and high-level analysis, we conduct a user study to examine how the actual users would feel about our face blurring. We request 39 volunteers to answer a survey that assesses how insecure people feel about various capturing scenarios, where the insecurity is scored from 1 to 5. As shown in Figure 5a, the survey makes an initial assessment of being captured with normal cameras for situations such as tourist spots and CCTV. Then the survey evaluates event cameras in three steps: the users \ufb01rst observe raw event measurements, then the image reconstructions from events, and \ufb01nally the image reconstructions after sensor level protection. 
We share the details about the survey along with the detailed answers of the subjects in the supplementary material. Figure 5 displays the survey results. In the initial assessment from Figure 5a, people have varying levels of insecurity depending on the capturing scenario and the results give a rough translation between the insecurity scores in the follow-up questions using event cameras. In Figure 5b, the subjects \ufb01rst give a low insecurity score when they see the raw events but increase their score once they observe that image reconstruction is possible. The scores drop after people observe the face blurring, to a level roughly equivalent to \u2019being captured on CCTV / friend\u2019s camera\u2019. The results show that our method can indeed alleviate the concerns presented by users when using AR/VR services. Localization Evaluation and Ablation Study We evaluate localization while using sensor level protection, where we pass the \ufb01ltered voxels to our main localization pipeline. As shown in Table 2a, only a small drop in accuracy occurs. While attenuating facial features, sensor level protection can preserve important features for localization. We \ufb01nally perform an ablation study on the key components of sensor level protection. As shown in Table 3, using the two \ufb01lters along with voxel blending makes an optimal trade-off between privacy protection and localization performance. If we ablate the median or max re\ufb02ection \ufb01lters, the number of detected faces increases which indicates that the faces are less protected. However, if we ablate voxel blending, the localization accuracy drastically decreases. Each component in the sensor level protection is necessary for effective privacy-preserving localization. 5.2.2 Network Level Privacy Preservation We use the six scenes from the DAVIS240C dataset to evaluate how network level protection can hide scene details in private spaces. For each scene, we re-train an event-to-image conversion network following the procedure from Section 4.2. The re-training is quickly done using Adam [34] with learning rate 1e\u22124 and batch size 2 for 10 epochs. Then for each trained model, we perform generic and targeted network re-training, where the models are trained using events generated from MS-COCO [40] \ffollowing [49,57]. During inference, from the E2VID [49] architecture we use the \ufb01rst two layers as the frontal part (F 1 \u0398\u2032), the last two layers as the rear part (F 3 \u0398\u2032), and the rest as the middle part (F 2 \u0398\u2032). Additional details about the evaluation is deferred to the supplementary material. Attack Protection Assessment and User Study We \ufb01rst assess how our method can prevent possible attacks (swapped layer inference, generic/targeted network retraining) from the service provider to recover images using the shared visual information. For each attack type, we simulate the procedure from Section 4.2 by performing image reconstruction where the frontal part of the inference uses the client\u2019s network F\u0398\u2032 and the latter part uses the service provider\u2019s network. The splitting is done at various locations in the network, and we show in Table 4 the averaged similarity metrics between the attacks and the reconstructions from the original network F\u0398. The full results are reported in the supplementary material. As shown in Table 4, the reconstruction quality after network level protection is constantly low for all three attack scenarios. 
The adversarial loss (Equation 4) plays a key role in blocking swapped layer inference (\u2018ours\u2019 vs \u2018ours w/o adversarial loss\u2019). Simply re-training with random initialization offers a defense against generic network re-training, as all the methods constantly show large image deviations. For targeted network re-training, a large similarity gap occurs from applying noise watermarking (\u2018ours\u2019 vs \u2018ours w/o noise watermarking\u2019) which indicates the crucial role of this procedure for preventing the attack. We show exemplary visualizations of the three attacks in Figure 4, where all attacks fail after network level protection. Finally, we conduct a user study to see how people feel about the network level protection. Here we survey 23 volunteers using a questionnaire similar to the sensor level evaluation, but with an additional question in the end showing image reconstructions from network level protection. Figure 5 displays the results, where the insecurity score is lower than the sensor level privacy protection. While the network level protection requires additional re-training, it can offer an added layer of privacy preservation compared to the sensor level method as the entire view is obscured. Compute Ef\ufb01ciency and Localization Assessment In addition to privacy protection, network level protection reduces the computational burden of running the entire eventto-image conversion on-device. To assess the computational ef\ufb01ciency, we compare the inference runtime on CPU, GPU, and our method that performs splitting between the two. Here the CPU and GPU are used to model the edge device and service provider respectively. As shown in Table 4, the runtime of our method is signi\ufb01cantly lower than only using CPU, and comparable to the case only using GPU. While the results may differ from the actual runMethod t-error (m) R-error (\u25e6) Acc. No Protection 0.04 2.29 0.69 Sensor Level Protection 0.05 2.50 0.66 Network Level Protection 0.05 2.58 0.64 Joint Protection 0.06 2.88 0.62 (a) Localization Evaluation Including Joint Protection Attack Type Swapped Layer Inference Generic Re-Training Targeted Re-Training Random Initialization 0.4613 0.4579 0.2165 Ours w/o Noise Watermark 0.9760 0.4249 0.3237 Ours w/o Adversarial Loss 0.4746 0.4456 0.3517 Ours 0.9587 0.4492 0.4415 (b) Reconstruction Quality (MAE) of Possible Attacks Method Pure CPU Pure GPU Splitted (Ours) Runtime (s) 0.8239 0.0119 0.0760 (c) Runtime Comparison Table 4. Network level privacy preservation evaluation. time characteristics of edge devices and service providers, our method can ef\ufb01ciently distribute the computation and reduce the burden on the edge device side. We further evaluate the localization performance while using network level protection. Here we use the retrained network for image reconstruction in our localization pipeline. Note it is also possible to apply any other privacypreserving image-based localization methods [12,15,45,61, 62] on the securely reconstructed images. Similar to sensor level protection, accuracy only drops mildly as shown in Table 3. Our network level protection offers secure image reconstruction while enabling stable localization. Joint Sensor and Network Level Protection While network level protection hides the visual content from the service provider, residents in private spaces may still feel uncomfortable about being captured. To this end, in Table 4a we evaluate the localization performance using both protection methods. 
Only a small performance decrease occurs even when applying both levels of protection. Thus our method can handle a wide variety of privacy concerns, while not signi\ufb01cantly sacri\ufb01cing the utility of localization. 6." + }, + { + "url": "http://arxiv.org/abs/2204.02735v1", + "title": "Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck", + "abstract": "Adversarial examples, generated by carefully crafted perturbation, have\nattracted considerable attention in research fields. Recent works have argued\nthat the existence of the robust and non-robust features is a primary cause of\nthe adversarial examples, and investigated their internal interactions in the\nfeature space. In this paper, we propose a way of explicitly distilling feature\nrepresentation into the robust and non-robust features, using Information\nBottleneck. Specifically, we inject noise variation to each feature unit and\nevaluate the information flow in the feature representation to dichotomize\nfeature units either robust or non-robust, based on the noise variation\nmagnitude. Through comprehensive experiments, we demonstrate that the distilled\nfeatures are highly correlated with adversarial prediction, and they have\nhuman-perceptible semantic information by themselves. Furthermore, we present\nan attack mechanism intensifying the gradient of non-robust features that is\ndirectly related to the model prediction, and validate its effectiveness of\nbreaking model robustness.", + "authors": "Junho Kim, Byung-Kwan Lee, Yong Man Ro", + "published": "2022-04-06", + "updated": "2022-04-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Deep neural networks (DNNs) have achieved remarkable performances in a wide variety of machine learning tasks. Despite the breakthrough outcomes, DNNs are easily fooled from adversarial attacks, with crafted perturbations [1, 2, 3, 4, 5, 6, 7]. These perturbations are imperceptible to human eyes, but simply adding them to clean images (i.e., adversarial examples) can effectively deceive classi\ufb01ers. Such a vulnerability affects security problems [8, 9, 10, 11], bringing in the weak reliability of DNNs. Previous works have broadly investigated the reason for the widespread of such adversarial examples. Goodfellow et al. [2] have argued that adversarial vulnerability is induced from the excessive linearity nature of DNNs in high-dimensional spaces. Several works [1, 12] have regarded the primary cause of the adversarial examples as statistical variation with aberrations in data manifold. Schmidt et al. [13] have suggested that the pervasiveness of the examples should not be considered as a drawback of training methods for DNNs, since the available dataset may not be large enough to train them robustly. In recent years, Tsipras et al. [14] have suggested an intriguing analysis that the disagreement between standard and adversarial accuracy stems from differently trained feature representation. In this literature, Ilyas et al. [15] further have demonstrated the adversarial examples are inevitable results of standard supervised training and arisen from well-generalized features in the dataset. They have analyzed the adversarial examples are originated from brittle and unintelligible features (i.e., non-robust features) that are arbitrarily manipulated with the imperceptible noise, and shown that the \u2217Equal contribution. \u2020 Corresponding author. 
35th Conference on Neural Information Processing Systems (NeurIPS 2021). arXiv:2204.02735v1 [cs.LG] 6 Apr 2022 \frobust features still can provide precise accuracy even in the existence of adversarial perturbation. They have argued that the non-robust features cannot show reliable accuracy in the adversarial setting and could provoke incomprehensible properties. Nonetheless, the underlying reason for the existence and pervasiveness of adversarial examples cannot derive common consensus in the research \ufb01eld and still remains unclear [16]. To clarify where the adversarial brittleness truly comes from, we need to \ufb01gure out how the robust and non-robust features in data manifold subtly manipulate feature representation and fool model prediction, by directly handling them in the feature space. To address it, we propose a way to precisely distill intermediate features into robust and non-robust features by employing Information Bottleneck (IB) [17, 18, 19]. In the sense that semantic information is included in the units of intermediate feature representation [20, 21, 22, 23], we utilize the bottleneck to regulate the information \ufb02ow in the feature space by explicitly adding noise to the feature units. Then, we estimate how each feature unit contaminated with the noise affects model prediction with assigned information. Based on the prediction sensitivity of the noise intervention, we assort the feature units either robust or brittle, and disentangle the feature representation into robust or non-robust features, respectively. Through extensive analysis of the distilled features, we corroborate that the pervasiveness of the adversarial brittleness is derived from the non-robust features, and they have a high correlation with the adversarial prediction. In addition, in order to understand the semantic information of distilled features, we directly visualize them in the feature space and provide their visual interpretation. Consequently, we reveal that both of the robust and non-robust features indeed have semantic information in terms of human-perception by themselves. Based on our observation, we theoretically describe the negative impact of the non-robust features for the model prediction and introduce an approach of amplifying the gradients of non-robust features to break the model prediction. In this paper, our contributions can be summarized into three-fold as follows: \u2022 We propose a novel way to explicitly distill intermediate features into the robust and nonrobust features using Information Bottleneck, and interpret the disentangled features in terms of human-perception by directly visualizing them in the feature space. \u2022 By analyzing how the distilled features affect the intermediate feature representation under adversarial perturbation, we demonstrate that the non-robust features are highly correlated with the adversarial prediction. \u2022 We present an attack mechanism manipulating the non-robust features by strengthening their gradients, and validate its effectiveness of breaking model prediction. 2 Distilling Robust and Non-robust Features in Intermediate Feature Space Problem Setup and Notations. Let X denote clean images and Y denote (one-hot encoded) target labels corresponding to the clean images. Then, adversarial examples Xadv can be created by the following equations: max \u03b4 E(X,Y )[L(f(X + \u03b4), Y )], where \u03b4 denotes an adversarial perturbation, and L denotes a pre-de\ufb01ned loss for machine learning tasks. 
The adversarial examples can be made by Xadv = X + \u03b4. When a given model f is adversarially trained against PGD attack [7], it can be written as follows: min w max \u2225\u03b4\u2225\u221e\u2264\u03b3 E(X,Y ) [L (f(X + \u03b4), Y )] , (1) where w represents the parameters of f, which are learned to be robust against adversarial attacks. Here, \u2225\u00b7\u2225\u221e\u2264\u03b3 describes L\u221enorm, and \u03b3-ball means the perturbation magnitude. In this paper, we adversarially train the model f on \u03b3 = 0.03 for the standard adversarial attack. Note that once adversarially trained, the parameters of the model f are no longer covered.1 As notation of variables that we will use in this paper, Z and \u00af Z indicate the intermediate features of the model f such that Z = fl(X) and \u00af Z = fl(Xadv), where fl(\u00b7) describes l-th layer outputs of the given model. Similarly, fl+(\u00b7) represents subsequent network after the l-th layer, thus intermediate 1Previous works [14, 15] have demonstrated that the distinction between the robust and non-robust features arises in adversarial settings. In the sense that adversarially trained networks learn robust representation [24], we set the robust classi\ufb01er as default. Please see the analysis of the standard training in Appendix F. 2 \f\ud835\udc4d\ud835\udc4d\ud835\udc3c\ud835\udc3c= \ud835\udc53\ud835\udc53 \ud835\udc59\ud835\udc59(\ud835\udc4b\ud835\udc4b) + \ud835\udf0e\ud835\udf0e\u22c5\ud835\udf16\ud835\udf16 \ud835\udf16\ud835\udf16~\ud835\udca9\ud835\udca90, \ud835\udc3c\ud835\udc3c Informative Feature \ud835\udc36\ud835\udc36 \ud835\udc36\ud835\udc36 \ud835\udf0e\ud835\udf0e \ud835\udc4d\ud835\udc4d\ud835\udc3c\ud835\udc3c \ud835\udc4d\ud835\udc4d \ud835\udc3b\ud835\udc3b \ud835\udc4a\ud835\udc4a Optimizing Parameters \ud835\udc53\ud835\udc53 \ud835\udc59\ud835\udc59 \ud835\udc53\ud835\udc53 \ud835\udc59\ud835\udc59+ \ud835\udc4b\ud835\udc4b \u2112\ud835\udc36\ud835\udc36\ud835\udc36\ud835\udc36 \u2112\ud835\udc3c\ud835\udc3c \ud835\udf0e\ud835\udf0e \ud835\udf0e\ud835\udf0e\ud835\udc4d\ud835\udc4d \ud835\udefd\ud835\udefd \u2112\ud835\udc4b\ud835\udc4b(\ud835\udf0e\ud835\udf0e) Noise intervention \ud835\udc36\ud835\udc36 Given Parameters Figure 1: Diagrams of our Information Bottleneck (IB), minimizing LCE + \u03b2LI to \ufb01nd noise variation \u03c3 that can estimate information \ufb02ow in the intermediate features Z. Here, \u03c3z represents nature feature variation of intermediate features Z for each unit, and \u03f5 indicates Gaussian noise sampled from N(0, I). More implementation details are described in Appendix A. features can be propagated to the last output layer, such that \u02c6 Y = fl+(Z) and \u02c6 Yadv = fl+( \u00af Z). The model f can be expressed as f = fl+ \u25e6fl, satisfying \u02c6 Y = f(X) for the given clean images X. Also, \u02c6 Yadv = f(Xadv) is denoted by model propagation of the adversarial examples Xadv. Note that we designate l-th layer as the last convolutional layer, and regard l+ as the rest of the layers in the model. 2.1 Information Bottleneck for Distilling Informative Features In adversarial settings, we focus on separating robust and non-robust features in the intermediate layer. Recall that robust features are literally robust on the noise (variation) and invariant to the existence of the adversarial perturbation, but non-robust features are not. Our approach aims to distill feature units that affect target prediction under the noise perturbation in the intermediate feature space. 
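To make the fl / fl+ notation concrete, the following sketch splits a torchvision VGG-16 so that fl returns the activations of the last convolutional block and fl+ applies the remaining layers up to the logits; the exact split point and the randomly initialized weights are illustrative assumptions, since the paper uses its own adversarially trained models.

```python
import torch
import torchvision

vgg = torchvision.models.vgg16()  # randomly initialized placeholder; the paper uses adversarially trained weights

def f_l(x):
    """Intermediate features Z: activations of the last convolutional block."""
    return vgg.features(x)

def f_l_plus(z):
    """Remaining layers mapping intermediate features to class logits."""
    z = vgg.avgpool(z)
    z = torch.flatten(z, 1)
    return vgg.classifier(z)

# f(X) = f_l_plus(f_l(X)) reproduces the usual forward pass, so Z = f_l(X) and Z_bar = f_l(X_adv).
```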
We follow that the semantic information is inherently included in the feature units of DNNs [20, 22, 25]. From this perspective, we utilize Information Bottleneck (IB) to distill the robust and non-robust features on the given intermediate features Z. Information Bottleneck [17, 18] proposed to encode maximally informative representation for target labels, restraining input information, concurrently. Using the bottleneck, we suggest a way to assess feature importance and quantify information \ufb02ow for the target prediction. The objective function of IB can be written as follows: max Z I(Z, Y ) \u2212\u03b2I(Z, X), (2) where I denotes mutual information, and \u03b2 represents the degree of restraining input information. The \ufb01rst term I(Z, Y ) allows the intermediate features Z to be predictive on the target label Y , and the second term I(Z, X) encourages Z to compress the information of the given images X in the bottleneck. Here, the second term requires a true feature probability p(Z) = R X p(Z | X)p(X)dX to expand it, but it is computationally intractable due to a high dimensional dependency of the dataset probability p(X). Thus, several works [18, 19] modi\ufb01ed the IB\u2019s objective function to make it possible to learn DNNs without the true feature probability as follows: min LCE + \u03b2LI (see Appendix B). In this formulation, LCE indicates cross-entropy loss, and LI represents information loss computed by KL divergence [26] between a feature likelihood p(Z | X) and an approximate feature probability q(Z). It is radically a closed-form approximation for the true feature probability p(Z). Firstly, we deliberately inject noise variation into Z to estimate the prediction sensitivity of each feature unit along the noise intervention. To do so, we newly design an approximate feature probability using noise variation \u03c3, such that q\u03c3(Z) = N(fl(X), \u03c32). Then, we sample random variables from q\u03c3(Z) and de\ufb01ne informative features ZI as follows: ZI = fl(X) + \u03c3 \u00b7 \u03f5, (3) where ZI \u223cq\u03c3(Z). Note that the operator \u00b7 denotes Hadamard product, and \u03f5 stands for Gaussian noise sampled from N(0, I). Here, the noise variation measures a correlation between intermediate features and model prediction based on the fact that robustness means high correlation on model prediction and non-robustness are opposite [14, 15] in adversarial settings. Since the correlation 3 \fE(X,Y )[Y \u00b7f(X)] in output layer can be expressed as a variance measure Cov(Y, f(X)), we consider the correlation in intermediate layer as the noise variation. From this perspective, if a feature unit is highly predictive despite large noise variation (high correlation), the unit can robustly predict target labels, while a non-robust unit cannot. Once the informative features ZI are acquired from the noise variation \u03c3, we propagate ZI to the last output layer and estimate the feature importance of each unit for model prediction. Then, we deal with the information loss LI in order to alleviate feature heterogeneity between Z and ZI. Through aforementioned modi\ufb01cation [18, 19], our objective function can be written as follows: min \u03c3 LX(\u03c3) = \u2212Y log fl+(fl(X) + \u03c3 \u00b7 \u03f5) | {z } LCE +\u03b2 DKL[p(Z | X) || q\u03c3(Z)] | {z } LI , (4) where the feature likelihood p(Z | X) is set to N(fl(X), \u03c32 z). Here, \u03c3z indicates inherent feature variation of the intermediate features Z for each unit. 
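As a reference for Eqs. (3)-(4), the sketch below injects per-unit Gaussian noise into the fixed intermediate features and combines the cross-entropy term with the KL divergence between p(Z | X) = N(fl(X), σz²) and qσ(Z) = N(fl(X), σ²); fl, fl_plus, the log-σ parameterization, and the reduction over units are our assumptions, so this is a sketch of the objective rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def ib_loss(f_l, f_l_plus, x, y, log_sigma, sigma_z, beta):
    """L_X(sigma) = CE(f_l+(f_l(x) + sigma * eps), y) + beta * KL(p(Z|X) || q_sigma(Z))."""
    z = f_l(x).detach()                        # intermediate features Z; network weights stay fixed
    sigma = log_sigma.exp()                    # optimize log-sigma so the noise variation stays positive
    eps = torch.randn_like(z)
    z_info = z + sigma * eps                   # informative features Z_I, Eq. (3)
    ce = F.cross_entropy(f_l_plus(z_info), y)  # prediction term L_CE
    kl = kl_divergence(Normal(z, sigma_z), Normal(z, sigma)).mean()  # information term L_I (reduction is a choice here)
    return ce + beta * kl
```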
The second term LI makes the informative features ZI resemble the intermediate features Z, while minimizing the cross-entropy LCE. This second term can be written as LI = 1 2 PC k=1[ \u03c32 zk \u03c32 k + log \u03c32 k \u03c32 zk \u22121], where k denotes an index of the noise variation \u03c3 = [\u03c31, \u03c32, \u00b7 \u00b7 \u00b7 , \u03c3C] (the optimizing parameters) and the feature variation \u03c3z = [\u03c3z1, \u03c3z2, \u00b7 \u00b7 \u00b7 , \u03c3zC] (the given parameters). Here, only of the variation \u03c3 is updated such that \u03c3 \u2190\u03c3 \u2212 \u2202 \u2202\u03c3LX(\u03c3). In brief, we summarize overall procedure of our bottleneck concept in Fig. 1. Moreover, we mention that \u03b2 in Eq. (4) controls the amount of information that \ufb02ows into the feature representation. Speci\ufb01cally, when \u03b2 is set to zero, IB loss is equivalent to cross-entropy loss, which means that ZI can accommodate even unimportant features to predict target labels. In contrast, excessively large \u03b2 only focuses on compressing input information, thus IB may cannot \ufb01lter out important features to predict target labels. Accordingly, we empirically control \u03b2 to distill informative features ZI based on the noise variation \u03c3 (Please see section 3.4 for analysis of information \ufb02ow). 2.2 Separating Informative Features by Tolerance of Feature Variation After optimizing the informative features ZI, we compare the noise variation \u03c3 for the informative feature units and dichotomize each unit either robust or non-robust based on their prediction sensitivity. We set the criterion for comparison as T = max(\u03c32 z). Here, T represents the maximum tolerance of the noise variation. It is a reasonable choice to set T as a criterion, because it indicates the maximal variation with respect to the changes of the given image X, in the feature space. In the following procedure, we explicitly disentangle intermediate features Z into the robust Zr and non-robust features Znr. Firstly, once the noise variation \u03c3 is larger than the maximum tolerance T in speci\ufb01c units, it indicates that their corresponding features are highly predictive on the model prediction, despite the noise intervention. Thus, we de\ufb01ne their conjunction as robust features. On the other hand, if the variation of a speci\ufb01c unit is smaller than T, their corresponding features can be represented as non-robust features. This is because the small variation behaves as a strict restriction to retain model prediction of target labels. We assume that once a strong adversarial perturbation comes in, the feature variation of non-robust features becomes to be larger than acceptable tolerance, thereby easily breaking the model classi\ufb01er and leading to misclassi\ufb01ed prediction. In this respect, we sort the noise variation according to their magnitude, and cluster them by assigning robust or non-robust channel indexes to each feature unit. The robust channel index, ir = [ir1, ir2, \u00b7 \u00b7 \u00b7 , irC] can be computed as follows: irk = 1(\u03c32 k > T) = \u001a 1 \u03c32 > T 0 \u03c32 \u2264T , (5) where 1(\u00b7) represents the indicator function. The non-robust channel index inr is simply reversed from the robust channel index such that inrk = 1 \u2212irk. Then, we estimate robust features Zr by multiplying the robust channel index to the intermediate features element-wisely such that Zr = ir \u00b7Z. Similarly, non-robust features Znr are presented as Znr = inr \u00b7 Z. 
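A minimal sketch of Eq. (5) and the subsequent feature split, assuming the optimized noise variation σ and the inherent feature variation σz are per-channel vectors of length C and that Z has shape (batch, C, H, W):

```python
def split_channels(z, sigma, sigma_z):
    """Split intermediate features Z into robust and non-robust parts via Eq. (5)."""
    T = (sigma_z ** 2).max()                 # maximum tolerance of the noise variation
    i_r = (sigma ** 2 > T).float()           # robust channel index, 1(sigma^2 > T)
    i_nr = 1.0 - i_r                         # non-robust channel index
    z_r = i_r.view(1, -1, 1, 1) * z          # robust features Z_r = i_r * Z
    z_nr = i_nr.view(1, -1, 1, 1) * z        # non-robust features Z_nr = i_nr * Z
    return z_r, z_nr, i_r, i_nr
```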
In this way, the intermediate features Z are fully disentangled into the two types of feature representation satisfying Z = Zr +Znr. To sum it up, we regard the robust features Zr that have the larger noise variation as invariant features from the adversarial perturbation. On the other hand, non-robust features Znr are considered as easily manipulated features, which harmonize the smaller noise variation. Now, we analyze their impacts to 4 \fTable 1: Classi\ufb01cation accuracy of model performance attacked by FGSM [2], PGD [7], and CW [4] on VGG-16 [29] and WRN-28-10 [30], adversarially trained with \u03b3 = 0.03 for CIFAR-10, SVHN, and Tiny-ImageNet. We selectively propagate each feature (i.e., intermediate features (Int.), robust (R.), and non-robust features (NR.)) to measure classi\ufb01cation accuracy. Model Example CIFAR-10 SVHN Tiny-ImageNet Int. Acc R. Acc NR. Acc Int. Acc R. Acc NR. Acc Int. Acc R. Acc NR. Acc VGG Clean 79.73 99.87 34.82 90.35 99.76 57.09 33.98 83.11 7.50 FGSM [2] 51.28 99.58 22.82 63.71 98.72 40.12 17.45 77.74 4.94 PGD [7] 44.71 99.38 20.64 48.92 97.91 32.18 16.13 77.59 4.69 CW [4] 40.32 99.85 13.66 33.26 99.60 16.75 12.00 75.69 4.03 WRN Clean 82.56 98.66 44.67 93.53 99.44 70.43 43.13 96.35 6.07 FGSM [2] 56.43 96.80 31.65 73.93 97.90 53.96 20.38 91.58 3.01 PGD [7] 51.63 96.61 29.44 61.09 96.33 45.79 18.84 90.37 2.83 CW [4] 45.47 97.74 17.28 40.61 97.58 22.78 13.51 95.78 2.06 \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b (a) Clean Examples (b) PGD Examples Figure 2: The result of t-SNE plot [31] in CIFAR-10 dataset for VGG-16 network. Each cluster indicates high-dimensional distributions of feature representation for 10 object labels in CIFAR-10 dataset. Additional t-SNE results for other adversarial attacks are illustrated in Appendix D. the robustness by expanding the model prediction of ZI to Taylor approximation (see Appendix C.) with its convergence of local minima [27, 28] as follows: fl+(fl(X) + \u03c3 \u00b7 \u03f5) = fl+(fl(X) + \u03c3r \u00b7 \u03f5) + \u0014 \u2202 \u2202\u03c3r fl+(fl(X) + \u03c3r \u00b7 \u03f5) \u0015T \u03c3nr | {z } \u2206 , (6) where robust noise variation \u03c3r = ir \u00b7 \u03c3 and non-robust noise variation \u03c3nr = inr \u00b7 \u03c3. Since the variation of the robust features does not degrade the model prediction, when \u03c3nr is small enough, the erroneous term \u2206in Eq. (6) closes to zero. That is, the model retains having signi\ufb01cant robustness against the adversarial perturbation interrupting robust channel index in Z. Conversely, once we force \u03c3nr to increase, its output becomes inaccurate for the robust prediction (i.e., \ufb01rst term in Eq. (6)) due to high \u2206. Here, we theoretically demonstrate how the brittleness of non-robust features affects the model robustness. We will thoroughly analyze the properties of the two distilled features by empirically showing the robustness of Zr and brittleness of Znr in the following sections. 3 Analysis of Distilled Features and Visual Interpretation 3.1 Property of Distilled Feature Units under Adversarial Perturbation After we distill the robust and non-robust features using the bottleneck concept, our next question is How will the target prediction change under the adversarial attacks? 
In our posit, if the bottleneck successfully disentangles the robust and non-robust channel index (i.e., ir and inr) from the given examples, we should identify the consequential classi\ufb01cation accuracy changes under the adversarial perturbation. That is, after applying the robust index to the attacked feature representation, the selected adversarial features with ir denoted by \u00af Zr (i.e., \u00af Zr = ir \u00b7 \u00af Z) should have invariant accuracy changes for the target labels. Here, ir is robust channel index obtained from the given clean examples X, and \u00af Z represents the intermediate features of the adversarial examples, which means \u00af Z = fl(Xadv). Contrarily, the selected features satisfying \u00af Znr = inr \u00b7 \u00af Z will show inaccurate accuracy due to their brittleness under the existence of adversarial perturbation. 5 \fSVHN CIFAR-10 Clean Img Adv Img Int. Feature R. Feature NR. Feature Int. Feature R. Feature NR. Feature \ud835\udc4c\ud835\udc4c: Bird \u0de0 \ud835\udc4c\ud835\udc4c \ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e: Deer \ud835\udc4d\ud835\udc4d: Bird \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f: Bird \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b: Deer \u0305 \ud835\udc4d\ud835\udc4d: Deer \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f \ud835\udc4e\ud835\udc4e: Bird \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b \ud835\udc4e\ud835\udc4e: Deer \ud835\udc4c\ud835\udc4c: 4 \u0de0 \ud835\udc4c\ud835\udc4c \ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e: 1 \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b: 1 \u0305 \ud835\udc4d\ud835\udc4d: 1 \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b \ud835\udc4e\ud835\udc4e: 1 \ud835\udc4c\ud835\udc4c: Ladybugs \u0de0 \ud835\udc4c\ud835\udc4c \ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e: Sulfur Butterfly \ud835\udc4d\ud835\udc4d: Ladybugs \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f: Ladybugs \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b: Sulfur Butterfly \u0305 \ud835\udc4d\ud835\udc4d: Sulfur Butterfly \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f \ud835\udc4e\ud835\udc4e: Ladybugs \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5b\ud835\udc5b \ud835\udc4e\ud835\udc4e: Sulfur Butterfly Tiny-ImageNet \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f: 4 \ud835\udc4d\ud835\udc4d: 4 \u0305 \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f \ud835\udc4e\ud835\udc4e: 4 (a) Clean Examples (b) PGD Examples Figure 3: Feature visualization [25] for the intermediate feature (Int.), robust feature (R.), and non-robust feature (NR.). The class labels under each image indicate the predicted results of the corresponding features propagated by fl+(\u00b7). Note that the visualization of the non-robust features displays semantic similarity of the misclassi\ufb01ed classes of the adversarial examples. Please see more visualization results in Appendix E. In Table 1, we analyze evaluation results of the disentangled features under standard attack algorithms [2, 4, 7] in publicly available datasets [32, 33, 34]. As aforementioned, we apply the robust and non-robust channel index optimized from the clean examples to the adversarial features \u00af Z, and estimate their accuracy (i.e., fl+( \u00af Zr) and fl+( \u00af Znr)). 
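This evaluation protocol can be summarized in the sketch below, where a robust channel index obtained from clean examples is applied to the features of the corresponding adversarial examples; f_l, f_l_plus, and the index i_r follow the earlier sketches and are placeholders.

```python
import torch

@torch.no_grad()
def eval_indexed_features(f_l, f_l_plus, x_adv, y, i_r):
    """Accuracy of robust / non-robust adversarial features against the true labels."""
    z_bar = f_l(x_adv)                     # attacked intermediate features Z_bar
    mask = i_r.view(1, -1, 1, 1)
    acc_r = (f_l_plus(mask * z_bar).argmax(1) == y).float().mean()         # f_l+(Z_bar_r)
    acc_nr = (f_l_plus((1 - mask) * z_bar).argmax(1) == y).float().mean()  # f_l+(Z_bar_nr)
    return acc_r.item(), acc_nr.item()
```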
As in the table, \u00af Zr still shows constant robust accuracy regardless of the adversarial perturbation, even in the high-con\ufb01dence adversarial attack [4]. On the other hand, we can \ufb01nd that the classi\ufb01cation accuracy of \u00af Znr steeply degrades as the attacks get stronger, which coincides with the properties of the robust and non-robust features. To further support our experiments, we illustrate the correlation between the disentangled features and true labels, using 2D t-SNE plot [31]. In the case of clean examples as in Fig. 2(a), the robust features Zr exhibit separable clusters on the target labels, while the non-robust features Znr show a partially disorganized tendency. When adversarial perturbation [7] exists, the attacked features \u00af Znr represents more collapsed t-SNE visualization as shown in Fig. 2(b). Notably, we can observe that \u00af Zr still sustain highly clustered results even in the attacked condition. 3.2 Feature Visualization of Robust and Non-robust Features We have identi\ufb01ed the existence of the robust and non-robust features using the bottleneck. Then, we wonder about a way of interpreting the semantic information in the feature space with respect to human-perception. Analyzing semantic representation of DNNs is a wide research area to understand their decision [20, 22, 35]. In adversarial settings, several studies [14, 36] argued that a robust classi\ufb01er has more meaningful (i.e., perceptually-aligned) loss gradients in the input space. Engstrom et al. [24] further endeavored to interpret robust feature representation using feature visualization [25, 37, 38]. In this manner, we explore whether the disentangled feature representation from the bottleneck indeed has human-perceptible information in the intermediate feature space. Feature visualization is an optimization-based method that maximizes speci\ufb01c activation of feature units [38, 39], such that X\u2032 = argmaxX(al i(X)), where al i(\u00b7) indicates feature activation of i-th unit in the l-th layer. To understand what conjunction of the robust and non-robust feature units truly interacts with the target labels, we optimize each distilled feature and create their visual explanations. We adopt direct visualization method [25] that has various regularization techniques (e.g., frequency penalization and transformations) to yield better representative visual quality. We employ ia r and ia nr for the adversarial examples Xadv, which is obtained by optimizing LXadv(\u03c3) instead of Eq. (4). Then, we de\ufb01ne \u00af Za r and \u00af Za nr as robust and non-robust features of the adversarial examples, satisfying \u00af Za r = ia r \u00b7 \u00af Z and \u00af Za nr = ia nr \u00b7 \u00af Z. After distilling robust and non-robust features with their corresponding index, we maximize the selected feature unit activation, respectively. The feature visualization results of distilled features are illustrated in Fig. 3. 6 \fTable 2: The prediction accuracy of the non-robust features \u02c6 Za nr for attacked labels \u02c6 Yadv. The input \u02c6 Za nr is the non-robust features of the corresponding attack methods. To clearly show the correlation between adversarial examples and the non-robust features, we evaluate the accuracy under the condition of successfully attacked examples (i.e., Y \u0338= \u02c6 Yadv). 
Attack CIFAR-10 SVHN Tiny-ImageNet VGG WRN VGG WRN VGG WRN FGSM [2] 92.78 94.35 94.90 96.13 63.39 60.82 PGD [7] 93.43 95.06 96.21 96.44 65.84 63.64 CW [4] 93.72 94.42 95.75 97.65 55.68 56.84 \ud835\udc4d\ud835\udc4d\ud835\udc5b\ud835\udc5b\ud835\udc5f\ud835\udc5f \ud835\udc4e\ud835\udc4e: Bell Pepper Y : Pomegranate \u0de0 \ud835\udc4c\ud835\udc4c \ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e\ud835\udc4e: Bell Pepper \ud835\udc4d\ud835\udc4d\ud835\udc5f\ud835\udc5f \ud835\udc4e\ud835\udc4e: Pomegranate Information Bottleneck R. NR. Adv. Img High Correlation Distilling Int. Features Figure 4: An example of highly correlated adversarial prediction with NR. Both \u00af Yadv and Za nr output same prediction \"Bell Pepper\". As in the \ufb01gure, we can clearly recognize the semantic information of true labels Y and attacked predictions \u02c6 Yadv in the intermediate features (Z and \u00af Z). Interestingly, what we can observe is: (i) the distilled features have semantic information by themselves and maintain their information, even under the adversarial perturbation, (ii) when the adversarial perturbation exists, the brittleness of non-robust features is intensi\ufb01ed and re\ufb02ected onto \u00af Z. Thus, the visualization of \u00af Z and \u00af Za nr looks similar, and they manipulate the target prediction to same adversarial prediction. Unlike the previous work [15] that has argued the non-robust features solely have incomprehensible property, the visualization of the distilled features from our bottleneck represent recognizable outputs even for the adversarial examples, and provides a decisive key to interpret the cause of adversarial examples. 3.3 Adversarial Prediction is Highly Correlated with Non-robust Features We have observed that the non-robust features optimized from the bottleneck are brittle and easily manipulated under the adversarial perturbation, while robust features maintain substantial prediction results for the target labels. Then, if the primary cause of the adversarial examples indeed belongs to non-robust features, it is natural to examine the correlation between the classi\ufb01cation outputs of the non-robust features and the adversarial prediction induced by adversarial attacks. Accordingly, we identically apply our IB loss on the adversarial examples Xadv [2, 7, 4], and \ufb01nd their corresponding robust and non-robust channel index using Eq. (5). To enlighten the correlation of non-robust features and the adversarial prediction, we evaluate the model prediction of \u00af Za nr for the attacked labels \u02c6 Yadv that can be written as follows: \u02c6 Y a nr = fl+( \u00af Za nr). We set the condition of \u02c6 Yadv as successfully attacked labels (i.e., Y \u0338= f(Xadv)) to de\ufb01nitely show the relationship between the prediction \u02c6 Y a nr of non-robust features and the adversarial prediction \u02c6 Yadv of adversarial examples. In Table 2, we summarize the accuracy of \u02c6 Y a nr for the successfully attacked label \u02c6 Yadv in standard attack methods. Generally, we can observe that the non-robust features are highly predictive on \u02c6 Yadv in standard low dimensional datasets such as CIFAR-10 and SVHN. Even in a large dataset (i.e., Tiny-ImageNet), the non-robust features are remarkably correlated with the attacked prediction. A brief explanation of the highly correlated example is described in Fig. 4. 
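The agreement measured in Table 2 can be sketched as follows: among samples that are successfully attacked, compute how often the prediction of the non-robust features matches the attacked label. The non-robust index `i_nr_adv` (estimated on adversarial examples) and the helpers `f_l`, `f_l_plus`, `attack_fn` are assumed, as in the earlier sketches.

```python
import torch

@torch.no_grad()
def nonrobust_attack_agreement(f_l, f_l_plus, loader, attack_fn, i_nr_adv, device="cuda"):
    """Fraction of successfully attacked samples whose non-robust features
    predict the same (wrong) label as the adversarial example (cf. Table 2)."""
    i_nr_adv = i_nr_adv.to(device)
    agree, n_success = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.enable_grad():
            x_adv = attack_fn(x, y)
        z_adv = f_l(x_adv)
        y_adv = f_l_plus(z_adv).argmax(dim=1)            # attacked prediction
        success = y_adv != y                             # keep only successful attacks
        if success.sum() == 0:
            continue
        y_nr = f_l_plus(i_nr_adv * z_adv).argmax(dim=1)  # prediction of non-robust features
        agree += (y_nr[success] == y_adv[success]).sum().item()
        n_success += success.sum().item()
    return 100.0 * agree / max(n_success, 1)
```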
3.4 Bottleneck Controls Information Flow of Robust and Non-robust Features In this analysis, we will investigate how the bottleneck affects the information \ufb02ow of the robust and non-robust features and clarify their relation. Recall that the bottleneck re\ufb01nes informative features from the given image samples, and \u03b2 regulates the total amount of the information that \ufb02ows into Z. We will compare classi\ufb01cation accuracy for the robust and non-robust features along \u03b2 value and analyze the changes of information \ufb02ow assigned to each disentangled feature. In Fig. 5, as \u03b2 value increases, the accuracy of the robust features is getting higher and decreases after a speci\ufb01c threshold. As theoretically mentioned in 2.1, we can infer that a suitable choice of the \u03b2 can \ufb01lter out robust feature units in the intermediate layer. For the excessive \u03b2 value, we can observe that the bottleneck cannot accurately disentangle adversarial features. For example, the accuracy of the robust and non-robust features are reversed after \u03b2 = 5.0 in the particular networks of CIFAR-10 and SVHN datasets. In addition, as in Fig. 5(a) and (b), the accuracy of the non-robust features progressively increases. Such results indicate a few robust feature units that are not distilled 7 \fTable 3: Comparing attack performance for FGSM [2], BIM [40], PGD [7], CW [4], AutoAttack (AA) [41], FAB [42], and non-robust feature attack denoted by NRF. We adversarially train VGG-16 and WRN-28-10 on L\u221enorm \u03b3 = 0.03 perturbation for CIFAR-10, SVHN, and Tiny-ImageNet with PGD adversarial training [7] (ADV) and advanced defense methods: TRADES [43] and MART [44]. Dataset Method VGG-16 WRN-28-10 Clean FGSM BIM PGD CW AA FAB NRF Clean FGSM BIM PGD CW AA FAB NRF CIFAR-10 ADV 79.7 51.3 46.5 44.7 40.3 42.0 40.9 27.4 82.6 56.4 52.8 51.6 45.5 49.8 49.0 17.1 TRADES 78.2 54.5 51.7 50.9 43.0 49.5 46.3 31.2 83.0 57.9 55.0 53.9 46.7 52.4 49.8 26.8 MART 73.5 54.2 52.2 51.7 42.2 50.6 45.1 31.4 83.4 59.0 56.0 54.7 46.5 52.8 50.2 19.6 SVHN ADV 90.4 63.7 52.1 48.8 33.3 39.9 41.6 12.6 93.5 73.9 64.8 61.1 40.7 55.5 56.6 13.4 TRADES 90.4 65.3 59.0 57.0 44.8 53.5 50.0 14.3 93.9 72.9 63.9 60.4 42.0 55.0 55.4 10.1 MART 90.5 65.1 59.7 57.8 46.4 53.0 47.0 16.1 94.1 73.0 64.5 61.1 42.3 55.4 56.0 8.1 Tiny-ImageNet ADV 34.0 17.5 16.5 16.1 12.0 15.4 12.2 6.7 43.1 20.4 19.3 18.8 13.5 18.1 14.2 5.3 TRADES 38.7 20.1 19.1 18.7 13.9 17.8 13.3 7.8 47.2 26.7 25.6 25.2 17.4 24.4 17.7 9.6 MART 38.4 20.6 19.5 19.1 14.1 18.3 14.7 9.2 48.5 27.4 26.1 25.7 17.5 25.0 17.8 9.9 CIFAR-10 Dataset SVHN Dataset (a) VGG-16 (b) WRN-28-10 (c) # of Assigned Channels Figure 5: The accuracy of robust and non-robust features along \u03b2 value, and the number of assigned channels in the l-th feature representation. Note that each color of the lines in (c) corresponds to the same color in the bar plots of (a) and (b). The total number of channels is equivalent to the size of C. from IB are gradually \ufb02owing into the non-robust side and dichotomized as non-robust units, thus producing more higher accuracy. It coincides with the analysis of the number of the assigned robust and non-robust channels in Fig. 5(c). We can observe that the number of robust channels constantly diminishes, whereas that of the non-robust channels increases. It indicates the bottleneck delicately weighs the information \ufb02ow that should be precisely assigned to the robust or non-robust units. 
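The channel-counting analysis of Fig. 5(c) can be sketched as below, reusing the illustrative `fit_noise_variation` / `split_channels` helpers sketched after the earlier disentanglement paragraph. The fixed threshold and the grid of beta values are assumptions for illustration.

```python
import torch

def beta_sweep(f_l, f_l_plus, x, y, betas=(0.1, 0.5, 1.0, 2.0, 5.0, 10.0), threshold=0.5):
    """Count how many channels are assigned to the robust / non-robust side
    as the bottleneck weight beta grows (cf. Fig. 5(c))."""
    counts = {}
    for beta in betas:
        sigma = fit_noise_variation(f_l, f_l_plus, x, y, beta=beta)
        i_r, i_nr = split_channels(sigma, threshold=threshold)  # fixed threshold (assumed)
        counts[beta] = {"robust": int(i_r.sum().item()),
                        "nonrobust": int(i_nr.sum().item())}
    return counts
```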
From our analysis section, in conclusion, we have demonstrated the existence of the distilled features using the bottleneck: the robust and non-robust features in the intermediate feature representation. In addition, we have revealed that easily manipulated property of non-robust features is the primary cause of adversarial examples through their high correlation with the adversarial prediction. Based on the fact that the non-robust features break the model prediction, we will suggest an effective way of enhancing the adversarial attack, utilizing the gradient of non-robust features in section 4. 4 Amplifying Brittleness of Non-Robust Features for Attack In this section, we intentionally increase the non-robust noise variation \u03c3nr to make model classi\ufb01er fooled based on Eq. (6). Note that increasing \u03c3nr induces high erroneous term \u2206, thereby going to deviate from the robust prediction. However, this variation \u03c3nr is merely an optimizing parameter that cannot be controlled manually. Thus, in order to secondarily have the effect of enlarging \u03c3nr, we alternatively utilize the gradients of the non-robust features directly connected with the model 8 \fTable 4: Comparison of classi\ufb01cation accuracy for adversarial examples generated by maximizing (\u2191) or minimizing (\u2193) the magnitude of robust (Gr) or non-robust feature gradients (Gnr). Dataset Method VGG-16 WRN-28-10 Clean \u2225Gnr\u22252 \u2191\u2225Gnr\u22252 \u2193\u2225Gr\u22252 \u2191\u2225Gr\u22252 \u2193Clean \u2225Gnr\u22252 \u2191\u2225Gnr\u22252 \u2193\u2225Gr\u22252 \u2191\u2225Gr\u22252 \u2193 CIFAR-10 ADV 79.7 27.4 67.5 35.9 74.6 82.6 17.1 74.8 28.9 79.5 TRADES 78.2 31.2 71.6 38.2 77.1 83.0 26.8 73.9 30.3 79.9 MART 73.5 31.4 63.8 39.6 69.1 83.4 19.6 74.3 24.5 79.4 SVHN ADV 90.4 12.6 71.9 20.8 71.5 93.5 13.4 88.0 15.6 93.4 TRADES 90.4 14.3 68.4 27.3 82.3 93.9 10.1 88.1 13.2 93.3 MART 90.4 16.1 66.2 31.6 84.9 94.1 8.1 87.7 9.0 91.5 Tiny-ImageNet ADV 34.0 6.7 26.1 9.7 29.0 43.1 5.3 38.9 15.5 39.9 TRADES 38.7 7.8 30.7 11.8 33.8 47.2 9.6 40.9 16.9 44.4 MART 38.4 9.2 29.0 13.4 32.5 48.5 9.9 41.3 17.2 45.7 prediction. The gradients of non-robust features in adversarial examples can be described as follows: Gnr = \u2202 \u2202\u00af Znr Lbase(f(X + \u03b4), Y ), (7) where we de\ufb01ne a baseline loss as Lbase(f(X), Y ) = \u2225\u03b4\u22252 + c \u00b7 max(max i\u0338=Y (f(X)i) \u2212f(X)i, 0), instead of cross-entropy loss due to its empirical effectiveness of attack performance [4]. In addition, we use a technique of changes of variables [4] from \u03b4 to w for generating an imperceptible yet powerful perturbation, such that \u03b4 = 1 2(tanh(w)+1)\u2212X. It serves to smooth out projected gradient descent that clips prematurely to prevent adversarial examples falling into the extreme image domain. To compute the gradient practically, we \ufb01rstly calculate \u2202 \u2202\u00af Z Lbase, and multiply it to \u2202 \u2202\u00af Znr \u00af Z by chain rule. Here, the latter gradient equals to non-robust channel index inr, because the intermediate features of adversarial examples can be re-written as: \u00af Z = ir \u00b7 \u00af Zr + inr \u00b7 \u00af Znr, and the derivative of \u00af Z over \u00af Znr equals to inr. Thus, the gradients of the non-robust features Gnr can be simpli\ufb01ed as inr \u00b7 \u2202 \u2202\u00af Z Lbase. 
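As an illustration of the gradient G_nr described above, the sketch below computes a CW-style baseline loss and backpropagates it to the intermediate features, masking the resulting gradient with the non-robust channel index. The margin's sign convention follows the standard CW formulation, and `f_l` / `f_l_plus` and a leaf perturbation `delta` with `requires_grad=True` are assumptions of this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def cw_baseline_loss(logits, y, delta, c=1.0):
    """CW-style baseline loss: L2 perturbation norm plus a confidence margin."""
    onehot = F.one_hot(y, logits.shape[1]).bool()
    logit_y = logits[onehot]                                        # logit of the true class
    other_best = logits.masked_fill(onehot, float("-inf")).max(dim=1).values
    margin = torch.clamp(logit_y - other_best, min=0.0)             # CW margin (sign assumed)
    return delta.flatten(1).norm(dim=1) + c * margin

def nonrobust_gradient(f_l, f_l_plus, x, delta, y, i_nr, c=1.0):
    """G_nr = i_nr * dL_base/dZ_bar: baseline-loss gradient w.r.t. the
    intermediate features, masked by the non-robust channel index."""
    z = f_l(x + delta)       # delta is assumed to be a leaf tensor with requires_grad=True
    z.retain_grad()
    loss = cw_baseline_loss(f_l_plus(z), y, delta, c=c).sum()
    loss.backward()
    return i_nr * z.grad
```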
Using the gradient, we suggest an attack to non-robust features (NRF) by optimizing the following objective: min \u03b4 Lbase(f(X + \u03b4), Y ) \u2212 \r \r \r \rinr \u00b7 \u2202 \u2202\u00af Z Lbase \r \r \r \r 2 . (8) In Table 3, NRF shows more effective attack performance than the other standard adversarial attacks in [7, 43, 44], since NRF strengthens the gradients of non-robust features that contains the same effect of increasing \u03c3nr to disturb accurate model prediction. Moreover, we conduct an ablation study on the gradients of robust and non-robust features to probe their in\ufb02uence of prediction changes in Table 4. As expected, maximizing Gnr shows more effective attack performance than minimizing it. It is because maximizing Gnr has an alternative effect of increasing the non-robust noise variation \u03c3nr (i.e., large erroneous term \u2206\u2191) in Eq. (6). Whereas, manipulating the gradients of robust features Gr shows less attack performance than controlling Gnr, since it cannot directly handle brittle features in the intermediate feature representation. Especially, it seems dif\ufb01cult to break model prediction, while weakening the gradient of robust features, whose noise variation \u03c3r have invariance of target prediction even under the adversarial perturbation. 5 Related Work Various works have been tried to \ufb01gure out the reason for adversarial vulnerability in the intermediate feature representation. Inkawhich et al. [45] analyzed how the intermediate features are changed by adversarial attacks and measured layerand class-wise feature distributions. Engstrom et al. [24] pointed out that there is a shortcoming of DNNs and their embedding, that is, the primary features used in DNNs are contrasting with what human uses. Also, they argued that the robust optimization to learn robust features could address this shortcoming by encoding high-level representations of input data. Jacobsen et al. [46] argued that the reason for adversarial vulnerability lies in invariant characteristics of DNNs to task-relevant features. This invariance makes most regions of input space brittle to adversarial attacks so that the classi\ufb01ers become relying on a few highly predictive features. In addition, recent works [14, 15] have suggested that the existence of the vulnerability is on the non-robust features, which are inherently included in the data and have unrecognizable properties. 9 \fOur work is in line with the concept of the non-robust feature. However, unlike the aforementioned works that directly generated the robust and non-robust datasets to analyze their properties, we reveal that the robust and non-robust features can be completely disentangled in the feature space using Information Bottleneck, and they have semantic information by themselves in fact. 6 Conclusion and Discussion" + }, + { + "url": "http://arxiv.org/abs/2203.12247v2", + "title": "Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition", + "abstract": "We introduce Ev-TTA, a simple, effective test-time adaptation algorithm for\nevent-based object recognition. While event cameras are proposed to provide\nmeasurements of scenes with fast motions or drastic illumination changes, many\nexisting event-based recognition algorithms suffer from performance\ndeterioration under extreme conditions due to significant domain shifts. 
Ev-TTA\nmitigates the severe domain gaps by fine-tuning the pre-trained classifiers\nduring the test phase using loss functions inspired by the spatio-temporal\ncharacteristics of events. Since the event data is a temporal stream of\nmeasurements, our loss function enforces similar predictions for adjacent\nevents to quickly adapt to the changed environment online. Also, we utilize the\nspatial correlations between two polarities of events to handle noise under\nextreme illumination, where different polarities of events exhibit distinctive\nnoise distributions. Ev-TTA demonstrates a large amount of performance gain on\na wide range of event-based object recognition tasks without extensive\nadditional training. Our formulation can be successfully applied regardless of\ninput representations and further extended into regression tasks. We expect\nEv-TTA to provide the key technique to deploy event-based vision algorithms in\nchallenging real-world applications where significant domain shift is\ninevitable.", + "authors": "Junho Kim, Inwoo Hwang, Young Min Kim", + "published": "2022-03-23", + "updated": "2022-03-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Event cameras are neuromorphic sensors that produce a sequence of brightness changes with high dynamic range and microsecond-scale temporal resolution. The sensor targets conditions where the quality of measurements degrades for standard frame-based cameras. Conventional cameras under extreme measurement conditions produce the prominent artifacts of motion blur or pixel saturation, and the performance deteriorates for a subsequent perceptual module. Being able to acquire visual information in challenging environments, event cameras have the potential to overcome *Young Min Kim is the corresponding author. Figure 1. Visualization of events from N-ImageNet [18] recorded in various environmental conditions. Positive, negative events are shown in blue and red, respectively. Events in low lighting (b) exhibit noise bursts, where a large number of noisy events are triggered from one polarity. Events in extreme motion (c) have denser events triggered along edges compared to normal conditions (a). Both changes lead to a significant domain gap, deteriorating the recognition performance. the limitations of frame-based cameras. Despite the myriad of benefits that event cameras can offer, there is a clear gap between data acquisition and recognition. While event cameras can acquire meaningful information even in challenging environments, events obtained from these conditions are typically noisy and lack visual features. Figure 1 shows that there exists a stark visual contrast between events recorded at normal lighting and regular camera motion with those from very low lighting or extreme camera motion. Event-based object recognition algorithms are directly affected by these changes in input and the performance becomes very unstable. Figure 3b also shows the perturbation in the feature embedding space due to the domain shift. Since it is difficult to manually collect labeled data in a wide variety of external conditions, an adaptation strategy is necessary to fully leverage the potential of event cameras. We propose Ev-TTA, a test-time adaptation algorithm targeted for event-based object recognition. Given a pre\ftrained event classifier, Ev-TTA adapts the classifier at test phase to new, unseen environments with large domain shifts. 
Our method does not require labeled data from the target domain and can operate in an online manner. Nevertheless, Ev-TTA shows a large amount of performance gain, with more than 10% accuracy increase across all tested representations in datasets such as N-ImageNet [18]. While we mainly investigate domain shifts caused by external variations in camera trajectories and scene brightness, Ev-TTA is also capable of dealing with other domain shifts such as Sim2Real gap. Ev-TTA is composed of two key components that utilize the distinctive characteristics of event data in the spacetime domain. First of all, our test-time adaptation strategy enforces the consistency of the predictions for temporally adjacent streams. Our novel loss function jointly minimizes the discrepancy between pairs of adjacent event fragments while selectively minimizing the entropy of the predictions. Secondly, we propose to remove events that lack spatially neighboring events in the opposite polarity. This is based on the observation that under extreme lighting, severe noise in the event streams is exclusively generated on one polarity, as shown in Figure 1. Since Ev-TTA only intervenes with the input event and output probability distribution, it is versatile to various event representations, datasets, or tasks. In Section 4.1, EvTTA shows universal improvements across all event representations tested for a wide range of external conditions. As there is no consensus in the optimal event representation yet, the flexibility to handle various event representations makes Ev-TTA further suitable for event data. Our formulation is general and is also applicable to other vision-based tasks with minor modifications. We demonstrate that EvTTA could be used for tasks other than classification such as steering angle regression, suggesting the large applicability of Ev-TTA. To summarize, our main contributions are (i) a novel test-time adaptation objective based on temporal consistency, (ii) a noise removal mechanism for low-light conditions utilizing spatial consistency, (iii) comprehensive evaluation of Ev-TTA in event-based object recognition using a wide range of event representations, and (iv) extension of Ev-TTA to event-based regression tasks. Our experiments demonstrate that Ev-TTA can successfully adapt various event-based vision algorithms to a wide range of external conditions. 2. Related Work Robustness in Event-Based Object Recognition While event cameras can operate in harsh environments such as low-lighting and abrupt camera motion, the collected data suffer from a clear domain gap which leads to performance degradation. Previous works have investigated the effects of motion [37, 48] or night-time capture [29] qualitatively or with simulated data. Recently Deng et al. [9] performed one of the first quantitative analyses of robustness amidst variation for a small set of motions. Kim et al. [18] proposed N-ImageNet along with its variants recorded under diverse camera trajectories and illumination, which enable a systematic assessment of classification robustness. The clear performance degradation is observed for all event representations under various recording conditions. Several event representations are hand-crafted to be robust against camera motion. Early approaches such as event histogram [21] and binary event image [7] ignore the temporal aspects and only leverage the spatial distribution of events. 
This is in contrast to other works that utilize raw timestamp values [20, 27, 37, 49], which may be vulnerable to abrupt changes in camera speed. To utilize the temporal information while factoring out the speed variations, several representations such as DiST [18] and sorted time surface [2] use relative timestamps obtained from sorting instead of absolute timestamps. Learning-based event representations incorporate a learned module for packaging events [6, 14], which in theory can be trained as robust representations if provided with datasets reflecting the diverse external conditions. However, they show competent performance only in small datasets [26, 37] and hand-crafted methods such as DiST [18] have demonstrated performance on par with these methods in large-scale, fine-grained datasets [18]. This is due to the large memory requirement that inhibits large batch training, which is crucial for large-scale datasets such as N-ImageNet [18]. As classification algorithms based on hand-crafted representations are more often used in event-based vision [21, 32, 47, 49] and are sufficiently performant in large-scale datasets, we retain our focus on these class of methods. We extensively evaluate Ev-TTA in numerous hand-crafted event representations [2, 7, 18, 20, 21, 27], and demonstrate universal performance enhancement compared to other baselines in diverse test-time conditions. Test-Time Adaptation Unsupervised domain adaptation [1, 11, 30, 34, 43] aims at transferring models from a labeled source domain to an unlabeled target domain. The objective of test-time adaptation [3,4,15,24,41,46] is similar to unsupervised domain adaptation, while the difference lies in where adaptation takes place: unsupervised domain adaptation usually undergoes an additional training phase with data from the target domain, whereas test-time adaptation mainly intervenes with the test phase. Given the diverse changes in the input event distribution, we propose a test-time adaptation strategy reflecting the current measurement condition more adequately for practical deployment of event-based vision algorithms than collecting training \fdatasets to capture the entire space of possible variations. Ev-TTA takes inspiration from both unsupervised domain adaptation and test-time adaptation. SENTRY [30] is one of the state-of-the-art algorithms for unsupervised domain adaptation that conditionally optimizes entropy by observing the consistency between augmented input samples. While the training objective is effective for adaptation, SENTRY requires altering the training process and network architecture to properly function. Tent [46] is a lightweight approach for test-time adaptation in visual recognition, achieving large performance gain without changing the training nor network architecture. Tent minimizes prediction entropy during the test phase and restrains optimization to only the batch normalization layers for efficient training. Ev-TTA leverages the strengths from both SENTRY [30] and Tent [46], while further incorporating spatio-temporal characteristics of event data for optimal performance gain. 3. Method Ev-TTA adapts a pre-trained event classifier trained on the source domain to a target domain with a significant shift in the measurement setting. The source domain is defined as the original external condition used for training and the target domain is the new condition for testing. For example, the classifiers could be trained with data captured in normal lighting and then tested on data under low lighting. 
The raw event camera output is composed of a sequence of events, E = {ei = (xi, yi, ti, pi)}, where ei indicates brightness change with polarity pi \u2208{\u22121, 1} at pixel location (xi, yi) at time ti. While there are several approaches that asynchronously process events [23,35,36], we retain our focus on more prevalent approaches that employ image-like event representations. The classification algorithms [21,32,47,49] are composed of a two-step procedure, where events are first aggregated to form an image-like representation, and further processed with conventional image classifier architectures [16] to output class probabilities. Once the input representation is chosen with the classifier F\u03b8(\u00b7) pre-trained in the source domain, the network parameter \u03b8 for the target domain is optimized against the training objective that imposes temporal consistency between adjacent sequences of events. The training objective is elaborated in Section 3.1. Ev-TTA can perform test-time adaptation either in an offline or online manner. In the offline setup, Ev-TTA is first optimized for the entire target domain, and subsequently performs another set of inferences for evaluation using the same samples with the updated model parameters. In the online setup, Ev-TTA is simultaneously evaluated and optimized, thus omitting the second inference phase. EvTTA shows strong performance in both evaluation scenarios, where the detailed results are reported in Section 4. Note that no data from the source domain is used in training, Figure 2. Overview of the training objective. (a) Ev-TTA extracts K random slices of equal length from the input event stream, and fine-tunes a pre-trained classifier to enforce temporal consistency with the anchor event E1 and other event slices Ek. (b) The prediction similarity loss LPS minimizes the discrepancy with respect to the anchor event (c) while the selective entropy loss LSE minimizes the entropy of the anchor prediction when the votes are consistent. which would lead to large amounts of additional computation as source domain data is typically much larger than the target domain. Further, Ev-TTA does not modify the neural network architecture or the training process and thus can be applied in diverse practical settings. The event sequence is also conditionally refined using the spatial consistency between different event polarities, and compiled into an image-like representation to serve as the input to the neural network. The spatial consistency provides an important cue for denoising the data under extreme lighting conditions, which is further described in Section 3.2. 3.1. Training Objective for Temporal Consistency Ev-TTA minimizes a loss function that imposes consistency in the time domain. Given an event stream E, let E1, . . . , EK \u2282E be the K random slices of equal length obtained from E. Note that event-based object recognition often employs input events that span no more than 100ms [18,20,26], and thus we can assume the K random event slices to be temporally adjacent. The training objective enforces the consistency between the network outputs of the event slices F\u03b8(Ei), i = 1, . . . , K, as shown in Figure 2. The loss function is defined as L = LPS+LSE, where LPS is the prediction similarity loss and LSE is the selective \fentropy loss. Prediction Similarity Loss Prediction similarity loss enforces the predicted label distributions for the temporally neighboring events E1, . . . , EK to be similar, which is depicted in Figure 2b. 
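Before defining the two loss terms, a brief sketch of the event aggregation and slicing that produce the inputs F_θ(E_k) is given below: a simple two-channel per-pixel count image (one common image-like representation) and K random, equal-length temporal slices drawn from one stream. The (x, y, t, p) array layout, microsecond timestamps, and the slice length are assumptions of this sketch.

```python
import numpy as np

def event_histogram(events, height, width):
    """Aggregate events (N, 4) with columns (x, y, t, p) into a 2-channel
    per-pixel count image, one channel per polarity."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    x, y, p = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 3]
    np.add.at(hist[0], (y[p > 0], x[p > 0]), 1.0)    # positive polarity
    np.add.at(hist[1], (y[p <= 0], x[p <= 0]), 1.0)  # negative polarity
    return hist

def random_slices(events, k=4, slice_len_us=10_000, rng=None):
    """Draw K random, equal-length temporal slices from one event stream.

    In practice the slices are temporally adjacent, since the full stream
    typically spans only a few tens of milliseconds."""
    rng = np.random.default_rng() if rng is None else rng
    t = events[:, 2]
    t0, t1 = t.min(), t.max() - slice_len_us
    starts = rng.uniform(t0, max(t1, t0), size=k)
    return [events[(t >= s) & (t < s + slice_len_us)] for s in starts]
```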
Using the symmetric KL divergence S_KL(P, Q) = D_KL(P∥Q) + D_KL(Q∥P), the prediction similarity loss is defined as L_PS = (1/2) Σ_{k=2}^{K} S_KL(F_θ(E_1), F_θ(E_k)). (1) Note that the loss minimizes the discrepancy between the prediction for the first event slice and the rest instead of incorporating all possible pairs within the K event slices. Since the extensive pair-wise comparison would lead to a quadratic increase in computation, we instead use the first event slice as an anchor that pulls the predictions of the other event slices. We empirically show that using only a single event slice as an anchor is sufficient for successful adaptation, especially when it is paired with the selective entropy loss L_SE. We also find that the choice of the anchor does not have a significant effect on performance; an in-depth analysis is deferred to the supplementary material. Selective Entropy Loss While the prediction similarity loss provides a meaningful learning signal for test-time adaptation, the loss heavily depends on the quality of the anchor prediction. To this end, Ev-TTA additionally imposes the selective entropy loss L_SE. Inspired by SENTRY [30], we propose to selectively minimize the prediction entropy of the first event slice E_1 ⊂ E only if the prediction is consistent with the other event slices. The consistency is determined by examining whether the predicted class labels are in agreement with the temporally neighboring events, as described in Figure 2c. To elaborate, each event slice E_i casts a vote on the class label with the highest probability, namely v_i = argmax F_θ(E_i). An anchor is considered consistent if its label vote v_1 is equal to the majority vote v_majority from the other event slices V_other = {v_2, . . . , v_K}. Using the entropy H(p) = −Σ_i p_i log p_i defined for a discrete probability distribution p ∈ R^C, where C is the number of classes, the selective entropy loss is defined as L_SE = H(F_θ(E_1)) if consistent, and L_SE = 0 if inconsistent. (2) Figure 3. t-SNE [44] visualizations for a 3-way event classification task from N-ImageNet [18], trained with data captured in a normal condition and adapted to a variant recorded under extreme camera motion. We delineate the predictions made with each adaptation method in colored circles, where each color corresponds to a label. Even if the classifier is successful in the trained source domain (a), the performance does not transfer to the target domain without adequate adaptation (b). Training all layers fails to adapt to target data (c) as the crucial priors for event data are lost. On the other hand, Ev-TTA (d) successfully adapts to target data and alleviates the performance degradation. Our loss formulation differs from the selective entropy loss of SENTRY [30] in two aspects. First, the criterion for consistency is determined using temporally neighboring events, unlike the image augmentations used in SENTRY. Further, while SENTRY [30] proposes to maximize the predicted entropy for samples that are inconsistent, we find that
Optimization Strategy Given the total training loss function L, we constrain the optimization to only operate on the batch normalization layers of the pre-trained classifier as suggested by [46]. When the target domain data is scarce, altering the entire set of parameters may divert the model from essential priors obtained from the pre-training. The argument is also supported in our experiment conducted with variants of N-ImageNet [18] shown in Figure 3. Even using the identical objective, training the entire network results in the predicted labels to collapse (Figure 3c), whereas different labels are better separated when only the batch normalization layers are optimized (Figure 3d). Ev-TTA effectively leverages the loss function that reflects the distinctive characteristics of event data and performs fast and successful adaptation, which is further discussed in Section 4. Extension to Regression We demonstrate that Ev-TTA could be utilized for regression, which together with classification constitute a large portion of computer vision tasks. As a typical example, we show an extension to steering angle regression for autonomous driving. The task is to predict the steering angle \u03d5 from a stream of events E. \fSince our loss formulation is composed of KLdivergence and entropy of the predictions, it can be easily extended to other tasks that output a probability distribution. For steering angle regression, we design the regressor to predict both the mean and variance of the steering angle, namely F\u03b8(E) = (\u00b5, \u03c3). Assuming that the output variables follow a Gaussian distribution, the regressor is trained to maximize the log likelihood as in Nix et al. [25], \\mathcal { L }_\\ t e xtg{ l ike lih ood}=-\\log \\sigma \\frac {(\\phi _\\text {gt} \\mu )^2}{2\\sigma ^2}, (3) where \u03d5gt is the ground-truth steering angle from the source domain. Under such conditions, we make three modifications to the loss functions used in Ev-TTA for classification. We first replace the symmetric KL divergence from Equation 1 with the KL divergence of Gaussian distributions, namely S_\\text { KL}(F_\\ t he t a ( \\ m ath c a l {E}_1 ) , F_ \\th eta (\\mathcal {E}_{k})) = \\frac {\\sigma _1^4 + \\sigma _{k}^4 + (\\sigma _1^2+\\sigma _{k}^2)(\\mu _1 \\mu _{k})^2}{2\\sigma _1^2\\sigma _{k}^2}. (4) We also modify the entropy from Equation 2 with the entropy of Gaussian distributions, namely H(F_\\th e ta (\\ m athcal {E}_{1})) = \\log \\sigma _1 \\sqrt {2\\pi {e}}. (5) Finally, the consistency criterion is adapted for continuous network outputs. An anchor event is considered consistent if its predicted variance is within a range of variances predicted from its neighbors. To elaborate, we verify if the ratio of variances \u03c32 1/\u03c32 k for k = 2, . . . , K is bounded within 10\u22121 and 10. We impose constraints using the variance since the predicted mean may deviate largely depending on the driving scenario, whereas the predicted variance should be consistent over a longer time horizon. With the aforementioned modifications, Ev-TTA can lead to performance enhancements in steering angle prediction, which is further discussed in Section 4.1. The result demonstrates that we can impose our adaptation strategy to other vision tasks by examining the entropy and divergence of the output distributions. 3.2. Conditional Denoising with Spatial Consistency The low light condition significantly deteriorates eventbased vision algorithms, as noted by Kim et al. 
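Before detailing the regression variant, a minimal sketch of the classification-side objective (Eqs. 1–2) and of the batch-normalization-only optimization described above is given below. The learning rate, the numerical epsilon, and the use of Adam are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def symmetric_kl(p_logits, q_logits):
    """Symmetric KL divergence between two categorical predictions."""
    p, q = F.log_softmax(p_logits, dim=1), F.log_softmax(q_logits, dim=1)
    return (F.kl_div(q, p, reduction="batchmean", log_target=True)
            + F.kl_div(p, q, reduction="batchmean", log_target=True))

def ev_tta_loss(model, slices):
    """L = L_PS + L_SE for one batch; slices[0] is the anchor event slice."""
    logits = [model(s) for s in slices]
    anchor = logits[0]
    # Eq. (1): pull the other slices' predictions toward the anchor
    l_ps = 0.5 * sum(symmetric_kl(anchor, lk) for lk in logits[1:])
    # Eq. (2): minimize the anchor entropy only when the majority vote agrees
    votes = torch.stack([lk.argmax(dim=1) for lk in logits[1:]], dim=0)
    majority = votes.mode(dim=0).values
    consistent = (anchor.argmax(dim=1) == majority).float()
    probs = F.softmax(anchor, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    l_se = (consistent * entropy).mean()
    return l_ps + l_se

def bn_only_parameters(model):
    """Collect only the affine parameters of batch-norm layers for adaptation."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params

# optimizer = torch.optim.Adam(bn_only_parameters(model), lr=1e-4)  # assumed setting
```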
[18], and to the best of our knowledge, it has not been properly handled in previous approaches. The main cause is the \u201cdark currents\u201d [8], which constantly flow through the phototransistors. Under low light, the currents for valid event signals become smaller, and the dark currents trigger large amounts of noise. The severe noise in the extreme lighting condition is beyond the range of adversaries that previous approaches can handle, which are designed for small motion variation or lighting changes [2,18,37]. Figure 4. Illustration of conditional denoising, which is applied to events with a large imbalance in polarity. For each pixel in the channel that contains noise burst (in this case Pneg), Ev-TTA first searches the spatial neighborhood in the opposite polarity. If the neighborhood lacks events, the noise is removed, and the noisy channel Pneg is replaced with the denoised channel \u02dc Pneg. We propose to conditionally remove noise in low-light conditions using a criterion derived from the spatial consistency of events. Interestingly, we observed that the burst of noise is dominant in a single polarity, as shown in Figure 1. We illustrate the noise removal operation using a two-channel event representation P = {Ppos, Pneg} \u2208 RH\u00d7W \u00d72, where Ppos, Pneg are the positive and negative channels respectively. As shown in Figure 4, we denoise the channel with noise burst (in this case Pneg) if a pixel containing events lack spatial neighbors in the opposite polarity. The noise removal operation only takes place if there is a large imbalance in the ratio of positive and negative events. The imbalance is formally determined with the statistical discrepancy between the positive and negative events. Let Npos, Nneg denote the number of pixels containing positive and negative events, respectively. Assuming Npos, Nneg follow a Gaussian distribution, the following transformation to the ratio R = Npos/Nneg follows a standard Gaussian distribution [12], T( R ) = \\frac {\\mu _\\text {neg}R \\mu _\\text {pos}}{\\sqrt {\\sigma _\\text {pos}^2R^2-2\\rho \\sigma _\\text {pos}\\sigma _\\text {neg}R+\\sigma _\\text {neg}^2R^2}}, \\label {eq:transform} g (6) where \u00b5pos, \u00b5neg are the mean, \u03c3pos, \u03c3neg are the standard deviation, and \u03c1 is the cross-correlation of Npos, Nneg. To test whether the data suffers from noise burst, we transform the event ratio of the target domain using the statistics of the source domain {\u00b5pos, \u00b5neg, \u03c3pos, \u03c3neg, \u03c1} that does not suffer from low-light conditions. If the ratio transformed with Equation 6 follows a standard Gaussian distribution, we can assume that the target domain is free from noise burst. The conditional denoising operation enforces spatial consistency of the two polarities on the anchor event E1 \ffrom Section 3.1. Given a batch of anchor events from the target domain, we compute the transformed event ratios T(R) and apply statistical hypothesis testing to determine if the batch is in accordance with the source domain. If the hypothesis test reveals that the batch contains significant polarity imbalance, we remove the detected noisy pixels based on spatial consistency, as shown in Figure 4. The modified channel \u02dc Pneg replaces the original channel Pneg to form a new anchor event representation \u02dc P = {Ppos, \u02dc Pneg}, which is subsequently used to compute the losses defined in Equation 1 and 2. 
Further details about the hypothesis testing procedure are deferred to the supplementary material. Note that our noise removal method mainly targets noise burst in low light, unlike existing denoising mechanisms [10, 47, 48] which consider a much broader set of noise. Nevertheless, our method is extremely lightweight as it could be implemented with simple masking and effectively enhances performance, which we demonstrate in Section 4.2. 4. Experiments In this section, we empirically validate various aspects of Ev-TTA. In Section 4.1, we show that the proposed test-time adaptation can enhance the performance of eventbased object recognition algorithms and could be extended to steering angle prediction. We further validate the importance of each key constituent of Ev-TTA in Section 4.2. Experimental Setup We implement Ev-TTA using PyTorch [28], and accelerate it with an RTX 2080 GPU. All training is performed only for one epoch, and the evaluation results are made offline unless specified otherwise. We mostly follow the hyperparameter setup from Tent [46], and avoid tuning Ev-TTA as it would involve optimizing results in the test set. Details about the hyperparameters for each dataset is deferred to the supplementary material. Six event representations are used in the experiments: binary event image [7], event histogram [21], timestamp image [27], time surface [20], sorted time surface [2], and DiST [18]. Baselines The results are compared against four baseline methods: Tent [46], SENTRY [30], Mummadi et al. [24] and URIE [39]. Tent [46] and SENTRY [30] optimize predictions by imposing entropy minimization. Tent optimizes only the batch normalization layers to minimize the prediction entropy. SENTRY, on the other hand, conditionally optimizes the prediction entropy by assessing consistency from data augmentation. We adapt SENTRY [30] for testtime adaptation and optimize the proposed training objective only for batch normalization layers. The remaining two baselines focus on transforming the input representation to mitigate domain shift. Mummadi et al. [24] propose to apply a novel input transformation network that is trained at test time to attenuate noise and other artifacts from domain shift. URIE [39] also proposes a similar adaptation mechanism based on input transformation networks but employs a unique attention mechanism to place more weight on salient regions in the image. For a fair comparison with Ev-TTA, all baselines are trained during the test phase. 4.1. Performance Enhancement 4.1.1 Event-Based Object Recognition Controlled Environments We first evaluate Ev-TTA using N-ImageNet [18] to systematically evaluate the robustness enhancement under a vast range of changes. NImageNet is an event-based object recognition dataset that consists of the original train set and nine variants recorded under diverse camera motion and light changes. We train classifiers with six event representations [2,7,18,20,21,27] using the original N-ImageNet dataset, and evaluate the classifiers on the N-ImageNet variants. Table 1 displays the classification accuracy averaged across the six representations. The large domain shift induced by these changes causes a drastic performance drop without adaptation. EvTTA outperforms all other baselines and successfully adapts pre-trained classifiers to new, unseen environments. Notably, the adapted performance is on par with the validation accuracy from the original recording, except for two variants recorded under very low lighting (dataset # 6 and 7). 
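Returning to the conditional denoising of Section 3.2, the sketch below removes pixels from the noisy polarity channel when they have no active pixels of the opposite polarity in a small neighborhood, and does so only when a test statistic on the polarity ratio (e.g., the transformed ratio of Eq. (6) computed with source-domain statistics, passed in as `t_of_r`) deviates from a standard normal. The neighborhood size and rejection threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def remove_isolated(noisy_channel, ref_channel, kernel=3):
    """Zero out pixels of noisy_channel with no opposite-polarity events
    (ref_channel) within a (kernel x kernel) neighborhood."""
    occupancy = (ref_channel > 0).float()[None, None]                 # (1, 1, H, W)
    neighborhood = F.max_pool2d(occupancy, kernel, stride=1, padding=kernel // 2)[0, 0]
    return torch.where(neighborhood > 0, noisy_channel, torch.zeros_like(noisy_channel))

def conditional_denoise(p_pos, p_neg, t_of_r, z_thresh=2.0):
    """Apply denoising only when the transformed event ratio flags imbalance."""
    if abs(float(t_of_r)) < z_thresh:
        return p_pos, p_neg                          # ratio consistent with source statistics
    if p_neg.sum() > p_pos.sum():                    # noise burst on the negative channel
        return p_pos, remove_isolated(p_neg, p_pos)
    return remove_isolated(p_pos, p_neg), p_neg      # noise burst on the positive channel
```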
Nevertheless, a large amount of performance gain exists even in these variants, indicating the efficacy of Ev-TTA. Further, the performance enhancement is universal, with all tested event representations showing large improvement. This is verified by comparing \u2018No Adaptation (Max)\u2019 from Table 1, which is the highest accuracy among the event representations for each N-ImageNet variant, with \u2018Ev-TTA (Min)\u2019, which is the lowest accuracy for each variant. Even the best performing representation under no adaptation is inferior to the least performing representation with Ev-TTA. As Ev-TTA only intervenes with the input representation and the output probability distribution, it is effectively applicable to a wide range of event representations. We further report results for the online evaluation scheme, where evaluation is performed simultaneously with training. This reflects the practical scenario where it may not be possible to access the input data twice, and the classifier should adapt to the new environments online. The performance of \u2018Ev-TTA (Online)\u2019 in Table 1 shows that Ev-TTA can successfully perform adaptation where large performance enhancement is universal across all tested representations. While the offline setup provides more cues for adaptation as the data could be seen more than once, the gap between the online and offline evaluation results is not as significant. Such results indicate that Ev-TTA can adapt both offline and online, agnostic of the underlying \fChange None Trajectory Brightness Average Validation Dataset Orig. 1 2 3 4 5 6 7 8 9 All No Adaptation 46.76 43.32 33.78 39.56 24.78 36.16 21.52 30.31 36.60 34.91 33.44 Mummadi et al. [24] 46.27 46.04 46.35 43.27 44.61 25.59 35.23 45.73 45.48 42.07 URIE [39] 42.04 41.45 42.48 38.66 40.43 17.59 29.63 41.77 41.45 37.28 SENTRY [30] 46.63 46.51 46.45 42.11 44.44 21.92 34.78 45.53 45.13 41.50 Tent [46] 43.86 44.96 44.82 41.55 42.81 26.47 34.87 44.10 44.00 40.83 Ev-TTA 47.99 47.38 47.47 44.54 46.28 29.46 38.44 47.45 46.90 43.99 No Adaptation (Max) 45.17 36.58 42.28 26.57 38.70 24.39 32.76 38.99 37.37 35.87 Ev-TTA (Min) 45.50 46.46 46.58 43.48 43.87 27.28 37.06 46.72 46.12 42.91 Ev-TTA (Online) 44.77 44.80 45.05 41.77 43.12 26.43 35.42 44.42 44.22 41.11 Table 1. Robustness evaluation results on N-ImageNet and its variants. The results are averaged for all tested event representations. Dataset Source Day 1 Day 2 Day 3 Day 4 Day 5 None 77.30 70.47 78.53 74.88 71.36 83.37 Tent [46] 73.60 80.81 75.71 74.74 87.37 Ev-TTA 74.83 82.77 77.15 74.76 88.38 Table 2. Evaluation results on Prophesee Megapixel Dataset. Representation Sim None Tent [46] Ev-TTA Timestamp Image [27] 53.53 31.36 38.96 40.66 Binary Event Image [7] 54.63 26.62 38.67 40.94 Event Histogram [21] 44.44 21.97 30.2 34.87 Table 3. Evaluation results on Sim2Real gap. event representation. Real-World Environments We also verify the adaptation of Ev-TTA in real-world recordings with uncontrolled external settings. While N-ImageNet [18] allows for systematic evaluation across numerous environment changes, the dataset has synthetic aspects since it is recorded with monitor displayed images. To cope with such limitations, we test Ev-TTA on the Prophesee Megapixel dataset [29], which contains object labels for real-world recordings. The recordings are split by day and contain five object labels from which three (car, truck, bus) are selected for the experiments. 
We crop the object bounding boxes for use in classification and train a classifier on a recording from a single day, and test on five recordings from other days. Additional details about the dataset preprocessing are provided in the supplementary material. We compare Ev-TTA with Tent using the timestamp image [27] representation. As shown in Table 2, Ev-TTA outperforms Tent [46] in all tested recordings. Compared to the plain entropy minimization of Tent [46], Ev-TTA imposes additional loss functions using the temporal nature of events, which leads to superior performance. The results indicate the applicability of Ev-TTA to practical real-world scenarios incorporating event cameras. Simulation and Reality Gap While the main focus of Ev-TTA is on adaptation amidst external changes, we demonstrate that it could also perform adaptation to reduce the simulation to reality gap. To this end, we generate a synthetic version of N-ImageNet [18], termed SimNImageNet. SimN-ImageNet is created with the event camera simulator Vid2E [13] by moving a virtual event camera around ImageNet [33] images. Additional details about SimN-ImageNet are in the supplementary material. We evaluate Ev-TTA for Sim2Real adaptation by applying Ev-TTA to pre-trained models in SimN-ImageNet and observing the performance change in the N-ImageNet [18] validation set. Table 3 reports the results of three tested representations, namely timestamp image [27], binary event image [7], and event histogram [21]. Ev-TTA shows the highest validation accuracy in all cases, effectively reducing the performance caused by the Sim2Real gap. Due to the easy applicability of Ev-TTA, we expect the Sim2Real gap to be further reduced by combining Ev-TTA with recent advances in event vision for Sim2Real adaptation [8,22,40]. 4.1.2 Event-Based Steering Angle Prediction We test our adaptation strategy into a regression task of a steering angle prediction as described in Section 3.1. We use the DDD17 dataset [5], which contains approximately 12 hours of annotated driving recordings, captured in various external conditions and organized by day. For evaluation, we train a steering angle estimator algorithm using recordings from a single day and further evaluate the estimator on four other days. The steering angle estimator is designed as a ResNet34 [16] backbone receiving event histograms [21] as input, following Maqueda et al. [21]. We report the adaptation results in Table 4, where the RMSE(\u25e6) with the ground-truth steering angle is measured. Ev-TTA outperforms Tent [46] in all tested scenarios. By employing a subtle change in formulation, Ev-TTA could be extended to regression tasks and successfully reduce the prediction error. However, the performance improvement \fScene Type City (Source) Freeway City Town City Time Day (Source) Evening Night Day Day None 25.48 6.15 16.09 32.01 43.02 Tent [46] 6.52 15.65 30.94 41.66 Ev-TTA 5.84 15.45 30.65 41.44 Table 4. Evaluation results on steering angle prediction using the DDD17 [5] dataset. The RMSE(\u25e6) is reported. Method Validation 6 Validation 7 Tent [46] 21.16 30.02 Tent + LPS 26.51 35.83 Tent + LPS + LSE 26.82 36.87 Tent + LSE (SENTRY [30]) 20.13 33.92 Tent + LSE (Ignore Inconsistency) 27.13 36.69 Tent + LPS + LSE + CD (Ev-TTA) 29.20 38.45 Table 5. Ablation study on the key components of Ev-TTA. LPS, LSE, CD denotes prediction similarity loss, selective entropy loss, and conditional denoising, respectively. is not as dramatic compared to the classification tasks. 
A more effective approach for test-time adaptation in regression tasks is left as future work. 4.2. Ablation Study In this section, we ablate various components of EvTTA. Experiments are conducted in the # 6 and 7 variants from N-ImageNet [18], using the timestamp image [27]. These are the most challenging splits among the N-ImageNet variants as they are recorded in low light conditions and thus contain a large amount of noise as shown in Figure 1, whose performance is also presented in Table 1. We first examine the effect of the key constituents of EvTTA, namely prediction similarity loss, selective entropy loss, and conditional denoising. As shown in Table 5, by imposing prediction similarity loss LPS on Tent [46] (second row), a large performance enhancement takes place. Similarly, the selective entropy loss LSE also plays an important role in performance gain (third row). Compared to SENTRY [30], which maximizes entropy of inconsistent samples (fourth row), simply ignoring such samples (Tent + LSE) is much more effective (fifth row). Finally, the conditional noise removal (CD) (Section 3.2) leads to significant performance enhancement on prevalent noise bursts under low-light conditions, which can be deduced by comparing the third and sixth row of Table 5. We further investigate the effect of the number of testtime training samples. The six representations from Table 1 are trained with varying numbers of samples and evaluated on all variants of the N-ImageNet dataset [18]. Figure 5 shows the evaluation accuracy averaged across all representations, where the results are split by N-ImageNet variants with brightness and trajectory changes. We additionally deFigure 5. Effect of number of training samples on adaptation. lineate the upper bound in performance by performing training with ground-truth labels for one epoch using the same number of training samples. As the number of training samples increases, the average accuracy approaches the upper bound. Furthermore, even with a very small set (\u223c500 samples) of training data, large performance enhancement from \u2018No Adaptation\u2019 is observable. This demonstrates the practicality of Ev-TTA, as it can adapt in novel environments with only a small number of training data. 5." + }, + { + "url": "http://arxiv.org/abs/2112.04120v2", + "title": "Feature Statistics Mixing Regularization for Generative Adversarial Networks", + "abstract": "In generative adversarial networks, improving discriminators is one of the\nkey components for generation performance. As image classifiers are biased\ntoward texture and debiasing improves accuracy, we investigate 1) if the\ndiscriminators are biased, and 2) if debiasing the discriminators will improve\ngeneration performance. Indeed, we find empirical evidence that the\ndiscriminators are sensitive to the style (e.g., texture and color) of images.\nAs a remedy, we propose feature statistics mixing regularization (FSMR) that\nencourages the discriminator's prediction to be invariant to the styles of\ninput images. Specifically, we generate a mixed feature of an original and a\nreference image in the discriminator's feature space and we apply\nregularization so that the prediction for the mixed feature is consistent with\nthe prediction for the original image. We conduct extensive experiments to\ndemonstrate that our regularization leads to reduced sensitivity to style and\nconsistently improves the performance of various GAN architectures on nine\ndatasets. 
In addition, adding FSMR to recently-proposed augmentation-based GAN\nmethods further improves image quality. Our code is available at\nhttps://github.com/naver-ai/FSMR.", + "authors": "Junho Kim, Yunjey Choi, Youngjung Uh", + "published": "2021-12-08", + "updated": "2022-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Generative adversarial networks (GANs) [8] have achieved signi\ufb01cant development over the past several years, enabling many computer vision and graphics applications [4, 5, 14, 22, 23, 25, 31, 44]. On top of the carefully designed architectures [3,18,20,21,30,32,40], GANspeci\ufb01c data augmentation and regularization techniques have been keys for improvements. Regularization techniques [9,15\u201317,28,29,41,43] stabilize the training dynamics by penalizing steep changes in the discriminator\u2019s output within a local region of the input. On the other hand, data augmentation techniques [19, 42] prevent the discriminator from over\ufb01tting as commonly adopted in classi\ufb01cation do\u2020Corresponding author. mains. Note that both efforts aim to guide the discriminator not to \ufb01xate on particular subsets of observations and to generalize over the entire data distribution. Texture has been shown to provide a strong hint for classi\ufb01ers [6,7,10]. If such a hint is suf\ufb01cient enough to achieve high accuracy, the models tend not to learn the complexity of the intended task [2]. As the GAN discriminators are inherently classi\ufb01ers, we presume that they also tend to rely on textures to classify real and fake images. Accordingly, the generators would focus on synthesizing textures which are regarded as real by the biased discriminator. In this paper, we answer the two questions: 1) are discriminators sensitive to style (e.g., texture and color)? and 2) if yes, will debiasing the discriminators improve the generation performance? To answer the \ufb01rst question, we de\ufb01ne style distance as shown in Figure 1a. An ideal discriminator would produce small style distance because the two images have the same content. As we do not have a unit of measurement, we compute relative distance: the style distance divided by the content distance. In other words, we measure the sensitivity to style as multiples of the distance between images with different content. Surprisingly, Figure 1b shows that all baselines have noticeable values in relative distance. To answer the second question, we debias the discriminators and measure improvements in generative performance. A straightforward approach for debiasing is to suppress the difference in the discriminator\u2019s output with respect to the style changes of the input image. Indeed, we observe that imposing a consistency loss [41,43] on the discriminator between the original image and its stylized version improves the generator as mimicking contents becomes easier than mimicking style to fool the discriminator. However, this approach leads to other dif\ufb01culties: the criteria for choosing style images are unclear, and stylizing all training images with various style references requires a huge computational burden and an external style dataset. To ef\ufb01ciently address the style bias issue, we propose feature statistics mixing regularization (FSMR) which encourages the discriminator\u2019s prediction to be invariant to the styles of input images by mixing feature statistics within the discrim1 arXiv:2112.04120v2 [cs.CV] 25 Mar 2022 \finator. 
Specifically, we generate mixed features by combining original and reference features in the discriminator's intermediate layers and impose consistency between the predictions for the original and the mixed features. In the experiments, we show that FSMR indeed induces the discriminator to have reduced sensitivity to style (Section 4.1). We then provide thorough comparisons to demonstrate that FSMR consistently improves various GAN methods on benchmark datasets (Section 4.2). Our method can be easily applied to any setting without burdensome preparation. Our implementation and models will be publicly available online for the research community. Our contributions can be summarized as follows:
• To the best of our knowledge, our work is the first style bias analysis for the discriminator of GANs.
• We define the relative distance metric to measure the sensitivity to styles (Section 2).
• We propose feature statistics mixing regularization (FSMR), which makes the discriminator's prediction robust to style (Section 3).
• FSMR does not use external style images and outperforms the straightforward solution with external style images (Section 4.1).
• FSMR improves five baselines on all standard and small datasets regarding FID and relative distance (Section 4.2, 4.3).
2. Style-bias in GANs
Our work is motivated by the recent finding that CNNs are sensitive to style rather than content, i.e., ImageNet-trained CNNs are likely to make a style-biased decision when the style cue and the content cue conflict [7]. To quantitatively measure how sensitive a discriminator is to style, we compute style distance, content distance, and then relative distance. Afterward, we describe a straightforward baseline solution to reduce the discriminator's sensitivity to style.
2.1. Style distance and content distance
We define a quantitative measure of how sensitive a discriminator is to style. First, given a set of training images, we utilize a style transfer method to synthesize differently stylized images of the same content. The styles are randomly chosen from WikiArt [1]. Figure 1a shows some example stylized images from AFHQ [5]. We define the style distance ds between images with different styles and the same content. The content distance dc is defined vice versa:
ds(c, s1, s2) = d(T(c, s1), T(c, s2)),  (1)
dc(s, c1, c2) = d(T(c1, s), T(c2, s)),  (2)
where T(c, s) transfers the style of the reference image s ∈ R^{C×H×W} to the content image c ∈ R^{C×H×W}, and d measures the cosine distance between the last feature vectors of the discriminator. In practice, we use adaptive instance normalization (AdaIN) [13] as T. Figure 1 illustrates the process of calculating the content and style distances in Eq. (1) and (2).
Figure 1. (a) The style transfer method T(c, s) transfers the style of s on the content of c. We define style distance as the output difference due to style variations. Content distance is defined vice versa. (b) Relative distance across various GAN methods. Relative distance indicates how sensitive a discriminator is to style changes (Eq. 3). See Section 2 for details.
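To make Eqs. (1) and (2) concrete, the sketch below shows one way the two distances could be computed. It is only an illustration, not the authors' code: stylize stands in for the style transfer T(c, s) and disc_features for the discriminator's last feature vector, both hypothetical placeholders.

import numpy as np

def cosine_distance(u, v, eps=1e-8):
    # d(·,·): cosine distance between flattened feature vectors
    u, v = u.ravel(), v.ravel()
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def style_distance(disc_features, stylize, c, s1, s2):
    # Eq. (1): same content c, two different style references s1, s2
    return cosine_distance(disc_features(stylize(c, s1)), disc_features(stylize(c, s2)))

def content_distance(disc_features, stylize, s, c1, c2):
    # Eq. (2): same style reference s, two different contents c1, c2
    return cosine_distance(disc_features(stylize(c1, s)), disc_features(stylize(c2, s)))

# Averaging the ratio style_distance / content_distance over sampled pairs gives the
# relative distance described in Section 2 (Eq. 3).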
Figure 2. Overview of feature statistics mixing regularization (Section 3.2). Within the forward pass in the discriminator, we perturb features by applying AdaIN with a different sample. In deeper layers, the perturbations are applied recursively. A scalar α ∼ Uniform(0, 1) moderates their strength. Then we enforce similarity between the original output and the perturbed one.
As we do not have a unit of measurement, we compute the relative distance ρ, i.e., the style distance divided by the content distance:
ρ = E_{c1,c2 ∈ C, s1,s2 ∈ S} [ ds(c1, s1, s2) / dc(s1, c1, c2) ],  (3)
where C and S denote the training dataset and an external style dataset, respectively. The larger the ρ value, the more sensitive the discriminator is to style when classifying real and fake images. We will use the relative distance ρ for further analysis from here on.
Our goal is to reduce the style distance so that the discriminators consider contents more important and produce richer gradients to the generators. The relative distances of ImageNet-pretrained ResNet50 and ResNet50 pretrained for classifying Stylized ImageNet [7] support the validity of the metric. As the relative distance of the latter is less than that of the former, and the latter is proven to be less biased toward style, we argue that discriminators with lower relative distance are less sensitive to style (figures are deferred to Section 4.2).
2.2. Baseline: On-the-fly stylization
A well-known technique for preventing classifiers from being biased toward styles is to augment the images with their style-transferred versions, especially using the WikiArt dataset [1] as style references [7]. It works because the style transfer does not alter the semantics of the original images or the anticipated output of the network. On the other hand, in GAN training, style transfer drives the images out of the original data distribution, thus changing the anticipated output of the discriminator [19]. There are two workarounds for such a pitfall: 1) applying stochastic augmentations to both real and fake data [19, 42], and 2) penalizing the output difference caused by the augmentation instead of feeding the augmented images to the discriminator [41, 43]. As our goal is to make the discriminator less sensitive to style changes, we take the second approach as a straightforward baseline, for example, imposing consistency on the discriminator between the original images c and their randomly stylized images T(c, s) by
L_consistency = E_{c,s} [ (D(c) − D(T(c, s)))^2 ],  (4)
where D(·) denotes the logit from the discriminator. However, this approach raises other questions and difficulties: the criteria for choosing the style images are unclear, and stylizing each image on-the-fly requires additional costs and an external dataset. Another option is to prepare a stylized dataset instead of on-the-fly stylization, but it further requires prohibitively large storage. To combat this, we propose an efficient and generally effective method, feature statistics mixing regularization, whose details are described in the next section.
3. Proposed method
We first describe the traditional style transfer algorithm, AdaIN, as a preliminary. Then, we discuss how our proposed method, feature statistics mixing regularization (FSMR), incorporates AdaIN to induce the discriminator to be less sensitive to style.
3.1. Preliminary: AdaIN
Instance normalization (IN) [35] performs a form of style removal by normalizing feature statistics. Adaptive instance normalization (AdaIN) [13] extends IN to remove the existing style from the content image and transfer a given style.
Specifically, AdaIN transforms content feature maps x into feature maps whose channel-wise mean and variance are the same as those of style feature maps y:
AdaIN(x, y) = σ(y) · ( (x − µ(x)) / σ(x) ) + µ(y),  (5)
where x, y ∈ R^{C×H×W} are features obtained by a pretrained encoder, and µ(·) and σ(·) denote their mean and standard deviation over the spatial dimensions, calculated for each channel. Then, through a properly trained decoder, the transformed features become a stylized image (AdaIN may denote the full stylization process, but in this paper it denotes the operation on the feature maps in Eq. 5). Much work has adopted AdaIN within the generator for improving generation performance [5, 14, 20, 22, 23, 25]. On the contrary, our proposed method (FSMR) employs it within the discriminator for efficient regularization, as described below.
3.2. Feature statistics mixing regularization
Our goal is to keep the discriminator from relying heavily on the styles of the input images, without suffering from the difficulties of the on-the-fly stylization (Section 2.2). Hence, we propose feature statistics mixing regularization (FSMR), which does not require any external dataset and can be efficiently implemented as per-layer operations in the discriminator. FSMR mixes the mean and standard deviation of the intermediate feature maps in the discriminator using another training sample and penalizes the discrepancy between the original output and the mixed one. Specifically, we define feature statistics mixing (FSM) for feature maps x with respect to feature maps y to be AdaIN followed by linear interpolation:
FSM(x, y) = αx + (1 − α)AdaIN(x, y),  (6)
where α ∼ Uniform(0, 1) controls the intensity of feature perturbation. We suppose that varying α will let the discriminator learn from various strengths of regularization.
Algorithm 1 FSM Pseudocode, Tensorflow-like
# N: batch size, H: height, W: width, C: channels
def FSM(x, y, eps=1e-5):
    x_mu, x_var = tf.nn.moments(x, axes=[1, 2], keepdims=True)
    y_mu, y_var = tf.nn.moments(y, axes=[1, 2], keepdims=True)
    # normalize
    x_norm = (x - x_mu) / tf.sqrt(x_var + eps)
    # de-normalize
    x_fsm = x_norm * tf.sqrt(y_var + eps) + y_mu
    # combine
    alpha = tf.random.uniform(shape=[])
    x_mix = alpha * x + (1 - alpha) * x_fsm
    return x_mix  # NxHxWxC
Denoting the i-th layer of the discriminator as f_i, a content image as c, and a style reference image as s, which is randomly chosen from the current mini-batch samples, we define the mixed feature maps x̃ and ỹ through feed-forward operations with FSM:
x̃_1 = x_1 = f_1(c),  ỹ_1 = y_1 = f_1(s),
x̃_{i+1} = f_{i+1}(FSM(x̃_i, ỹ_i)),  ỹ_{i+1} = f_{i+1}(FSM(ỹ_i, x̃_i)).  (7)
Then the final output logit of the mixed feed-forward pass through the discriminator with n convolutional layers becomes:
D_FSM(c, s) = Linear(x̃_n).  (8)
Given the original output D(c) and the mixed output D_FSM(c, s), we penalize their discrepancy with a loss:
L_FSMR = E_{c,s∼p_data} [ (D(c) − D_FSM(c, s))^2 ].  (9)
Figure 2 illustrates the full diagram of FSMR. This loss is added to the adversarial loss [8] when updating the discriminator parameters. It regularizes the discriminator to produce consistent outputs under the different feature statistics varying through the layers. Our design of L_FSMR is general-purpose and can thereby be combined with other methods [19, 20, 42].
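To spell out how Eqs. (7)–(9) fit together, below is a minimal NumPy sketch of the mixed forward pass and the FSMR loss. It is an illustration under assumptions, not the authors' implementation (that is the Appendix C pseudo-code): layers is a hypothetical list of callables standing in for the discriminator blocks f_1..f_n, linear_head for its final linear layer, and d_logit for the unperturbed logit D(c) computed in the ordinary pass.

import numpy as np

def fsm_np(x, y, eps=1e-5):
    # NumPy analogue of Eq. (6) for a single sample of shape (H, W, C)
    x_mu, x_sigma = x.mean(axis=(0, 1), keepdims=True), x.std(axis=(0, 1), keepdims=True)
    y_mu, y_sigma = y.mean(axis=(0, 1), keepdims=True), y.std(axis=(0, 1), keepdims=True)
    x_adain = (x - x_mu) / (x_sigma + eps) * (y_sigma + eps) + y_mu
    alpha = np.random.uniform()                     # α ∼ Uniform(0, 1)
    return alpha * x + (1.0 - alpha) * x_adain

def fsmr_loss(layers, linear_head, d_logit, c, s):
    # Eq. (7): recursive feature statistics mixing through the discriminator layers
    x_t, y_t = layers[0](c), layers[0](s)
    for f in layers[1:]:
        x_t, y_t = f(fsm_np(x_t, y_t)), f(fsm_np(y_t, x_t))
    d_fsm = linear_head(x_t)                        # Eq. (8): D_FSM(c, s)
    return (d_logit - d_fsm) ** 2                   # Eq. (9), per sample; average over the mini-batch

In training, this term would simply be added to the discriminator's adversarial loss, as stated above.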
As shown in Algorithm 1, FSM can be implemented with only a few lines of code. Also, we provide the Tensor\ufb02ow-like pseudo-code of FSMR in Appendix C. 3.3. Visualizing the effect of FSM To visually inspect the effect of FSM in the discriminator, we train a decoder (same architecture as the one for AdaIN [13]) which reconstructs the original image from the 32 \u00d7 32 feature maps of the original discriminator. 4 \f(a) Style (b) Content (c) Stylization by AdaIN [13] (d) Visualization of FSM Figure 3. Visualization of the effect of FSM (Section 3.3). (a) Example style images. (b) Example content images. (c) AdaIN largely distorts \ufb01ne details. (d) Reconstruction of FSMed features preserves them. In Figure 3, the content images go through the discriminator with FSM on all layers with respect to the style images to produce stylized (i.e., FSMed) intermediate features. Then the learned decoder synthesizes the result images from the FSMed features. The FSMed images have similar global styles to the style images but contain semantics of the content images. It has a similar effect to AdaIN but better preserves the \ufb01ne details of the content. We suggest that it is the key for the discriminator to be able to provide gradients toward more realistic images for the generator leading to higher quality images than the on-the-\ufb02y stylization baseline (Section 4.1). 4. Experiments We conduct extensive experiments on six datasets of CIFAR-10 [26], FFHQ [20], AFHQ [5], CelebA-HQ [18], LSUN Church [37], and MetFaces [19] with \ufb01ve GAN methods such as DCGAN [32], bCRGAN [43], StyleGAN2 [21], DiffAugment [42], and ADA [19]. We choose the datasets and baseline methods following the recent experimental setups [19,42]. We use the relative distance \u03c1 (Eq. 3), Fr\u00b4 echet inception distance (FID) [11], and inception score (IS) [33] as evaluation metrics. When we compute FID, we use all training samples and the same number of fake samples. All the baseline methods are trained using the of\ufb01cial implementations provided by the authors. See Appendix A for more details. We next conduct thorough experiments to demonstrate the superiority of our method over the straightforward solution and the baselines. 4.1. Comparison with the on-the-\ufb02y stylization In this section, we compare our method with the on-the\ufb02y stylization, i.e., generating stylized images via AdaIN during training and applying consistency regularization (Section 2.2). To perform this, we collect 100 style images from WikiArt [1] and randomly sample one for stylizing each image during training. Note that, unlike the on-the\ufb02y stylization, FSMR does not rely on external style images. We conduct experiments on \ufb01ve benchmark datasets: CIFAR-10, CelebA-HQ, FFHQ, AFHQ, and LSUN Church. Table 1 compares effect of regularization with on-the-\ufb02y stylization and FSMR in FID. While the former improves FID compared to the baselines to some extent, improvements due to FSMR are larger in all cases. For comparison with additional networks and datasets, see Appendix F. To measure the discriminator\u2019s sensitivity to style, we compute the relative distance \u03c1 (Eq. 3) for each method. Figure 4 shows the relative distance on CIFAR-10, FFHQ, and AFHQ. As one can easily expect, utilizing the stylized dataset reduces the discriminator\u2019s sensitivity toward style. It is worth noting that FSMR not only consistently reduces the sensitivity but also outperforms the competitor in all cases. 
This is a very meaningful result because FSMR does not use any external stylized dataset but it uses only the original images during training. We also observe that the lower relative distances agree with the lower FIDs within the same environment. We compare the time and memory costs in Table 1. FSMR requires 3.0\u223c7.4% extra training time, but the onthe-\ufb02y method requires 17.2\u223c26.8% extra training time for additional feedforward passes in image stylization. In addition, the on-the-\ufb02y method requires 70.0\u223c87.5% extra GPU memory to hold pretrained networks and features for image stylization, but FSMR only adds negligible (\u223c2%) GPU memory. To avoid extra costs for the on-the-\ufb02y stylization during training, we can prepare the stylized datasets before training (i.e., different approach but has the same effect as the on-the-\ufb02y stylization). However, the one-to-many stylization in advance requires heavy computation and prohibitively large storage as shown in Table 2. For example, to construct the stylized dataset for 1024\u00d71024 FFHQ with 100 style references, we need to process and store more than 7.0M (70k \u00d7 100) images (8.93TB). As an ablation study, we push toward harsher regularization: using randomly shifted feature maps instead of FSM. We observe that using arbitrary mean and standard deviation in AdaIN (Eq. 5) signi\ufb01cantly hampers adversar5 \fFigure 4. The relative distance of the discriminators on CIFAR-10, FFHQ, and AFHQ. We observe a positive correlation with FID in each case. See Appendix F for more results on other baselines and datasets. Method Standard dataset Costs CIFAR-10 FFHQ AFHQ CelebA-HQ LSUN Church Time (Hours) Memory (GB) DCGAN 15.89\u00b10.12 7.82\u00b10.10 17.27\u00b10.13 6.71\u00b10.09 17.33\u00b10.11 25.4 (1.5\u2020) 5 (4\u2020) DCGAN w/ on-the-\ufb02y 15.88\u00b10.11 7.33\u00b10.17 14.22\u00b10.15 5.41\u00b10.10 26.05\u00b10.14 31.5 (1.8\u2020) 8.5 (7.5\u2020) DCGAN w/ FSMR 14.98\u00b10.09 6.76\u00b10.08 13.19\u00b10.09 5.23\u00b10.10 13.84\u00b10.10 26.2 (1.6\u2020) 5.1 (4\u2020) bCRGAN 12.46\u00b10.09 6.43\u00b10.08 9.35\u00b10.10 4.31\u00b10.09 13.20\u00b10.10 26.1 (1.6\u2020) 5 (4\u2020) bCRGAN w/ on-the-\ufb02y 12.43\u00b10.10 5.20\u00b10.09 8.63\u00b10.12 3.47\u00b10.09 10.51\u00b10.10 33.1 (1.9\u2020) 8.5 (7.5\u2020) bCRGAN w/ FSMR 11.17\u00b10.07 4.68\u00b10.08 8.33\u00b10.08 3.43\u00b10.09 9.09\u00b10.07 27.7 (1.7\u2020) 5.1 (4\u2020) Table 1. FID comparison on DCGAN variants with FSMR and the baseline on-the-\ufb02y stylization. The bold numbers indicate the best FID for each baseline. We report the mean FID over 3 training runs together with standard deviations and the additional costs. All image resolutions are set to 128\u00d7128 due to the backbone architecture except CIFAR-10 (32\u00d732). Time and memory are measured in 128\u00d7128 images, and \u2020 indicates what is measured in 32 \u00d7 32 images. Time means a full training time. CIFAR-10 CelebA-HQ FFHQ AFHQ LSUN Church Time 8 10 30 5 40 Table 2. The time to create the stylized dataset for each standard dataset, measured in hours. ial training between a generator and a discriminator, i.e., the training diverges. On the other hand, FSMR using indomain samples shows the anticipated effect. 4.2. Standard datasets We evaluate the effectiveness of FSMR on three benchmark datasets, all of which have more than 10k training images: CIFAR-10 (50k), FFHQ (70k), and AFHQ (16k). 
Table 3 (left) shows that FSMR consistently improves StyleGAN2 even with existing augmentation techniques [19,42]. We emphasize that FSMR enhances baselines by a large gap on AFHQ, in which case the discriminator might be easily biased toward color and texture of the animals. Figure 5 shows the relative distances on CIFAR-10, FFHQ, and AFHQ for StyleGAN2 variants. FSMR reduces the relative distances in all cases and they agree with the improvements in FID. We also provide the relative distances of ResNet50 networks pretrained on ImageNet and Stylized ImageNet as references in each dataset (Section 2.1). As the lower relative distances agree with the higher classi\ufb01cation performances, the lower relative distances of the discriminator agree with the higher generative performances. In addition, Table 4 demonstrates that applying FSMR on StyleGAN2 variants further improves both FID and IS for both unconditional and class-conditional generation on CIFAR-10. For qualitative results, see Figure 6 and Appendix F. 4.3. Small datasets. GANs are known to be notoriously dif\ufb01cult to train on small datasets due to limited coverage of the data manifold. Being able to train GANs on small datasets would lead to a variety of application domains, making a rich synthesis experience for the users. We tried our method with \ufb01ve small datasets that consist of a limited number of training images such as MetFaces (1k), AFHQ Dog (5k), AFHQ Cat (5k). 6 \fMethod Standard dataset Small dataset CIFAR-10 FFHQ AFHQ MetFaces AFHQ Dog AFHQ Cat AFHQ Wild StyleGAN2 3.89\u00b10.07 5.62\u00b10.10 11.37\u00b10.03 51.88\u00b10.44 19.65\u00b10.07 8.37\u00b10.06 4.17\u00b10.06 + FSMR 3.76\u00b10.03 3.74\u00b10.03 8.59\u00b10.03 45.47\u00b10.42 18.08\u00b10.07 6.69\u00b10.04 3.96\u00b10.03 StyleGAN2-ADA 3.23\u00b10.06 4.05\u00b10.07 7.73\u00b10.11 29.17\u00b10.08 13.56\u00b10.10 6.64\u00b10.09 3.74\u00b10.14 + FSMR 2.90\u00b10.08 3.91\u00b10.06 6.12\u00b10.10 27.81\u00b10.11 11.76\u00b10.14 5.71\u00b10.10 3.24\u00b10.16 StyleGAN2-DiffAug 3.23\u00b10.08 5.35\u00b10.09 7.52\u00b10.08 32.96\u00b10.08 16.92\u00b10.06 6.39\u00b10.05 4.39\u00b10.07 + FSMR 2.93\u00b10.05 4.99\u00b10.08 6.53\u00b10.05 29.98\u00b10.15 14.55\u00b10.18 6.29\u00b10.07 4.28\u00b10.04 Table 3. FID comparison on StyleGAN2 variants. The bold numbers indicate the best FID for each baseline. We report the mean FID over 3 training runs together with standard deviations. FSMR improves the baselines in all cases. Figure 5. The relative distance of the discriminators on CIFAR-10, FFHQ, and AFHQ for StyleGAN2 variants. The higher \u03c1 value, the more sensitive the discriminator is to style when classifying real and fake. We report the reference values for the relative distances using ResNet50 trained on ImageNet (red line) and ResNet50 trained on Stylized ImageNet (blue line) [7]. As the lower relative distances agree with the higher classi\ufb01cation performances, the lower relative distances of the discriminator agree with the higher generative performances. Method Unconditional Conditional FID \u2193 IS \u2191 FID \u2193 IS \u2191 StyleGAN2 3.89 9.36 3.52 9.77 + FSMR 3.76 9.58 3.35 10.05 StyleGAN2-ADA 3.23 9.47 2.76 9.98 + FSMR 2.90 9.68 2.63 10.03 StyleGAN2-DiffAug 3.23 9.63 3.10 9.84 + FSMR 2.93 9.81 2.87 10.02 Table 4. FID and inception score comparison on CIFAR-10 across StyleGAN2 variants. Bold face indicates the best scores for each baseline. We report the mean scores over three training runs. AFHQ Wild (5k). 
As shown in Table 3 (right), we can observe that FSMR improves FID stably for all the baseline models, even if the number of data is small. See Figure 6 and Appendix F for qualitative results. 5. Related Work Improving discriminators. While generative adversarial networks [8] have been developing regarding their network architectures [20,21,28,32], regularizing the discriminator has been simultaneously considered as an important technique for stabilizing their adversarial training. Examples include instance noise [15], gradient penalties [9, 28], spectral normalization [29], contrastive learning [16, 17], and consistency regularization [41, 43]. They implicitly or explicitly enforce smooth changes in the outputs within some perturbation of the inputs. Recent methods employ data augmentation techniques to prevent discriminator over\ufb01tting [19,42]. While they explicitly augment the images, our method implicitly augments the feature maps in the discriminator. In addition, while they use standard transformations which are used in training classi\ufb01ers, our method regularizes the discriminator to produce small changes when the style of the input image is changed and it effectively prevents discriminator from being biased toward style. Bias toward style. Convolutional neural networks are biased toward style (texture) when they are trained for classi7 \fFFHQ METFACES AFHQ CAT, DOG, WILD, 2562 CIFAR-10 70k img, 2562 1336 img, 2562 5653 img 5239 img 5238 img 50k, 10 cls, 322 Figure 6. Examples of generated images for several datasets trained using FSMR. Please note that we do not use transfer learning on MetFaces as opposed to ADA. See Appendix F for more uncurated results. \ufb01cation [6, 7, 10]. The straightforward solution for reducing the bias is randomizing textures of the samples by a style transfer algorithm [7]. It is a kind of data augmentation technique in that the style transfer prevents classi\ufb01ers from over\ufb01tting to styles as geometric or color transformations prevent classi\ufb01ers from over\ufb01tting to certain positions or colors. As simply perturbing the data distribution in GAN training results in perturbed fake distribution [19], we introduce an extra forward pass with an implicitly stylized feature and impose consistency in the output with respect to the original forward pass (Eq. 10). While the linear interpolation of our mixing resembles mixup [39], we do not interpolate target outputs but only soften the changes in feature statistics. Style mixing regularization [20] may look similar to FSMR in that it also mixes two styles. It mixes styles in the generator and encourages the generator to produce mixed images that will be used in the adversarial training for both the generator and the discriminator. Its goal is to divide the role of layers and it has little effect on performance (4.42\u21924.40, FFHQ, StyleGAN, 1024x1024 resolution). On the other hand, FSMR implicitly mixes styles in the discriminator and suppresses sensitivity to style by imposing consistency regularization to the discriminator. FSMR has a great in\ufb02uence on performance improvement (5.52\u21923.72, FFHQ, StyleGAN2, 256x256 resolution). 6. Limitation and Discussion As shown in various experiments, we have found that the discriminators have a bias for style, which enables numerical representation through the relative distance metric. However, we have not found out the optimal value that how much relative distance should be reduced for each model. 
We observed through the reference value in Figure 5, that even though we could not \ufb01nd the optimal value, the relationship where the relative distance decreases, the less bias to style reduces. We have proposed FSMR, which reduces the bias to style using only internal training datasets, rather than using external datasets, and proved that FSMR is very simple yet effective. In future work, it would be worthwhile to search the optimal value for the relative distances and to unify the relative distances among different models. 7." + }, + { + "url": "http://arxiv.org/abs/2112.01041v2", + "title": "N-ImageNet: Towards Robust, Fine-Grained Object Recognition with Event Cameras", + "abstract": "We introduce N-ImageNet, a large-scale dataset targeted for robust,\nfine-grained object recognition with event cameras. The dataset is collected\nusing programmable hardware in which an event camera consistently moves around\na monitor displaying images from ImageNet. N-ImageNet serves as a challenging\nbenchmark for event-based object recognition, due to its large number of\nclasses and samples. We empirically show that pretraining on N-ImageNet\nimproves the performance of event-based classifiers and helps them learn with\nfew labeled data. In addition, we present several variants of N-ImageNet to\ntest the robustness of event-based classifiers under diverse camera\ntrajectories and severe lighting conditions, and propose a novel event\nrepresentation to alleviate the performance degradation. To the best of our\nknowledge, we are the first to quantitatively investigate the consequences\ncaused by various environmental conditions on event-based object recognition\nalgorithms. N-ImageNet and its variants are expected to guide practical\nimplementations for deploying event-based object recognition algorithms in the\nreal world.", + "authors": "Junho Kim, Jaehyeok Bae, Gangin Park, Dongsu Zhang, Young Min Kim", + "published": "2021-12-02", + "updated": "2022-03-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Event cameras are neuromorphic vision sensors that encode visual information as a sequence of events, and have a myriad of bene\ufb01ts such as high dynamic range, low energy consumption, and microsecond-scale temporal resolution. However, algorithms for processing event data are still at their nascency. This is primarily due to the lack of a large, \ufb01ne-grained dataset for training and evaluating different event-based vision algorithms. While the number of event camera datasets surged in the past few years, many \ufb01ne-grained datasets lack size [39], whereas largescale real-world datasets lack label diversity [50]. Large amounts of publicly available data are one of the key facFigure 1: Sample events from N-ImageNet displayed along with their RGB counterparts from ImageNet [46]. Positive, negative events are shown in blue and red, respectively. tors in the recent success of computer vision. For example, ImageNet [46] triggered the development of accurate, high performance object recognition algorithms [15, 23] whereas MS-COCO [29] led to the advent of eloquent image captioning systems [60]. We provide N-ImageNet, an event camera dataset targeted for object recognition that surpasses all existing datasets in both size and label granularity as summarized in Table 1 and Figure 2. 
Since it is infeasible to manually obtain real-world instances of thousands of object categories, we opt to generate events by moving the sensor in front of an LCD monitor which displays images from ImageNet [46] as in [39, 27] using programmable hardware. N-ImageNet is projected to function as a challenging benchmark for event-based object recognition algorithms. As shown in Table 1, evaluations of various classi\ufb01ers on N-ImageNet demonstrate a large room for improvement, in contrast to popular benchmarks such as N-Cars [50] and NCaltech101 [39]. We also experimentally justify the effectiveness of N-ImageNet pretraining. Models pretrained on arXiv:2112.01041v2 [cs.CV] 28 Mar 2022 \fDataset # of Samples # of Classes Top Accuracy Robustness Quanti\ufb01able? N-Cars [50] 24029 2 95.8 [50] \u2a09 N-Caltech101 [39] 8709 101 90.6 [13] \u2a09 CIFAR10-DVS [27] 10000 10 69.2 [47] \u2a09 ASL-DVS [4] 100800 24 94.6 [20] \u2a09 N-MNIST [39] 70000 10 99.2 [20] \u2a09 MNIST-DVS [49] 30000 10 99.1 [20] \u2a09 N-SOD [44] 189 4 97.14 [44] \u2a09 DVS128-Gesture [2] 1342 11 99.62 [19] \u2a09 N-ImageNet 1781167 1000 48.93 \u25ef Table 1: Comparison of N-ImageNet with other existing benchmarks for event classi\ufb01cation. N-ImageNet show a large amount of performance gain in various object recognition benchmarks, and are capable of rapidly generalizing to new datasets even with a small number of training samples. We further analyze the robustness of event-based object recognition algorithms amidst changes in camera trajectories and lighting conditions. Event cameras can operate in highly dynamic scenes and low light environments, but events produced in such conditions tend to have more noise and artifacts from motion blur [9]. We record variants of N-ImageNet under diverse camera trajectories and lighting, and quantify the signi\ufb01cant performance degradation of event-based classi\ufb01ers under environment changes. To the best of our knowledge, our dataset is the \ufb01rst event camera dataset capable of providing quantitative benchmarks for robust event-based object recognition, as shown in Table 1. In addition, we propose a simple event representation, called DiST (Discounted Sorted Timestamp Image), that shows improved robustness under the external variations. DiST penalizes events that are more likely to be noise, and uses sorted indices of event timestamps to ensure durability against speed change. To summarize, our main contributions are (i) NImageNet, the largest \ufb01ne-grained event camera dataset to date, thus serving as a challenging benchmark, (ii) NImageNet pretraining which leads to considerable performance improvement, (iii) N-ImageNet variants that enable quantitative robustness evaluation of event-based object recognition algorithms, and (iv) an event camera representation exhibiting enhanced robustness in diverse environment changes. 2. Related Work Event Camera Datasets With rising interests in eventbased vision, the \ufb01eld has seen a wide range of event camera datasets targeted towards various computer vision tasks, such as object detection [42, 8], optical \ufb02ow estimation [61, 45], and image reconstruction [48, 33, 56, 55, 54, 58, 28]. 
For event-based object recognition in particular, diverse datasets [39, 49, 27, 4, 50, 34, 18, 2, 6, 30, 52, 53, 44] 102 103 104 105 106 # of data (log scale) 100 101 102 103 # of classes (log scale) N-ImageNet (Ours) N-MNIST N-Caltech101 MNIST-DVS Poker-DVS CIFAR10-DVS ASL-DVS N-CARS PRED18 UCF-50 Caltech-256 DVSGesture RoShamBo17 SL-Animals-DVS Figure 2: Comparison of N-ImageNet (the red star in the upper-right corner) against existing datasets (distributed in the area shaded in gray) in terms of dataset size and class count. Note that the axes are displayed in log scale. have been proposed, and can be categorized depending on whether the recordings consist of real-world objects or monitor-displayed images. N-Cars [50] and ASL-DVS [4] contain event data obtained by directly capturing various real-world objects. Such datasets typically have a smaller number of labels compared to datasets acquired from monitor-displayed images since it is dif\ufb01cult to acquire \ufb01ne-grained labels of real-world recordings. Datasets such as N-MNIST, N-Caltech101 [39], MNISTDVS [49] and CIFAR10-DVS [27], which belong to the latter category, are recorded by moving an event camera around monitors displaying images of well-known datasets like MNIST [25] and Caltech101 [11]. Monitor-generated event camera datasets can be considered synthetic in some aspects, but contain abundant labels from their original image datasets, which are bene\ufb01cial for training and evaluating event-based object recognition algorithms. We follow the data acquisition procedure of these datasets, and generate a large-scale, \ufb01ne-grained dataset by transforming ImageNet [46] data to event camera recordings. Furthermore, our experiments demonstrate that leveraging such a large-scale event dataset greatly improves the performance of event-based object recognition algorithms. Event-Based Object Recognition Event camera data exhibits unique characteristics, namely asynchronicity and sparsity. Existing event-based object recognition algorithms could be classi\ufb01ed by whether such characteristics are utilized. A large body of literature proposes models that perform asynchronous updates in a sparse manner [26, 38, 40, 35, 43, 2, 24, 50, 32, 5]. Recently proposed MatrixLSTMs [32] exhibit competitive results when evaluated on popular event-based object recognition benchmarks such as N-Cars [50] and N-Caltech101 [11]. MatrixLSTMs handle streams of event data using sparse updates of LSTMs [16], where an adaptive \u2018grouping\u2019 opera\ftion is used to update the network outputs asynchronously. Algorithms that avoid the direct exploitation of the sparse and asynchronous nature of event data also prevail [14, 57, 13, 37, 17, 22, 4]. These methods place more weight on performance, typically surpassing their sparse and asynchronous competitors in accuracy when tested on various object recognition datasets. Event Spike Tensors (EST) [14, 13] aggregate events using a learned kernel, resulting in a highly versatile representation of event data. Relatively simple encodings of event camera data have also been proposed, as in EV-FlowNet [61] and EV-Gait [57], where events are accumulated to form a four-channel image consisting of event counts and the newest timestamps of each pixel. We evaluate the performance of the aforementioned object recognition algorithms on N-ImageNet. Due to its scale and label diversity, N-ImageNet is capable of providing reliable assessments on different event-based object recognition algorithms. 
Robustness in Event-Based Object Recognition Event cameras are known to successfully function in low-light conditions and dynamic scenes. However, the robustness of event-based classi\ufb01ers under such conditions is a relatively unstudied problem. Most existing works [59, 42, 50] either only display qualitative results or experiment with synthetic adversaries. Sironi et al. [50] shows the robustness of their proposed event representation in various real-world objects, but the analysis is only made qualitatively. Wu et al. [59] quantitatively investigates the effect of noise on event-based classi\ufb01ers, but the experiments are conducted on synthetic noise. Although there exist previous works such as Wang et al. [57] where real event camera noise is investigated, the experiments are carried out with a static camera under constant, ambient lighting. The N-ImageNet variants recorded under diverse lighting and camera trajectories enable realistic, quantitative assessment on the robustness of event-based object recognition algorithms. 3. Method Overview 3.1. Dataset Acquisition N-ImageNet Dataset Following the footsteps of previous image-to-event conversion methods [39, 27, 49], we acquire N-ImageNet from an event camera that observes monitordisplayed images from ImageNet [46]. Since events are triggered by pixel intensity changes, external stimuli are solicited to generate event data. One viable solution for generating constant stimuli would be to keep the camera still and make the displayed images move, as in [27, 49]. However, as pointed out by Orchard et al. [39], such methods suffer from artifacts induced by the refresh mechanisms of monitors. Removing such artifacts requires an additional postprocessing step in the frequency domain [27, 49], which is Figure 3: Custom hardware designed to convert RGB images to event camera data. costly due to the immense number of images in ImageNet, and may alter the inherent subtleties in the raw measurements. Instead, we opt to move the event camera around an LCD monitor displaying still images from ImageNet [46], as proposed in Orchard et al. [39]. We devise custom hardware to trigger perpetual camera motion as shown in Figure 3. The device consists of two geared motors connected to a pair of perpendicularly adjacent gear racks where the upper and lower motors are responsible for vertical and horizontal motion, respectively. Each motor is further linked to a programmable Arduino board [3], which can control the camera movement. In all our experiments, the event camera is vibrated vertically and horizontally on a plane parallel to the LCD monitor screen. The amplitude and frequency of vibration are controlled by the program embedded in the Arduino microcontroller [3]. Once the device is prepared, the event camera is mounted at the forefront of the device for recording event data. We use the 480\u00d7640 resolution Samsung DVS Gen3 [51] event camera for recording event sequences, and a 24-inch Dell P2419H LCD monitor for displaying RGB images from ImageNet [46]. The acquisition process is performed in a sealed chamber, to ensure that no external light will adulterate the recorded events. Under this setup, both the training and validation splits of ImageNet are converted to NImageNet. Thanks to the large scale and label granularity of ImageNet, N-ImageNet serves as a challenging benchmark for event-based object recognition and signi\ufb01cantly boosts the performance of event-based classi\ufb01ers via pre-training. 
These amenable properties of N-ImageNet will be further examined in Section 4.1. Datasets for Robustness Evaluation We additionally present a benchmark to quantitatively assess the robustness of event-based object recognition algorithms. Event representations are vulnerable to alternations in camera motion or illumination, as even small changes can trigger a wide variety of event sequences. We simulate such changes using the programmable camera trajectory and monitor brightness \fDataset Frequency (Hz) Amplitude (mm) Shape Original 5 3 Square \u21ba Validation 1 8.33 4.5 Vertical Validation 2 5 3 Horizontal Validation 3 5 6 Vertical Validation 4 5 6 Horizontal Validation 5 5 6 Square \u21ba Table 2: Validation datasets made with various camera motion. \u21baindicates counterclockwise rotation. The amplitudes of square trajectories represent the lengths of the diagonals. Dataset Brightness Level Gamma Illuminance (lux) Original 50 1 70.00 Validation 6 0 0.7 12.75 Validation 7 0 1 23.38 Validation 8 100 1 95.50 Validation 9 100 1.5 111.00 Table 3: Validation datasets generated under various brightness conditions. from our hardware setup, and generate variants of the NImageNet validation split. We use these validation datasets to quantitatively evaluate the performance degradation of existing object recognition algorithms in Section 4.2. Speci\ufb01cally, we present nine validation datasets to test the robustness of event-based object recognition algorithms amidst changes in motion or illumination. Table 2 lists \ufb01ve datasets with different camera trajectories. The frequency, amplitude, and trajectory shape of the camera movement are modi\ufb01ed with the Arduino microcontroller [3]. Table 3 shows the monitor con\ufb01gurations of four additional validation datasets, designed to examine the effect of scene brightness changes on event-based classi\ufb01ers. Note that Validation 6 and 9 datasets are intended to model scenes with exceedingly low/high illumination by using extreme monitor gamma values. We also report the illuminance measured at the position of the event camera since the same numerical con\ufb01gurations of different monitors may yield distinct displayed results. In all cases, other con\ufb01gurations are kept the same as the original N-ImageNet dataset. 3.2. Robust Event-Based Object Recognition As many event-based classi\ufb01ers are typically trained and tested in datasets captured in prede\ufb01ned conditions [39, 4], the performance degradation is inevitable in challenging scenarios that arise in real-life applications. In Section 4.2 we evaluate the robustness of existing object recognition algorithms with N-ImageNet variants and demonstrate that external changes indeed incur performance degradation. Even the best-performing event-based algorithms [14, 5] are fragile to diverse motion and illumination variations. Ironically, the main bene\ufb01ts of event cameras include the fast temporal response and high dynamic range. We introduce Discounted Sorted Timestamp Image (DiST), which is designed speci\ufb01cally for robustness against changes in camera trajectory and lighting. The intuition behind DiST is twofold: (i) noisy events incurred from severe illumination can be suppressed with the evidence of the spatio-temporal neighborhood, and (ii) relative timestamps are robust against the speed of camera motion compared to raw timestamp values. 
A typical output of an event camera is a sequence of events E = {e_i = (x_i, y_i, t_i, p_i)}, where e_i indicates a brightness change with polarity p_i ∈ {−1, 1} at pixel location (x_i, y_i) at time t_i. Given an event camera of spatial resolution H × W, let N_ρ(x, y, p) denote the set of events in E of polarity p, confined within a spatial neighborhood of size ρ around (x, y). For example, N_0(x, y, p) would indicate the set of all events of polarity p at the pixel coordinate (x, y). DiST aggregates a sequence of events E into a 2-channel image S ∈ R^{H×W×2}.
DiST initiates its representation as the timestamp image [41]. The timestamp image is a 2-channel image, which stores the raw timestamp of the newest event at each pixel, namely S_o(x, y, p) = T_new(N_0(x, y, p)), where T_new(·) indicates the newest timestamp. We first define the Discounted Timestamp Image (DiT) S_D, obtained by subtracting the event occurrence period of the neighborhood from the newest timestamp in S_o:
S_D(x, y, p) = S_o(x, y, p) − αD(x, y, p).  (1)
Here α is a constant discount factor and D(x, y, p) is the neighborhood event occurrence period,
D(x, y, p) = [T_new(N_ρ(x, y, p)) − T_old(N_ρ(x, y, p))] / C(N_ρ(x, y, p)),  (2)
where T_old(·) is defined similarly to T_new(·) and ρ > 1. For each pixel, the discount D(x, y, p) represents the time range (T_new(·) − T_old(·)) in which events are generated from the neighborhood N_ρ, normalized by its event count C(·).
The discount mechanism is designed to be robust against event camera noise. Figure 4 illustrates the typical patterns for event sequences (left) and the resulting representation of DiST that resolves the discrepancies due to noise (right) in the 1-D case. Two dominant factors of event camera noise are background activities and hot pixels [9, 12]. Background activities are low-frequency noise [12] more likely to occur in low-light conditions, caused by transistor leak currents or random photon fluctuations [9, 36, 21]. Figure 4 (a) indicates background activities, whose low frequency results in higher discounts from Equation 2. Hot pixels are triggered by the improper resets of events [9, 12], and are often spatially isolated [12] (Figure 4 (c)). Such pixels have
To illustrate, consider the sequences of one-dimensional events (x,t,p) displayed in Figure 5. While the absolute values of the timestamps [41] are directly affected by the speed change, DiST remains constant. We expect DiST to serve as a baseline representation for robust event-based object recognition. Its robustness against camera trajectory and scene illumination changes will be quantitatively investigated in Section 4.2. 4. Experimental Results In this section, we empirically validate various properties of N-ImageNet. With its large scale and label diversity, N-ImageNet is not only a useful benchmark to assess various event-based representations, but can also boost the (a) Fast camera motion. (b) Slow camera motion. Figure 5: Robustness of DiST against event camera speed. Similar to Figure 4, one-dimensional events are displayed and red events denote the newest event. While the timestamps of red events vary with the camera speed, their relative timestamps obtained from sorting remain constant. performance of existing algorithms via pre-training (Section 4.1). In Section 4.2, we investigate the robustness of event-based classi\ufb01ers against diverse external conditions, along with the ef\ufb01cacy of our proposed event representation, DiST. Event Representations for Object Recognition We introduce the event representations used throughout our experiments. The representations are inputs to the object recognition algorithms, while the backbone classi\ufb01er is \ufb01xed to ResNet34 [15]. This is because most event-based object recognition algorithms [14, 5] only differ in the input event representation and share a similar classi\ufb01cation backbone. Eleven event representations are selected for evaluation on N-ImageNet and its variants, as shown in Table 4. Two of the representations are learned from the data, namely MatrixLSTM [5] and Event Spike Tensor (EST) [14]. After the events are passed through LSTM [16] for MatrixLSTM and multilayer perceptrons for EST, the outputs are further voxelized to form an image-like representation. The remaining representations can be classi\ufb01ed based on how the timestamps are handled. Two of these representations discard temporal information, and only use the locations of events. Binary event image [7, 37] is generated by assigning 1 to pixels with events, and 0 to others. Event histogram [31] is an extension of the former, additionally keeping the event count of each pixel. Three representations use the raw timestamps to leverage temporal information. Timestamp image [41] caches the newest timestamp for each pixel. Event image [57, 61] is a richer representation that concatenates the event histogram [31] and timestamp image [41]. Time surface [24] extends the timestamp image [41] in a slightly different manner, by passing each timestamp through an exponential \ufb01lter. This allows the surface to place more weight on the newest events, which enhances the sharpness of the representation. 
The aforementioned representations with raw timestamps can be vulnerable to camera speed changes, as \fRepresentation Description # of Channels Accuracy(%) MatrixLSTM [5] Learned with LSTM 3 32.21 Event Spike Tensor [14] Learned with MLP 18 48.93 Binary Event Image [7] Binarized event occurence 2 46.36 Event Histogram [31] Event counts 2 47.73 Event Image [57] Event counts and newest timestamps 4 45.77 Time Surface [24] Exponential of newest timestamps 2 44.32 HATS [50] Aggregated newest timestamps 2 47.14 Timestamp Image [41] Newest timestamps 2 45.86 Sorted Time Surface [1] Sorted newest timestamps 2 47.90 DiT Discounted newest timestamps 2 46.1 DiST Sorted discounted timestamps 2 48.43 Table 4: N-ImageNet validation accuracy evaluated on various event representations. pointed out in Section 3.2. We further include representations targeted to enhance robustness. HATS [50] improves the robustness against event camera noise. Speci\ufb01cally, the outliers are smoothed by aggregating neighboring pixels of the time surface [24]. We use a slightly modi\ufb01ed version of HATS [50] for more competitive results, and the details are provided in the supplementary material. Surface of active events with sort normalization [1], which we will refer to as sorted time surface, is robust against camera speed changes as the sorting generates relative timestamps. DiST, as explained in Section 3.2, is robust against both event camera noise and speed changes. We additionally report results on the variant of DiST without sorting, namely the Discounted Timestamp Image (DiT). Evaluation on DiT can shed light on the importance of the sorting operation in DiST. Implementation Details All inputs are reshaped into a 224 \u00d7 224 grid to restrict GPU memory consumption and shorten inference time. All models are trained from scratch with a learning rate of 0.0003, except for the learned representations (MatrixLSTM [5] and EST [14]). The weights are initialized with ImageNet pre-training for these representations to fully replicate the training setup speci\ufb01ed in the original works. We train these models with a learning rate of 0.0001. Further information regarding experimental details is provided in the supplementary material. 4.1. Evaluation Results with N-ImageNet Event-based Object Recognition Table 4 displays the evaluation results of existing event-based object recognition algorithms on N-ImageNet. The accuracy of the best performing model on N-ImageNet (48.9%), is far below that of the state-of-the-art model on ImageNet [10] (90.2%). The clear gap indicates that mastering N-ImageNet is still a long way to go. Other examined models also exhibit a stark contrast in their reported accuracy on existing benchmarks and performance on N-ImageNet. For example, the test accuracy of the event histogram [32] on N-Cars is 94.5%, and the test accuracy of MatrixLSTM [5] on N-Caltech101 is 86.6%. These models show a validation accuracy of around 30 \u223c50% in N-ImageNet, further supporting the dif\ufb01culty of N-ImageNet. N-ImageNet is a large-scale, \ufb01ne-grained benchmark (Table 1) compared to any other existing benchmark and the inherent challenge will foster development in event classi\ufb01ers that could readily function in the real world. Assessment on Representations The evaluation of various representations on N-ImageNet allows us to make a systematic assessment of different design choices to handle event-based data. 
Interestingly, the performance of representations without temporal information (binary event image [37, 7] and event histogram [31]) are superior to representations directly using raw timestamps (timestamp image [41], time surface [24], and event image [57, 61]). The wide variations in raw timestamps deteriorate the generalization capacity of representations that directly utilize this information. This notion is further supported by the fact that the representations using relative timestamps (sorted time surface [1] and DiST) outperform those using raw timestamps. It should also be noted that our proposed robust representation, DiST, successfully generalizes to large-scale datasets such as N-ImageNet, and shows performance on par with strong learned representations. EST [14] is the best performing model in Table 4, capable of learning highly expressive encodings of event data, thanks to its event aggregation using multilayer perceptrons. The performance of DiST is very close to that of EST, although it does not incorporate any learnable module in its event representation. The suppression of noise from discounting, and the resilience to variations in camera speed from using relative timestamps help DiST to generalize. If we either omit the discount (sorted time surface [1]) or the sorting mechanism (DiT), the performance is inferior to DiST, indicating the importance of the discounting and sorting operations. We further investigate the robustness of DiST in Section 4.2. \fDataset N-Cars CIFAR10-DVS ASL-DVS N-Caltech101 # of classes 2 10 24 101 Random 90.80 62.57 29.57 68.12 ImageNet 91.48 70.36 53.43 80.88 N-ImageNet 94.73 73.72 58.28 86.81 Table 5: Test accuracy of N-ImageNet pretrained models on existing event-based object recognition benchmarks, compared with ImageNet pretraining and random initialization. Figure 6: Test accuracy of N-ImageNet pretrained models in resource constrained settings. Each model is trained for 5 epochs with varying amounts of training data. Ef\ufb01cacy of N-ImageNet Pre-Training Apart from being a challenging benchmark, the main motivation of NImageNet is to provide a large reservoir of event data to pretrain powerful representations for downstream tasks, echoing the role of ImageNet in conventional images. We validate the effectiveness of N-ImageNet pre-training by observing the capacity to generalize in new, unseen datasets. Four standard event camera datasets are used for evaluation: N-Caltech101 [39], N-Cars [50], CIFAR10-DVS [27], and ASL-DVS [4]. For seven event representations from Table 4, ResNet34 [15] is pre-trained on N-ImageNet and compared with ImageNet pre-training and random initialization. The seven representations selected are as follows: binary event image [7], event histogram [31], timestamp image [41], event image [57], time surface [24], sorted time surface [1], and DiST. In experiments explicated below, we report the averaged test accuracy of the seven representations on each dataset. Additional details about the experimental setup are speci\ufb01ed in the supplementary material. Table 5 displays the average test accuracy after training a \ufb01xed number of epochs for different initialization schemes. 
Note that we only use 800 samples from ASL-DVS [4] Factor Trajectory Brightness Change Amount Small Big Small Big Validation Dataset Number 1, 2 3, 4, 5 7, 8 6, 9 MatrixLSTM [5] 33.00 25.62 28.91 23.60 Event Spike Tensor [14] 36.97 32.35 24.89 22.36 Binary Event Image [7] 36.68 31.82 30.94 25.54 Event Histogram [31] 38.72 32.49 33.01 27.72 Event Image [57] 36.52 30.96 32.26 27.04 Time Surface [24] 37.82 33.46 34.19 28.74 HATS [50] 38.95 33.28 33.26 28.22 Timestamp Image [41] 38.31 33.70 33.27 28.04 Sorted Time Surface [1] 38.92 33.69 33.47 28.38 DiT 38.21 33.61 32.66 28.42 DiST 40.88 35.85 35.87 30.88 Table 6: Mean accuracy measured on N-ImageNet variants with changes in camera trajectory and brightness. for training, as using the whole dataset made all model performances saturate near 99%. Networks initiated with N-ImageNet pre-trained weights outperform models from other initialization schemes by a large margin. Notably, the bene\ufb01ts of pre-training intensify as the number of classes in the datasets increases. This could be attributed to the \ufb01ne-grained labels of N-ImageNet, which help models to generalize in challenging datasets where numerous labels are present. Furthermore, N-ImageNet pre-trained models outperform its competitors in N-Cars and ASL-DVS, which are recordings of real-world objects. This indicates that although N-ImageNet contains events from monitor displayed images, models pre-trained on it could seamlessly generalize to recognizing real-world objects. As a practical extension to the previous experiment, we validate the generalization capability of N-ImageNet pretrained models under resource-constrained settings. We train the same set of models for 5 epochs with initialization schemes from the previous experiment, under varying numbers of training samples. Figure 6 shows that N-ImageNet pre-training incurs a large performance improvement across all four evaluated datasets. The performance gain is further increased when the number of training samples is small. Such results imply that N-ImageNet pre-training provides strong semantic priors that enable object recognition algorithms to quickly adapt to new datasets, even with a few labeled samples. 4.2. Robust Event-Based Object Recognition Validation Accuracy of N-ImageNet Variants Using the N-ImageNet variants created under various external conditions as described in Section 3.1, we examine the robustness of event-based object recognition algorithms. Table 6 \fshows the validation accuracy averaged over the trajectorymodi\ufb01ed datasets and brightness-modi\ufb01ed datasets. All models displayed in Table 4 are evaluated on the NImageNet variants. We group datasets according to the variation factor, i.e., brightness and trajectory, and the amount of discrepancy between the original setup and the modi\ufb01ed setup. All tested models exhibit a consistent deterioration in performance when evaluated on the N-ImageNet variants. Furthermore, the amount of performance degradation intensi\ufb01es as the amount of environment change increases, as shown in Table 6. These observations imply that many event-based object recognition algorithms are biased on their training setups, and thus fail to fully generalize in challenging, unseen environments. In spite of the consistent performance drop however, DiST outperforms its competitors under all external variations shown in Table 6. Notably, the ablated versions of DiST, i.e. sorted time surface [1], DiT, and timestamp image [41], all perform poorly compared to DiST. 
Along with the validation accuracy on the original N-ImageNet, this reinforces the necessity of both the discounting and sorting modules of DiST. DiST's capacity to generalize in unseen environmental conditions demonstrates its effectiveness as a robust representation for event-based object recognition. Representation Consistency To further investigate the robustness of DiST, we quantify the content-wise consistency of various event representations. Seven representations from Table 4 are compared against DiST. The other three representations (MatrixLSTM [5], EST [14], event image [57]) are omitted as they have a different number of channels, which may incur unfair comparison. For each representation, we assess the structural similarity index measure (SSIM) between the original representation from N-ImageNet and the representation obtained from the N-ImageNet variants. To further elaborate, suppose Eorig and Evar are event sequences derived from the same image in the ImageNet validation dataset. Let Rorig and Rvar be the event representations obtained from Eorig and Evar respectively. We report SSIM(Rorig,Rvar), which measures how consistent each representation is amidst external condition changes. As displayed in Figure 7, the contents of DiST are more consistent than other competing representations, which can be observed from its highest SSIM value. The contribution of discounting is greater than that of sorting in representation consistency, which can be seen from the SSIM difference of DiT and DiST. However, sorting is crucial for robust object recognition, as can be observed from Table 6, where a clear gap exists between DiT and DiST. Thus, the interplay between discounting and sorting as a whole enhances the robustness of DiST, further leading to improved performance in N-ImageNet variants. Figure 7: Structural similarity measure (SSIM) between the representations from N-ImageNet and its variants, grouped by changes in motion and brightness. High SSIM indicates that the structure of the representation is stable under external variations. Note that 'Time' and 'Exp' denote timestamp image [41] and time surface [24], respectively. Apart from the robustness of DiST, it must be noted that the N-ImageNet variants serve as the first benchmark for quantifying robustness in event-based classifiers. Although DiST shows a consistent improvement over previous models in robustness, it does not fully recover the original N-ImageNet validation accuracy reported in Table 4. We expect the N-ImageNet variants to spur future work in robust representations for event-based object recognition. 5." + }, + { + "url": "http://arxiv.org/abs/2110.07171v1", + "title": "SGoLAM: Simultaneous Goal Localization and Mapping for Multi-Object Goal Navigation", + "abstract": "We present SGoLAM, short for simultaneous goal localization and mapping,\nwhich is a simple and efficient algorithm for Multi-Object Goal navigation.\nGiven an agent equipped with an RGB-D camera and a GPS/Compass sensor, our\nobjective is to have the agent navigate to a sequence of target objects in\nrealistic 3D environments. Our pipeline fully leverages the strength of\nclassical approaches for visual navigation, by decomposing the problem into two\nkey components: mapping and goal localization. The mapping module converts the\ndepth observations into an occupancy map, and the goal localization module\nmarks the locations of goal objects. 
The agent's policy is determined using the\ninformation provided by the two modules: if a current goal is found, plan\ntowards the goal and otherwise, perform exploration. As our approach does not\nrequire any training of neural networks, it could be used in an off-the-shelf\nmanner, and amenable for fast generalization in new, unseen environments.\nNonetheless, our approach performs on par with the state-of-the-art\nlearning-based approaches. SGoLAM is ranked 2nd in the CVPR 2021 MultiON\n(Multi-Object Goal Navigation) challenge. We have made our code publicly\navailable at \\emph{https://github.com/eunsunlee/SGoLAM}.", + "authors": "Junho Kim, Eun Sun Lee, Mingi Lee, Donsu Zhang, Young Min Kim", + "published": "2021-10-14", + "updated": "2021-10-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "main_content": "Introduction Visual navigation, the task of navigating a 3D environment using visual information, is a crucial component for autonomous agents. The development of agents that could \ufb01nd an object in novel environments or move towards a speci\ufb01ed goal location via visual navigation can serve as a building block for higher-level services ranging from medical assistant robots to home robots. Further, the advancements in 3D sensing technology have made a vast range of sensors available for mobile robots, greatly enhancing the range and quality of utilizable information. In this paper, we tackle the problem of Multi-Object Goal navigation. Given an agent equipped with an RGB-D camera and a GPS/Compass sensor, the goal of Multi-Object Goal navigation is to plan policies for visiting a designated set of objects in order. Multi-object goal navigation is an extension of the classical object goal navigation task( Zhu et al. [2016])), where multiple instead of single object goals should be found. Such additional dif\ufb01culty aims to rigorously evaluate two main components in visual navigation: goal localization and mapping. The task assesses the agent\u2019s ability to locate the target objects and generate appropriate maps that can cache useful spatial information. We present SGoLAM, a simple and ef\ufb01cient algorithm for Multi-Object Goal navigation. The key idea is to localize goals and simultaneously map the environment using classic projective geometry in a modular fashion. Each module in SGoLAM addresses a speci\ufb01c task such as goal localization, mapping, and planning. The bene\ufb01t of SGoLAM is that the approach can be deployed on an agent without any training procedure, in contrast to learning-based methods. Many end-to-end learning-based approaches which directly learn the navigation policy from sensory data lack the generalizability over various arXiv:2110.07171v1 [cs.CV] 14 Oct 2021 \fenvironments and transferability over tasks. SGoLAM presents work robust to previously unseen RGB-D inputs and recyclable modules for other navigation tasks. We validate SGoLAM for Multi-Object Goal navigation in Habitat simulator ( Abhishek Kadian* et al. [2019]) with 3D indoor scene datasets from Matterport3D ( Chang et al. [2017]). Our approach signi\ufb01cantly outperforms the baseline methods presented in the CVPR 2021 MultiON challenge and performs on par with the state-of-the-art learning-based approaches. 2 Related Work 2.1 Embodied AI & Visual Navigation Enabled by the availability of realistic 3D environments and simulation platforms for robotic agents, signi\ufb01cant progress has been made in the \ufb01eld of embodied AI in the past few years. 
Creative work on visual navigation has been done with some common themes like egocentric perception, long-term planning, learning from interaction, and holding a semantic understanding of an environment. As an output of this work, a plethora of task de\ufb01nitions have been made and then converged to several common de\ufb01nitions regarding the nature of the tasks (Anderson et al. [2018a]). Navigation tasks can be distinguished in many dimensions but the nature of a task depends mostly on the type of goal. In PointGoal navigation, the agent must navigate to a speci\ufb01c location given relative to where the agent is currently positioned (e.g., (150,300)). PointGoal task in simulated environments with noiseless sensors has recently been solved with near-perfect performance (Wijmans et al. [2020]) and shifted focus to real world deployment with noisy sensors (Abhishek Kadian* et al. [2019], Ramakrishnan et al. [2020], Chaplot et al. [2020b]). In ObjectGoal navigation, the agent must navigate to an object of a speci\ufb01c category drawn from a prede\ufb01ned set (e.g., \u2019chair\u2019). Despite being formulated in early work (Zhu et al. [2016]), the task remains far from being solved. There have been various approaches tackling ObjectGoal navigation by building episodic semantic map (Chaplot et al. [2020a]), exploiting object relationships (Qiu et al. [2020]), and by learning spatial context (Druon et al. [2020]). In AreaGoal navigation, the agent must navigate to a room of a speci\ufb01ed category (e.g., \u2019kitchen\u2019). AreaGoal has been formulated \ufb01rst in (Anderson et al. [2018b]) as visually-grounded natural language navigation task and has been dealt with formulating agent under Bayesian \ufb01ltering (Anderson et al. [2019]) and contextual global graphical planner (Deng et al. [2020]). As the performance of proposed methods in each task is increasing incredibly fast, recent discussion in the area is starting to tackle complex long-horizon tasks. Room-scale environments are expanded to building-scale environments and single goal tasks are expanded to sequential goals. Considering the practical use of an indoor embodied robot, Multi-Object Goal navigation is one of the most important tasks in embodied AI. Fang et al. [2019] proposes \u2019object-search\u2019 where agent must navigate to some categories of object with no speci\ufb01c order. In a static scene, the position and the number of goals would be \ufb01xed which limits task complexity and property. Beeching et al. [2019] proposes an \u2019ordered k-item\u2019 task where the agent must navigate to a set of items in a speci\ufb01ed order \ufb01xed across episodes. Wani et al. [2020] proposes MultiON task where an agent must navigate to arbitrary colored objects given as an episode-speci\ufb01c ordered set. Episode-speci\ufb01city requires grounding object class labels to their visual appearance. The ordered aspect requires an ef\ufb01cient strategy to store information of possible future goals. To this extent, we choose to tackle Multi-Object Goal navigation following the de\ufb01nition of the MultiON task. 2.2 Mapping in Navigation Long-horizon navigation task in a complex realistic environment is considered beyond the capability of existing memory-less systems. Including revisits to SLAM methods, there is a growing interest in extending the agents with memory structures. A simple implicit type of memory has been studied in reinforcement learning settings with memory-base policy using RNN (Oh et al. 
[2016], Mirowski et al. [2017], Mousavian et al. [2019]). Drawbacks of such policy are apparent as merging observations into lower dimension state vectors can easily lose information and optimizing over long sequences is dif\ufb01cult in backpropagation through time. External memory structure can be categorized broadly into topological maps and spatial maps. Topological map(Savinov et al. [2018], Mirowski et al. [2017]) is a more generic type of memory which stores landmarks(e.g., speci\ufb01c input frame) as nodes and their connectivity as edges. Spatial maps are mostly in the form of 2D grids where dimensions align with 2 \fFigure 1: Overview of SGoLAM (simultaneous goal localization and mapping). In the mapping module, RGBD information is \ufb01rst processed to form an occupancy map. In the goal localization module, goals are detected and further projected to form a goal map. The information from these to modules is fused to determine a policy: plan toward goal if goal is found and explore otherwise. an environment\u2019s top-down layout. Starting with SLAM which builds the very basic form of spatial maps, there have been extensive studies on the use of spatial map(Gordon et al. [2018], Henriques and Vedaldi [2018], Zhang et al. [2020], Chaplot et al. [2020b], Chaplot et al. [2020a]) and has recently shown the state-of-the-art performance on the ObjectGoal navigation tasks(Chaplot et al. [2020a]). Wani et al. [2020] provides baseline agents for Multi-Object Goal navigation, each adopted from representative methods utilizing either implicit or external memories. However, all baselines are based on learning-based methods and the best performing method adopted from Henriques and Vedaldi [2018] requires a heavy, complex neural image feature map. In contrast, our method does not need any training and has a simple, straightforward structure while outperforming all of the baselines. 3 Method We propose a modular navigation approach, SGoLAM, composed of three components: Mapping module, Goal Localization module, and a Policy module. The overview of our approach is visualized in Figure 1. Given the agent\u2019s pose from a noiseless GPS and Compass sensor, the mapping module creates the map of the environment by back-projecting the 3D coordinates from the current depth observation. The Goal Localization module detects the target objects from the current RGB observation and creates a goal map by back-projecting the aligned goal location from the depth observation. The Policy module takes in the predicted map and the agent pose from the Mapping module and goal locations from the Goal Localization module when the targets are detected. Based on the localization result, the Policy module outputs actions to explore the unseen area or to navigate to the goal locations. 3.1 Task Description MultiON task extends the ObjectGoal navigation task, where the objective is to \ufb01nd multiple target objects in a given sequential order. We follow the setup from CVPR 2021 MultiON challenge(Wani et al. [2020]) . A set of three target objects are randomly sampled without replacement from 8 cylinders with identical shapes but with different colors: red, green, blue, cyan, magenta, yellow, black, white. The embodied agent is shaped with a cylindrical body of height 1.5m and radius 0.1m and is equipped with an RGB-D camera and a noiseless GPS/Compass sensor. Each episode begins with the agent randomly positioned in an unseen environment. 
The agent takes one of three navigational actions (Move Forward, Turn Right, and Turn Left) and indicates goal discovery by calling a Found action within 1.5m from its target object. The episode terminates with success when all goals are properly discovered or with failure if the agent incorrectly calls the Found action. 3 \f3.2 Mapping The Mapping module maintains the agent\u2019s position (x, y, \u03b8) and a global occupancy map O \u2208 RM\u00d7M\u00d71. Given a depth image D, the module projects the 3D coordinates to an egocentric topdown grid map mt \u2208RN\u00d7N\u00d71 which indicates the probability of the corresponding location being vacant, occupied or unexplored at each time step. Note that M and N denote the size of the global and egocentric map. Based on the current agent\u2019s pose and orientation (x, y, \u03b8) , the egocentric map mt is transformed to an allocentric top-down grid map at \u2208RM\u00d7M\u00d71. The allocentric top-down map is overlaid on the global occupancy map Ot\u22121 from the previous time step to generate a new global occupancy map Ot. The global map is further used in the Policy module. 3.3 Goal Localization Given an RGB image I and depth map D, the Goal Localization module caches the location of target objects. To prevent redundant exploration, all target objects are localized regardless of whether or not it is the current goal. The module initiates by detecting target objects, namely colored cylinders, from the image I. Regions in the image that share a similar color with the target objects are marked and further projected to form a goal map, which is a map that stores the 2D location of localized goals. Speci\ufb01cally, suppose that from n cylinders with color C = {c1, . . . , cn}, the agent must \ufb01nd k cylinders with color S = {ci1, . . . , cik}. For each ci \u2208C, pixel locations (x, y) in the image that satisfy \u2225I(x, y) \u2212ci\u2225< \u03f5 for a small constant \u03f5 are \ufb01rst marked as putative target objects. As such naive thresholding is prone to false positives, we further apply connected components labeling(Samet and Tamminen [1988]) and remove regions whose component size is below a threshold \u03b4. The \ufb01ltered pixel locations are stored in a M \u00d7 M goal map G \u2208RM\u00d7M\u00d7n, instantiated as a top-down grid map, where the location of each pixel within the goal map is determined using the depth image D. Note that n grid maps are stored in G, with each map storing the goal location for a speci\ufb01c target object. The goal map G is further used to selecting the appropriate policy for Multi-Object Goal navigation. 3.4 Policy The agent policy is determined in the Policy module using the occupancy map O and goal map G. If the current object goal is localized within G, the agent plans toward the goal, and otherwise the agent performs exploration. To elaborate, suppose the current object goal is the cylinder with color ci. The agent \ufb01rst inspects the goal map corresponding to the ith object goal. If there are non-zero pixels within the map, the agent performs actions to move closer to the goal location, which is estimated as the mean pixel location of all non-zero pixels. The speci\ufb01c actions are planned with the D* algorithm(Stentz [1995]), which is a variant of the A* algorithm that is suitable for dynamic environments where the map constantly changes. 
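The color-thresholding and component-filtering test of the Goal Localization module (Section 3.3 above) can be sketched as follows. The use of scipy.ndimage for connected-component labeling and the assumption that colors are normalized to [0, 1] are ours; the text only fixes the thresholds epsilon and delta (the default values below are the ones reported later in Section 4.1).

```python
# Sketch of Sec. 3.3: mark pixels whose color is within eps of a target cylinder
# color, then keep only connected components of at least delta pixels.
import numpy as np
from scipy import ndimage

def localize_goal(rgb: np.ndarray, target_color: np.ndarray,
                  eps: float = 0.001, delta: int = 50) -> np.ndarray:
    """Return a boolean (H, W) mask of pixels kept as putative target objects."""
    mask = np.linalg.norm(rgb - target_color[None, None, :], axis=-1) < eps
    labels, num = ndimage.label(mask)                  # connected components
    if num == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, num + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= delta)   # component labels are 1-based
    return np.isin(labels, keep_labels)
```

The surviving pixels would then be back-projected with the aligned depth map into the corresponding channel of the M x M goal map G, as described above.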
If the map is left empty, the agent explores the environment with the frontier-based method(Yamauchi [1997]) that leverages information from O until the current object goal is found. The frontier-based method is a classical method for robot navigation, where the agent is encouraged to move towards regions in the occupancy map with the largest intersections with the unexplored area. Note that both the mapping and goal localization modules operate simultaneously with the execution of agent policies. This allows SGoLAM to rapidly detect target object goals and minimize redundant exploration. 4 Experimental Results 4.1 Experimental Setup SGoLAM is mainly implemented using PyTorch(Paszke et al. [2019]), and is accelerated with a single RTX 2080 GPU. All evaluations are performed on the Habitat simulator(Savva et al. [2019]) using the Matterport3D(Chang et al. [2017]) scenes. We set both the goal map and occupancy map size to be M, N = 550. Also, the threshold values used for goal localization is as follows: \u03f5 = 0.001, \u03b4 = 50. We compare SGoLAM against four baselines: NoMap(RNN): An agent that does not utilize any map information. An RNN encoder keeps track of the agent\u2019s history, and the agent makes actions using the hidden state. 4 \fMethod Success Progress PPL SPL NoMap (RNN) 0.05 0.19 0.13 0.03 ProjNeuralMap 0.12 0.29 0.16 0.06 VisMemoryMap 0.43 0.57 0.36 0.27 AuxTaskMap 0.57 0.70 0.45 0.36 SGoLAM 0.62 0.71 0.39 0.34 Table 1: Quantitave comparisons with the baseline methods. Note that all metrics are reported in decimals. SGoLAM performs on par with the state-of-the-art learning-based methods. In terms of the overall success rate, SGoLAM outperforms all the baselines by a large margin. ProjNeuralMap: This agent projects image features onto a top-down grid map, and utilizes this map information for Multi-Object Goal navigation. The image features are obtained by passing RGBD observations through a pre-trained CNN. AuxTaskMap: This agent shares the same neural network architecture as the ProjNeuralMap agent. However, it further \ufb01ne-tunes the feature extractor CNN with three auxiliary tasks: goal location estimation, goal visibility estimation, and goal distance estimation. The agent is trained on these tasks in a supervised manner. VisMemoryMap: This agent utilizes an array of memory vectors to keep track of salient past observations, similar to Memory Networks (Sukhbaatar et al. [2015]). The memory vectors are used to plan trajectories for Multi-Object Goal navigation. 4.2 Metrics We evaluates the performance of our agent with metrics suggested for object goal navigation task and extended for Multi-Object Goal navigation task in previous work (Anderson et al. [2018a],Wani et al. [2020]). Success: Binary indicator of success for each episode. The metric is a success if an agent calls {Found} for all target objects within a threshold distance, in a correct sequence, and within the allowed maximum steps for each episode. The episode fails if the agent reaches its maximum step without \ufb01nding all three objects or incorrectly calls Found. Progress: The proportion of object goals discovered successfully. In one object goal navigation task, progress evaluates the same metric as success. SPL: Extended version of \u2018Success weighted by Path Length\u2019. (Anderson et al. [2018a]). 
SPL = s \u00b7 d/max(p, d) (1) s is the binary success indicator, p is the total number of steps progressed by the agent and d = Pn i=1 di\u22121,i, the total geodesic distance from the starting position through each goal location. PPL: Progress weighted by Path Length. PPL = \u00af s \u00b7 \u00af d/max(p, \u00af d) (2) \u00af s is a progress. d = Pl i=1 di\u22121,i, where l is the number of objects found. p and di\u22121,i are equally de\ufb01ned as before. The metric prevents unfair high weights on shorter trajectory between goals by weighting overall distance based on progress. PPL for one object goal navigation is equal to SPL. 4.3 Performance Analysis Quantitative comparisons with the baselines are shown in Table 1. SGoLAM performs competitively against the learning-based approaches, and SGoLAM outperforms all other methods in metrics that evaluate the overall success rate, namely success and progress. However, for metrics that also consider path ef\ufb01ciency (PPL and SPL), the state-of-the-art learning-based method AuxTaskMap outperforms SGoLAM. This could be attributed to the intrinsic limitations of classical planning-based navigation methods. As noted by Mishkin et al. [2019], classical methods for visual navigation tend 5 \fto show higher success rates than their learning-based counterparts but produce more inef\ufb01cient paths. Although such conclusions were made for point-goal navigation (Wijmans et al. [2020]), a similar conclusion could be made for Multi-Object Goal navigation. Nonetheless, the performance of SGoLAM is not far behind that of learning-based approaches without the help of training, making it amenable for fast, effective adaptation in novel environments. 5" + }, + { + "url": "http://arxiv.org/abs/2108.06545v3", + "title": "PICCOLO: Point Cloud-Centric Omnidirectional Localization", + "abstract": "We present PICCOLO, a simple and efficient algorithm for omnidirectional\nlocalization. Given a colored point cloud and a 360 panorama image of a scene,\nour objective is to recover the camera pose at which the panorama image is\ntaken. Our pipeline works in an off-the-shelf manner with a single image given\nas a query and does not require any training of neural networks or collecting\nground-truth poses of images. Instead, we match each point cloud color to the\nholistic view of the panorama image with gradient-descent optimization to find\nthe camera pose. Our loss function, called sampling loss, is point\ncloud-centric, evaluated at the projected location of every point in the point\ncloud. In contrast, conventional photometric loss is image-centric, comparing\ncolors at each pixel location. With a simple change in the compared entities,\nsampling loss effectively overcomes the severe visual distortion of\nomnidirectional images, and enjoys the global context of the 360 view to handle\nchallenging scenarios for visual localization. PICCOLO outperforms existing\nomnidirectional localization algorithms in both accuracy and stability when\nevaluated in various environments. Code is available at\n\\url{https://github.com/82magnolia/panoramic-localization/}.", + "authors": "Junho Kim, Changwoon Choi, Hojun Jang, Young Min Kim", + "published": "2021-08-14", + "updated": "2024-02-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction With the recent advancements in 3D sensing technology, 3D maps of the environment are often available for download [1] or can be easily captured with commodity sensors [10]. 
The 3D map and the accurate location of the user within the map provide crucial information for AR/VR applications or other location-based services. Visual localization is a cheap localization method as it only uses an image input and utilizes the 3D map without additional sensors such as WIFI, GPS, or gyroscopes. However, visual localization is fragile to changes in illumination or local geometric variations resulting from object displacements [39, 47]. Further, with the limited field of view, perspective cameras often fail to regress the camera pose when the observed imFigure 1: Overview of our approach. PICCOLO minimizes a novel, point cloud-centric loss function called sampling loss. After the initialization phase trims off local minima, PICCOLO minimizes the sampling loss with gradient descent. age lacks visual features (e.g., a plain wall) or the scene exhibits symmetric or repetitive structure [46, 44]. Omnidirectional cameras, equipped with a 360\u25cbfield of view, provide a holistic view of the surrounding environment. Hence these cameras are immune to small scene changes and ambiguous local features [51], which gives them the potential to dramatically improve the performance of visual localization algorithms. However, the large field of view comes with a cost: significant visual distortion caused by the spherical projection equation. This makes it difficult to directly apply conventional visual localization algorithms on omnidirectional cameras [22, 13, 49, 16, 8], as many visual localization algorithms [50, 44, 46, 43] do not account for distortion. Furthermore, learning-based approaches are bound to the settings they are trained on, and cannot generarXiv:2108.06545v3 [cs.CV] 2 Feb 2024 \falize to arbitrary scenes. In this paper, we introduce PICCOLO, a simple yet effective omnidirectional localization algorithm. PICCOLO optimizes over sampling loss, which samples color values from the query image and compares them with the point cloud color. We only utilize the color information from point clouds, as it is usually available from raw measurements. With a simple formulation, PICCOLO can be adapted to any scene with 3D maps in an off-the-shelf manner. Further, PICCOLO can work seamlessly with any other point-wise information, such as semantic segmentation labels shown in Figure 5. Sampling loss is point cloud-centric, as every point is taken into consideration. In contrast, conventional photometric loss widely used in computer vision [14, 12] evaluates the color difference at every pixel location [12, 32], thus is image-centric. Our point cloud-centric formulation leads to a significant performance boost in omnidirectional localization, where the image-centric approach suffers from distorted omnidirectional images unless the distortion is explicitly considered with additional processing [48, 16]. The gradient of our proposed sampling loss can be efficiently obtained with differentiable sampling [21]. While differentiable sampling is widely used to minimize discrepancies in the projected space, it is usually part of a learned module [14, 19]. Instead, we utilize the operation in a standalone fashion, making our framework cheap to compute. We further accelerate the loss computation by ignoring the non-differentiable, costly components of projection, such as occlusion handling. These design choices make sampling loss very fast: it only takes 3.5 ms for 106 points on a commodity GPU. 
With the rich information of the global context in point cloud color, our efficient formulation is empirically robust against visual distortions and more importantly, local scene changes. The algorithm quickly converges to the global minimum of the proposed loss function as shown in Figure 1. Equipped with a light-weight search for decent starting points, PICCOLO achieves stable localization in various datasets. The algorithm is extensively evaluated on indoor/outdoor scenes and scenes with dynamic camera motion, scene changes, and arbitrary point cloud rotation. Several qualitative results of our algorithm are shown in Figure 2 and 5. In addition, we introduce a new dataset called OmniScenes to highlight the practicality of PICCOLO. OmniScenes contains diverse recordings with significant scene changes and motion blur, making it the first dataset targeted for omnidirectional localization where visual localization algorithms frequently malfunction. PICCOLO consistently exhibits performance superior to the previous approaches [7, 48] in all of the tested datasets under a fixed hyperparameter configuration, indicating the practical effectiveness of our algorithm. 2. Related Work Before we introduce PICCOLO in detail, we clarify our problem setup and how it differs from previous visual localization algorithms [37, 44, 45, 5]. Then we will further describe recent algorithms proposed for omnidirectional localization. Learning-based Algorithms A large body of recent visual localization literature trains an algorithm on the database of RGB (and possibly depth) images annotated with ground truth poses [45, 5, 28, 43, 44, 24, 46, 34, 18, 35, 36]. While such training facilitates highly accurate camera pose estimation [45, 5, 44, 38], it limits the applicability of these algorithms. To estimate camera pose in new, unseen environments, these algorithms typically require additional pose-labelled samples. In order to develop an algorithm that could be readily used in an off-the-shelf manner, we make a slight detour from these previous setups: the camera pose must be found solely using the point cloud and query image information. One may opt to train these learning-based models [45, 5, 28] with synthesized views from the point cloud as in Zhang et al. [48]. However, it is costly to obtain such rendered views, and one must devise a way to reduce the domain gap between synthesized images and real query images, which is a non-trivial task. Feature-based algorithms Another line of work utilizes visual features for localization [37, 38, 29, 20, 41, 9]. Feature-based localization algorithms require each 3D point to be associated with a visual feature, typically SIFT [30], necessitating a structure-from-motion (SfM) point cloud. Provided an efficient search scheme [38, 37, 29], it is relatively straightforward to establish 2D-3D correspondences by matching features extracted from the query image with those in the SfM model. Our input point cloud is not limited to a structure-frommotion (SfM) point cloud. Due to the developments in RGB-D sensors and Lidar scanners, 3D point clouds of a scene could be obtained in a wide variety of ways other than SfM. These point clouds do not contain associated visual features for feature-based localization. Our setup also does not provide any explicit 2D-3D correspondences, thus disabling the direct usage of PnP algorithms [17, 27]. 
Further, many point clouds and query images used in our experiments contain repetitive structures or regions that lack features as shown in Figure 5. This hinders the usage of sparse local features such as SIFT [30] in our setup, where we report additional difficulties for using SIFT in the supplementary material. To accommodate these challenges, PICCOLO incorporates information from dense RGB measurements, which are easy to obtain in practice, and are robust \fStanford2D-3D-S MPO OmniScenes OmniScenes (w. change) Figure 2: Qualitative results of PICCOLO. We display the input query image (top), and the projected point cloud under the estimated camera pose (bottom). against local ambiguities. Omnidirectional Localization Visual localization on omnidirectional images requires an algorithm specifically designed to account for the unique visual distortion [22, 13, 49, 16]. A number of techniques have been proposed in recent years that tackle visual localization with omnidirectional cameras. These techniques could be divided into two groups, namely algorithms that utilize global optimization techniques and others that leverage deep learning. Campbell et al. [6, 7] proposed a family of global optimizationbased algorithms for camera pose estimation, GOSMA [7] and GOPAC [6], that could be readily applied for omnidirectional localization in diverse indoor and outdoor environments. While these algorithms have solid optimality guarantees and competitive performance, semantic labels should be fed to these algorithms as additional inputs for reasonable accuracy. On the other hand, deep learning-based omnidirectional localization algorithms such as Zhang et al. [48], train neural networks that learn rotationally equivariant features to effectively process omnidirectional images. Although such features enable omnidirectional localization under arbitrary camera rotations, these algorithms cannot generalize to unobserved scenes as they require training on pose-annotated images. We compare the localization performance of PICCOLO with optimization-based localization algorithms GOSMA [7], GOPAC [6], and deep learning-based localization algorithms from Zhang et al. [24, 48, 8]. 3. Method PICCOLO is a point cloud-centric omnidirectional localization algorithm, which finds the optimal SE(3) camera pose with respect to the colored point cloud at which the 360\u25cbpanorama image is taken. PICCOLO solely relies on the point cloud data and the input query image. It does not require a separate training process or explicit 2D-3D correspondences, and therefore could be used in an off-the-shelf manner. We first introduce the formulation of sampling loss, which is the objective function that PICCOLO aims to minimize. Then we will describe our light-weight initialization scheme. Sampling Loss Given a point cloud P = {X,C} and a single query image I \u2208RH\u00d7W \u00d73, where X,C \u2208RN\u00d73 are the point cloud coordinates and color values, the objective is to find the optimal rotation R\u2217\u2208SO(3) and translation t\u2217\u2208 R3 at which the 360\u25cbpanorama image I is taken. Denote \u03a0(\u22c5) \u2236R3 \u2192R2 as the projection function that maps a point x = (x1,x2,x3) in 3D to a point \u02dc x \u2208[0,H) \u00d7 [0,W) in the 360\u25cbpanorama image\u2019s coordinate frame. This could be explicitly written as follows, \u03a0(x) = (H \u03c0 atan( x3 \u221a x2 1 + x2 2 ), W 2\u03c0 atan(x2 x1 )). 
(1) Furthermore, let \u0393(\u22c5;I) indicate the sampling function that maps 2D coordinates \u02dc x \u2208[0,W) \u00d7 [0,H) to pixel values c \u2208R3 sampled from the query image I under a designated sampling kernel. Suppose \u0393(\u22c5;I),\u03a0(\u22c5) could be \u2018vectorized\u2019, i.e., if the input \u02dc X consists of N points in R2, \u0393( \u02dc X;I) \u2208RN\u00d73 are the sampled image values at 2D coordinates \u02dc X, and vice versa for \u03a0(\u22c5). Under this setup, \u03a0(X) \u2208RN\u00d72 could be regarded as tentative sampling locations, and \u0393(\u03a0(X);I) \u2208RN\u00d73 as the sampled image values. If the point cloud P is perfectly aligned with the omnidirectional camera\u2019s coordinate frame, one could expect the sampled image values \u0393(\u03a0(X);I) to be very close to the point cloud color values C. Sampling loss is derived from this observation, where the objective is to minimize the discrepancy between \u0393(\u03a0(X);I) and C. Given a candidate camera pose R,t, this could be formulated as follows, Lsampling(R,t) = \u2225\u0393(\u03a0(R(X \u2212t));I) \u2212C\u22252. (2) \fFigure 3: Visualization of loss surfaces obtained from scenes in the Stanford2D-3D-S dataset [4]. The loss surfaces show the minimum loss values of the given (x,y) position in the 3D space. The red dots indicate the ground truth camera positions, and the blue dots link the values on the loss surface and the corresponding camera positions within the input point cloud space. Loss surfaces of small scenes are typically smooth with clear global minimum (left), but those of large scenes contain numerous local minima (right). Note that R(X \u2212t) is the transformed point cloud under R,t. Gradients with respect to R,t could be obtained by differentiating through the sampling function \u0393(\u22c5;I) using the technique from Jaderberg et al. [21]. Once the gradients are known, any off-the-shelf gradient based optimization algorithm such as stochastic gradient descent [25] or Adam [26] could be applied to minimize Equation 2, as shown in Figure 1. Unlike photometric loss which stems from an imagecentric viewpoint, sampling loss aims at assigning an adequate sampled color value to each point in the point cloud, thus providing a point cloud-centric viewpoint. Specifically, photometric loss also compares the colors of the point cloud with the query image, but in the image space, namely, Lphotometric(R,t) = \u2225\u03a8({R(X \u2212t),C}) \u2212I\u22252, (3) where \u03a8(\u22c5) \u2236{RN\u00d73,RN\u00d73} \u2192RH\u00d7W \u00d73 is a rendering function that receives the point cloud to produce a synthesized image. The rendering function is necessary to apply photometric loss in our setup, as only a single image is given, unlike existing applications [12, 32] where multiple images are provided. As photometric loss is evaluated in the image space, it suffers from the visual distortion of omnidirectional cameras. To illustrate, in Figure 2, one can observe that points near the pole (ceilings, floors) are \u2018stretched\u2019, while they correspond to small areas in reality. Since photometric loss makes direct image comparisons, it is severely affected by such artifacts and requires additional processing to account for the distortion [48, 16]. Sampling loss has numerous advantages over photometric loss. 
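To make the sampling-loss formulation above concrete, here is a minimal PyTorch sketch: the point cloud is transformed by (R, t), projected with the equirectangular model, and the panorama colors sampled at the projected locations are compared against the per-point colors, with F.grid_sample standing in for the differentiable sampling of [21]. This is a sketch rather than the released implementation; the sign and offset conventions of the panorama are assumptions.

```python
# Hedged sketch of the sampling loss described above (not the released code).
import torch
import torch.nn.functional as F

def sampling_loss(R, t, xyz, rgb, image):
    """R: (3,3), t: (3,), xyz: (N,3), rgb: (N,3), image: (3,H,W), colors in [0,1]."""
    p = (xyz - t) @ R.T                                         # R(X - t)
    azimuth = torch.atan2(p[:, 1], p[:, 0])                     # (-pi, pi]
    elevation = torch.atan2(p[:, 2], torch.norm(p[:, :2], dim=1))
    # Normalize to [-1, 1] for grid_sample: x follows azimuth, y follows elevation.
    grid = torch.stack([azimuth / torch.pi,
                        -2.0 * elevation / torch.pi], dim=-1).view(1, 1, -1, 2)
    sampled = F.grid_sample(image.unsqueeze(0), grid, mode="bilinear",
                            align_corners=False)                # (1, 3, 1, N)
    sampled = sampled.squeeze(0).squeeze(1).T                   # (N, 3)
    return ((sampled - rgb) ** 2).sum(dim=-1).mean()

# Refining the translation by plain gradient descent (an explicit rotation
# parameterization, e.g. axis-angle, would be handled analogously):
# t = torch.zeros(3, requires_grad=True)
# optimizer = torch.optim.Adam([t], lr=0.1)
# sampling_loss(torch.eye(3), t, xyz, rgb, image).backward(); optimizer.step()
```

Because every point contributes one sampled color, the loss treats points near the poles and the equator alike, which is the point-cloud-centric property emphasized above.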
First, as seen from Equation 2, it fairly incorporates all points in the point cloud agnostic of whether it is closer to the pole, thus making it more suitable for 6DoF pose estimation of omnidirectional cameras. Second, sampling loss is cheap to compute, while still allowing for easy gradient computation [21]. Each sampling operation consists of simple image indexing, and we ignore the nondifferentiable, costly components of projection, such as occlusion handling. The core part of PICCOLO consists of simple gradient descent on the sampling loss, which is very fast: 3 \u00d7 108 points can be processed per second. Nonetheless, sampling loss can effectively handle the holistic view of the 360\u25cbimage and is robust to visual distortion or minor scene changes. Initialization Algorithm While sampling loss has various amenable properties, it is non-convex as visualized in Figure 3. Depending on the initial position, optimization using gradient descent can stop at a local minimum, which can be a serious issue for large spaces. To this end, we introduce a lightweight initialization algorithm, which outputs feasible starting points that are likely to yield global convergence. We uniformly sample the space of possible camera positions, and filter them through a two-step selection process as presented in Algorithm 1. During the first step, we compute sampling loss values across Nt \u00d7 Nr candidate camera poses and obtain the top K1 smallest starting points (line 2). Specifically, Nt translations are chosen from the uniform grid on the point cloud bounding box, for which Nr rotations, uniformly sampled from SO(3), are selected. Since sampling loss is very efficient, we can quickly compute the loss for all of the starting points. Among the K1 starting points, the second filtering process further selects K2 (K2 \u2264K1) of them using color histogram intersections (line 3). Top K2 candidate poses with \fMethod Information Learning t-error (m) R-error (\u25cb) PoseNet [24] RGB \u25ef 2.41 28 SphereNet [8] RGB \u25ef 2.29 26.7 Zhang et al. [48] RGB \u25ef 1.64 9.15 PICCOLO RGB \u2a09 0.03 0.66 GOSMA [7] Semantic \u2a09 1.27 51.44 PICCOLO Semantic \u2a09 0.01 0.28 Table 1: Quantitative results of omnidirectional localization evaluated on all areas of the Stanford2D-3D-S dataset [4]. the highest color distribution overlap with the query image are chosen. Finally, the resulting K2 starting points are individually optimized for a fixed number of iterations with respect to the sampling loss in SE(3) (line 7). At termination, the optimized camera pose with the smallest sampling loss value is chosen (line 9). Algorithm 1 Overview of PICCOLO Inputs: Point cloud P = {X,C}, query image I Output: Camera pose \u02c6 R, \u02c6 t. 1: T \u2190[(Ri,ti)\u2223i \u2208[1 .. NtNr]] \u25b7Starting points 2: T \u2190getTopK(lossValue(T,P,I),K1) 3: T \u2190getTopK(histIntersect(T,P,I),K2) 4: V \u2190[ ] 5: for all (Ri,ti) \u2208T do 6: for iter \u2208[1 .. Niter] do 7: (Ri,ti) \u2190(Ri,ti) \u2212\u03b1\u2207Lsampling(Ri,ti) 8: V.append(Lsampling(Ri,ti)) 9: ( \u02c6 R, \u02c6 t) \u2190arg minR,t V 4. Experimental Results 4.1. Performance Analysis Implementation Details PICCOLO is mainly implemented using PyTorch [33], and is accelerated with a single RTX 2080 GPU. Once the starting point is selected as described in Section 3, we find the camera pose using Adam [26] with step size \u03b1 = 0.1 in all experiments. 
PICCOLO is straightforward to implement, with the core part of the algorithm taking less than 10 lines of PyTorch code. For results in which accuracy is reported, a prediction is considered correct if the translation error is below 0.1 m and the rotation error is below 5.0\u25cb. All translation and rotation errors reported are median values, following the convention of [7, 6]. The full hyperparameter setup and additional qualitative results are available in the supplementary material. Stanford2D-3D-S We assess the localization performance of PICCOLO against existing methods using the PICCOLO GOSMA GOSMA-\u039b GOPAC t-error (m) 0.01 0.00 0.07 0.08 0.05 0.15 0.14 0.09 0.23 0.15 0.10 0.27 R-error (\u25cb) 0.21 0.11 0.56 1.13 0.91 2.18 2.38 1.25 4.61 3.78 2.47 5.10 Table 2: Localization results of PICCOLO, GOSMA, GOSMA without class labels (GOSMA-\u039b), and GOPAC for a subset of Area 3 from Stanford2D-3D-S [4]. Q2 Q1 Q3 are quartile values of each metric. Results other than PICCOLO are excerpted from [7]. Stanford2D-3D-S dataset [4], as shown in Table 1 and 2. It is an indoor dataset composed of 1413 panoramic images subdivided into six different areas, and many scenes exhibit repetitive structure and lack visual features, as in Figure 5. All areas are used for comparison except for GOPAC [6], where we use a subset from Area 3 consisting of small rooms, as the algorithm\u2019s long runtime hinders large-scale evaluation. PICCOLO outperforms all existing baselines by a large margin, showing an order-of-magnitude performance gain from its competitors. GOSMA [7] and GOPAC [6] are optimization-based methods that do not utilize color measurements. Instead, they require semantic labels for decent performance. For fair comparisons with these algorithms, we make PICCOLO observe color-coded semantic labels as input, as shown in Figure 5, and report the numbers in Table 1 (PICCOLO Semantic) and 2. Semantic labels lack visual features, thus finding camera pose in this setup is closer to solving a blind-PnP problem [11]. However, PICCOLO operates seamlessly and outperforms GOSMA and GOPAC without the aid of rich visual information such as RGB inputs, consistently succeeding around 1 cm error. Although GOSMA and GOPAC are powerful algorithms that guarantee global optimality, they often fail in large scenes such as hallways, where the qualitative results are shown in the supplementary material. PICCOLO also shows superior performance against deep learning methods [48, 8, 24]. Nevertheless, it should be noted that there is a subtle distinction in the search spaces of these methods. The translation domain for deep learningbased methods is the entire Stanford2D-3D-S dataset, while it is confined to a particular area for PICCOLO, similar to GOSMA [7]. However, deep learning-based methods are given very strong prior information to cope with the large search space; they are trained on synthetic pose-annotated images, which are generated within 30 cm proximity of the test images. This means the training images are very close to the ground truth. Nevertheless, it must be acknowledged that deep learning methods are capable of regressing the pose at wider scales, about 5 times the maximum search scale (1000 m2) attainable with PICCOLO (Table 3). \fArea (m2) t-error (m) R-error (\u25cb) Acc. 
Coast 458.0 0.79 2.18 0.40 Forest 361.2 0.02 0.92 0.67 ParkingIn 92.9 2.77 96.50 0.13 ParkingOut 1381.2 1.74 9.77 0.07 Residential 412.8 0.83 2.53 0.46 Urban 1156.4 0.03 0.85 0.85 All 646.3 0.80 2.10 0.45 Table 3: Localization error and accuracy of PICCOLO on Multi-Modal Panoramic 3D Outdoor (MPO) dataset [23]. Scenario Scene Change t-error (m) R-error (\u25cb) Acc. Handheld \u2a09 0.02 0.25 0.71 Robot \u2a09 0.02 0.18 0.77 Handheld \u25ef 0.77 15.39 0.43 Robot \u25ef 0.05 0.59 0.55 Table 4: Localization error and accuracy of PICCOLO on the OmniScenes dataset. MPO Multi-Modal Panoramic 3D Outdoor (MPO) [23] dataset is an outdoor dataset which spans a large area (1000m2) with many scenes containing repetition or lacking visual features. As shown in Table 3, PICCOLO performs competently with the same hyperparameter setting as Stanford2D-3D-S [4], despite the large area of the dataset. This validates our claim that PICCOLO could readily function as an off-the-shelf omnidirectional localization algorithm for both indoor/outdoor environments. Practicality Assessment with OmniScenes Omnidirectional localization is expected to provide stable visual localization under scene changes or dynamic motion, and therefore promises practical applications in VR/AR or robotics. We introduce a new dataset called OmniScenes collected to evaluate the performance on scenes with the aforementioned challenges. We collect dense 3D scans of eight areas including wedding halls and hotel rooms using the Matterport3D Scanner [1]. Corresponding 360\u25cbpanoramic images are acquired with the Ricoh Theta 360\u25cbcamera [2] under two scenarios, handheld and mobile robot mounted. Handheld scenarios are typically more challenging as unconstrained motion could take place and the capturer partially occludes scene details. The images are taken at different times of day and include significant changes in furniture configurations and motion blurs. Further details about the dataset are deferred to the supplementary material. The evaluation results on the OmniScenes dataset are shown in Table 4. Unlike previous experiments, we assume that the gravity direction is known, as this is often available Loss Function Information t-error (m) R-error (\u25cb) Sampling Original 0.03 0.66 Photometric Original 1.41 42.29 Sampling Gravity Direction 0.01 0.34 Photometric Gravity Direction 0.93 33.41 Sampling Flipped 0.03 0.69 Photometric Flipped 1.42 42.79 Sampling Rand. Rot. 0.23 2.21 Photometric Rand. Rot. 1.48 43.33 Table 5: Ablation study on sampling loss and gravity direction. \u2018Flipped\u2019 denotes flipped query images and \u2018Rand. Rot.\u2019 denotes randomly rotated point cloud inputs. in practice. PICCOLO exhibits competent error rates when there are no scene changes, agnostic of whether the input 360\u25cbpanorama is recorded in a handheld or robot-mounted manner. As shown in Figure 5, PICCOLO can estimate camera pose even under severe handheld motion, thanks to the full incorporation of points from sampling loss. Even though there is no functionality in PICCOLO that accounts for scene changes, there is a considerable amount of success cases given the accuracy in Table 4 and qualitative results shown in Figure 5. As long as the global context provides enough amount of evidence from color samples, omnidirectional localization can succeed. Nonetheless, there is a clear performance gap, and enhancing the robustness of PICCOLO against various scene changes is left as future work. 4.2. 
Ablation Study In this section we ablate various components of PICCOLO. Experiments are conducted using all areas of the Stanford2D-3D-S dataset [4], unless specified otherwise. Sampling Loss We compare PICCOLO with a variant that uses photometric loss from Equation 3 in place of the sampling loss, to ablate the effect of sampling loss in our algorithm. The rendering function is implemented as a simple projection of the 3D point cloud, similar to projections shown in Figure 5. We use the warping function to obtain gradients with respect to R,t, as in previous works [12, 32, 14]. All other hyperparameter setups and the initialization algorithm are the same as PICCOLO. The design choice of using sampling loss shows a large performance gain over photometric loss, as shown in Table 5. As sampling loss fairly incorporates all points in point cloud, it is free from visual distortion and thus more suitable than photometric loss for 6-DoF omnidirectional localization. \f101 102 103 Nr 0.03 0.04 0.05 0.06 0.07 t-error (m) t-error R-error 0.4 0.5 0.6 0.7 R-error ( ) 30 40 50 60 70 80 90 100 Nt 0.036 0.038 0.040 0.042 0.044 t-error (m) t-error R-error 0.5 0.6 0.7 0.8 0.9 R-error ( ) (a) Effects of Nt, Nr on localization error. 0.00 0.02 0.04 0.06 0.08 0.10 position threshold (m) 0.0 0.2 0.4 0.6 recall loss+hist loss 0.0 0.5 1.0 1.5 2.0 rotation threshold ( ) 0.0 0.2 0.4 0.6 recall loss+hist loss loss+hist loss 0.0 0.2 0.4 0.6 0.8 1.0 relative runtime (b) Comparison of two initialization schemes. Figure 4: Ablation study on the initialization pipeline. Gravity Direction If the gravity direction is known, the number of initial positions is significantly reduced and PICCOLO can perform highly accurate localization as shown in Table 5. Knowing the gravity direction is a reasonable assumption as many panoramic images or 3D scan datasets [4, 23] contain the information. In practice, one can easily infer the gravity direction of omnidirectional cameras using integrated gyroscopes, and that of 3D maps with RANSAC [15]-based plane fitting. Nonetheless, PICCOLO stably performs without knowing the gravity direction as shown in Table 1 and 3. In case PICCOLO might be biased towards the gravityaligned conventional data, we evaluate PICCOLO in flipped input images and arbitrarily rotated point clouds. Under the same hyperparameter setup as Section 4.1, PICCOLO demonstrates consistent performance, as shown in Table 5. Such results imply that PICCOLO is amenable for novel scenes, and could be directly applied to a wide variety of non-standard inputs without training. Initialization Pipeline We finally ablate various components of the initialization pipeline presented in Section 3. The first main parameters to be examined are the number of initial points Nt,Nr sampled from the range of possible transformations. We evaluate the effect of different values of Nt,Nr on auditoriums from Area 2 of the Stanford2D3D-S dataset [4]. As shown in Figure 4a, larger Nt,Nr tend to improve the error values, but result in computational overhead. An adequate set of Nt,Nr should be chosen considering the trade-off. We use Nt = 50,Nr = 32 for all our experiments. This means we have about 1600 initial points to test, but the initialization finishes within a few seconds, thanks to the efficiency of sampling loss. For runtime-critical applications, one may cache the projected point cloud coordinates at each candidate starting pose once for each scene and use it afterward. This would significantly reduce the time spent on initialization. 
We further examine the efficacy of our two-stage initialization scheme. Recall the two-stage initialization in Section 3 first selects K1 candidate locations using loss values followed by filtration to K2 candidates using color histograms. We compare the performance of PICCOLO selecting K2 initial poses from Nt \u00d7 Nr candidates using (i) loss only, and (ii) the two-stage method presented in Section 3. All rooms in Area 3 of the Stanford2D-3D-S dataset are selected for evaluation with Nt = 50,Nr = 32,K1 = 50,K2 = 6. We display the results in Figure 4b. Our twostage initialization enables a significant performance boost with only a small increase in runtime. 5." + }, + { + "url": "http://arxiv.org/abs/1907.10830v4", + "title": "U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation", + "abstract": "We propose a novel method for unsupervised image-to-image translation, which\nincorporates a new attention module and a new learnable normalization function\nin an end-to-end manner. The attention module guides our model to focus on more\nimportant regions distinguishing between source and target domains based on the\nattention map obtained by the auxiliary classifier. Unlike previous\nattention-based method which cannot handle the geometric changes between\ndomains, our model can translate both images requiring holistic changes and\nimages requiring large shape changes. Moreover, our new AdaLIN (Adaptive\nLayer-Instance Normalization) function helps our attention-guided model to\nflexibly control the amount of change in shape and texture by learned\nparameters depending on datasets. Experimental results show the superiority of\nthe proposed method compared to the existing state-of-the-art models with a\nfixed network architecture and hyper-parameters. Our code and datasets are\navailable at https://github.com/taki0112/UGATIT or\nhttps://github.com/znxlwm/UGATIT-pytorch.", + "authors": "Junho Kim, Minjae Kim, Hyeonwoo Kang, Kwanghee Lee", + "published": "2019-07-25", + "updated": "2020-04-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "eess.IV" + ], + "main_content": "INTRODUCTION Image-to-image translation aims to learn a function that maps images within two different domains. This topic has gained a lot of attention from researchers in the \ufb01elds of machine learning and computer vision because of its wide range of applications including image inpainting (Pathak et al. (2014); Iizuka et al. (2017)), super resolution (Dong et al. (2016); Kim et al. (2016)), colorization (Zhang et al. (2016; 2017)) and style transfer (Gatys et al. (2016); Huang & Belongie (2017)). When paired samples are given, the mapping model can be trained in a supervised manner using a conditional generative model (Isola et al. (2017); Li et al. (2017a); Wang et al. (2018)) or a simple regression model (Larsson et al. (2016); Long et al. (2015); Zhang et al. (2016)). In unsupervised settings where no paired data is available, multiple works (Anoosheh et al. (2018); Choi et al. (2018); Huang et al. (2018); Kim et al. (2017); Liu et al. (2017); Royer et al. (2017); Taigman et al. (2017); Yi et al. (2017); Zhu et al. (2017)) successfully have translated images using shared latent space (Liu et al. (2017)) and cycle consistency assumptions (Kim et al. (2017); Zhu et al. (2017)). These works have been further developed to handle the multi-modality of the task (Huang et al. (2018)). 
Despite these advances, previous methods show performance differences depending on the amount of change in both shape and texture between domains. For example, they are successful for the style transfer tasks mapping local texture (e.g., photo2vangogh and photo2portrait) but are typically unsuccessful for image translation tasks with larger shape change (e.g., sel\ufb01e2anime and cat2dog) in wild images. Therefore, the pre-processing steps such as image cropping and alignment are often required to avoid these problems by limiting the complexity of the data distributions (Huang et al. (2018); Liu et al. (2017)). In addition, existing methods such as DRIT (Lee et al. (2018)) cannot \u2217Most work was done in NCSOFT. \u2020corresponding author 1 arXiv:1907.10830v4 [cs.CV] 8 Apr 2020 \fPublished as a conference paper at ICLR 2020 Figure 1: The model architecture of U-GAT-IT. The detailed notations are described in Section Model acquire the desired results for both image translation preserving the shape (e.g., horse2zebra) and image translation changing the shape (e.g., cat2dog) with the \ufb01xed network architecture and hyperparameters. The network structure or hyper-parameter setting needs to be adjusted for the speci\ufb01c dataset. In this work, we propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. Our model guides the translation to focus on more important regions and ignore minor regions by distinguishing between source and target domains based on the attention map obtained by the auxiliary classi\ufb01er. These attention maps are embedded into the generator and discriminator to focus on semantically important areas, thus facilitating the shape transformation. While the attention map in the generator induces the focus on areas that speci\ufb01cally distinguish between the two domains, the attention map in the discriminator helps \ufb01ne-tuning by focusing on the difference between real image and fake image in target domain. In addition to the attentional mechanism, we have found that the choice of the normalization function has a signi\ufb01cant impact on the quality of the transformed results for various datasets with different amounts of change in shape and texture. Inspired by Batch-Instance Normalization(BIN) (Nam & Kim (2018)), we propose Adaptive LayerInstance Normalization (AdaLIN), whose parameters are learned from datasets during training time by adaptively selecting a proper ratio between Instance normalization (IN) and Layer Normalization (LN). The AdaLIN function helps our attention-guided model to \ufb02exibly control the amount of change in shape and texture. As a result, our model, without modifying the model architecture or the hyper-parameters, can perform image translation tasks not only requiring holistic changes but also requiring large shape changes. In the experiments, we show the superiority of the proposed method compared to the existing state-of-the-art models on not only style transfer but also object trans\ufb01guration. The main contribution of the proposed work can be summarized as follows: 2 \fPublished as a conference paper at ICLR 2020 \u2022 We propose a novel method for unsupervised image-to-image translation with a new attention module and a new normalization function, AdaLIN. 
\u2022 Our attention module helps the model to know where to transform intensively by distinguishing between source and target domains based on the attention map obtained by the auxiliary classi\ufb01er. \u2022 AdaLIN function helps our attention-guided model to \ufb02exibly control the amount of change in shape and texture without modifying the model architecture or the hyper-parameters. 2 UNSUPERVISED GENERATIVE ATTENTIONAL NETWORKS WITH ADAPTIVE LAYER-INSTANCE NORMALIZATION Our goal is to train a function Gs\u2192t that maps images from a source domain Xs to a target domain Xt using only unpaired samples drawn from each domain. Our framework consists of two generators Gs\u2192t and Gt\u2192s and two discriminators Ds and Dt. We integrate the attention module into both generator and discriminator. The attention module in the discriminator guides the generator to focus on regions that are critical to generate a realistic image. The attention module in the generator gives attention to the region distinguished from the other domain. Here, we only explain Gs\u2192t and Dt (See Fig 1) as the vice versa should be straight-forward. 2.1 MODEL 2.1.1 GENERATOR Let x \u2208{Xs, Xt} represent a sample from the source and the target domain. Our translation model Gs\u2192t consists of an encoder Es, a decoder Gt, and an auxiliary classi\ufb01er \u03b7s, where \u03b7s(x) represents the probability that x comes from Xs. Let Ek s (x) be the k-th activation map of the encoder and Ekij s (x) be the value at (i, j). Inspired by CAM (Zhou et al. (2016)), the auxiliary classi\ufb01er is trained to learn the weight of the k-th feature map for the source domain, wk s, by using the global average pooling and global max pooling, i.e., \u03b7s(x) = \u03c3(\u03a3kwk s\u03a3ijEkij s (x)). By exploiting wk s, we can calculate a set of domain speci\ufb01c attention feature map as(x) = ws \u2217Es(x) = {wk s \u2217 Ek s (x)|1\u2264k\u2264n}, where n is the number of encoded feature maps. Then, our translation model Gs\u2192t becomes equal to Gt(as(x)). Inspired by recent works that use af\ufb01ne transformation parameters in normalization layers and combine normalization functions (Huang & Belongie (2017); Nam & Kim (2018)), we equip the residual blocks with AdaLIN whose parameters, \u03b3 and \u03b2 are dynamically computed by a fully connected layer from the attention map. AdaLIN(a,\u03b3, \u03b2) = \u03b3 \u00b7 (\u03c1 \u00b7 \u02c6 aI + (1 \u2212\u03c1) \u00b7 \u02c6 aL) + \u03b2, \u02c6 aI = a \u2212\u00b5I p \u03c32 I + \u03f5 , \u02c6 aL = a \u2212\u00b5L p \u03c32 L + \u03f5 , \u03c1 \u2190clip[0,1](\u03c1 \u2212\u03c4\u2206\u03c1) (1) where \u00b5I, \u00b5L and \u03c3I, \u03c3L are channel-wise, layer-wise mean and standard deviation respectively, \u03b3 and \u03b2 are parameters generated by the fully connected layer, \u03c4 is the learning rate and \u2206\u03c1 indicates the parameter update vector (e.g., the gradient) determined by the optimizer. The values of \u03c1 are constrained to the range of [0, 1] simply by imposing bounds at the parameter update step. Generator adjusts the value so that the value of \u03c1 is close to 1 in the task where the instance normalization is important and the value of \u03c1 is close to 0 in the task where the LN is important. The value of \u03c1 is initialized to 1 in the residual blocks of the decoder and 0 in the up-sampling blocks of the decoder. 
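Equation (1) above fully specifies AdaLIN, so a short code rendering may help readers who prefer to see it concretely. The following is a minimal PyTorch sketch, not the authors' released implementation: the module name, the per-channel shape of `rho`, and the in-forward clamping are illustrative choices, and `gamma`/`beta` are assumed to be produced elsewhere by the fully connected layer mentioned in the text.

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Adaptive Layer-Instance Normalization, following Eq. (1).

    gamma and beta are produced elsewhere (the text uses a fully connected
    layer driven by the attention map); rho gates between instance- and
    layer-normalized statistics and is kept inside [0, 1].
    """
    def __init__(self, num_features, eps=1e-5, rho_init=1.0):
        super().__init__()
        self.eps = eps
        # Per-channel gate; initialized to 1 for the residual blocks and
        # 0 for the up-sampling blocks, as stated above.
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), rho_init))

    def forward(self, x, gamma, beta):
        # Instance statistics: per sample and per channel (over H, W).
        mu_i = x.mean(dim=(2, 3), keepdim=True)
        var_i = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - mu_i) / torch.sqrt(var_i + self.eps)
        # Layer statistics: per sample (over C, H, W).
        mu_l = x.mean(dim=(1, 2, 3), keepdim=True)
        var_l = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - mu_l) / torch.sqrt(var_l + self.eps)
        # The paper clips rho at the parameter-update step; clamping here is
        # a simple way to keep it in [0, 1] for a sketch.
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * x_in + (1.0 - rho) * x_ln
        # gamma and beta are expected with shape (N, C).
        return out * gamma.unsqueeze(2).unsqueeze(3) + beta.unsqueeze(2).unsqueeze(3)
```

In the decoder, a fully connected head would map the attention features to per-image `gamma` and `beta` before each call, matching the description of how the affine parameters are computed.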
An optimal method to transfer the content features onto the style features is to apply Whitening and Coloring Transform (WCT) (Li et al. (2017b)), but the computational cost is high due to the calculation of the covariance matrix and matrix inverse. Although, the AdaIN (Huang & Belongie (2017)) is much faster than the WCT, it is sub-optimal to WCT as it assumes uncorrelation between feature channels. Thus the transferred features contain slightly more patterns of the content. On the other hand, the LN (Ba et al. (2016)) does not assume uncorrelation between channels, but 3 \fPublished as a conference paper at ICLR 2020 sometimes it does not keep the content structure of the original domain well because it considers global statistics only for the feature maps. To overcome this, our proposed normalization technique AdaLIN combines the advantages of AdaIN and LN by selectively keeping or changing the content information, which helps to solve a wide range of image-to-image translation problems. 2.1.2 DISCRIMINATOR Let x \u2208{Xt, Gs\u2192t(Xs)} represent a sample from the target domain and the translated source domain. Similar to other translation models, the discriminator Dt which is a multi-scale model consists of an encoder EDt, a classi\ufb01er CDt, and an auxiliary classi\ufb01er \u03b7Dt. Unlike the other translation models, both \u03b7Dt(x) and Dt(x) are trained to discriminate whether x comes from Xt or Gs\u2192t(Xs). Given a sample x, Dt(x) exploits the attention feature maps aDt(x) = wDt \u2217EDt(x) using wDt on the encoded feature maps EDt(x) that is trained by \u03b7Dt(x). Then, our discriminator Dt(x) becomes equal to CDt(aDt(x)). 2.2 LOSS FUNCTION The full objective of our model comprises four loss functions. Here, instead of using the vanilla GAN objective, we used the Least Squares GAN (Mao et al. (2017)) objective for stable training. Adversarial loss An adversarial loss is employed to match the distribution of the translated images to the target image distribution: Ls\u2192t lsgan =(Ex\u223cXt[(Dt(x))2] + Ex\u223cXs[(1 \u2212Dt(Gs\u2192t(x)))2]). (2) Cycle loss To alleviate the mode collapse problem, we apply a cycle consistency constraint to the generator. Given an image x \u2208Xs, after the sequential translations of x from Xs to Xt and from Xt to Xs, the image should be successfully translated back to the original domain: Ls\u2192t cycle = Ex\u223cXs[|x \u2212Gt\u2192s(Gs\u2192t(x)))|1]. (3) Identity loss To ensure that the color distributions of input image and output image are similar, we apply an identity consistency constraint to the generator. Given an image x \u2208Xt, after the translation of x using Gs\u2192t, the image should not change. Ls\u2192t identity = Ex\u223cXt[|x \u2212Gs\u2192t(x)|1]. (4) CAM loss By exploiting the information from the auxiliary classi\ufb01ers \u03b7s and \u03b7Dt, given an image x \u2208{Xs, Xt}. Gs\u2192t and Dt get to know where they need to improve or what makes the most difference between two domains in the current state: Ls\u2192t cam = \u2212(Ex\u223cXs[log(\u03b7s(x))] + Ex\u223cXt[log(1 \u2212\u03b7s(x))]), (5) LDt cam = Ex\u223cXt[(\u03b7Dt(x))2] + Ex\u223cXs[(1 \u2212\u03b7Dt(Gs\u2192t(x))2]. 
(6) Full objective Finally, we jointly train the encoders, decoders, discriminators, and auxiliary classi\ufb01ers to optimize the \ufb01nal objective: min Gs\u2192t,Gt\u2192s,\u03b7s,\u03b7t max Ds,Dt,\u03b7Ds,\u03b7Dt \u03bb1Llsgan + \u03bb2Lcycle + \u03bb3Lidentity + \u03bb4Lcam, (7) where \u03bb1 = 1, \u03bb2 = 10, \u03bb3 = 10, \u03bb4 = 1000. Here, Llsgan = Ls\u2192t lsgan + Lt\u2192s lsgan and the other losses are de\ufb01ned in the similar way (Lcycle, Lidentity, and Lcam) 4 \fPublished as a conference paper at ICLR 2020 (a) (b) (c) (d) (e) (f) Figure 2: Visualization of the attention maps and their effects shown in the ablation experiments: (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminator, respectively. (e) Our results with CAM, (f) Results without CAM. 3 EXPERIMENTS 3.1 BASELINE MODEL We have compared our method with various models including CycleGAN (Zhu et al. (2017)), UNIT (Liu et al. (2017)), MUNIT (Huang et al. (2018)), DRIT (Lee et al. (2018)), AGGAN (Mejjati et al. (2018)), and CartoonGAN (Chen et al. (2018)). All the baseline methods are implemented using the author\u2019s code. 3.2 DATASET We have evaluated the performance of each method with \ufb01ve unpaired image datasets including four representative image translation datasets and a newly created dataset consisting of real photos and animation artworks, i.e., sel\ufb01e2anime. All images are resized to 256 x 256 for training. See Appendix C for each dataset for our experiments. 3.3 EXPERIMENT RESULTS We \ufb01rst analyze the effects of attention module and AdaLIN in the proposed model. We then compare the performance of our model against the other unsupervised image translation models listed in the previous section. To evaluate, the visual quality of translated images, we have conducted a user study. Users are asked to select the best image among the images generated from \ufb01ve different methods. More examples of the results comparing our model with other models are included in the supplementary materials. 3.3.1 CAM ANALYSIS First, we conduct an ablation study to con\ufb01rm the bene\ufb01t from the attention modules used in both generator and discriminator. As shown in Fig 2 (b), the attention feature map helps the generator to focus on the source image regions that are more discriminative from the target domain, such as eyes and mouth. Meanwhile, we can see the regions where the discriminator concentrates its attention to determine whether the target image is real or fake by visualizing local and global attention maps of 5 \fPublished as a conference paper at ICLR 2020 (a) (b) (c) (d) (e) (f) Figure 3: Comparison of the results using each normalization function: (a) Source images, (b) Our results, (c) Results only using IN in decoder with CAM, (d) Results only using LN in decoder with CAM, (e) Results only using AdaIN in decoder with CAM, (f) Results only using GN in decoder with CAM. the discriminator as shown in Fig 2 (c) and (d), respectively. The generator can \ufb01ne-tune the area where the discriminator focuses on with those attention maps. Note that we incorporate both global and local attention maps from two discriminators having different size of receptive \ufb01eld. Those maps can help the generator to capture the global structure (e.g., face area and near of eyes) as well as the local regions. With this information some regions are translated with more care. 
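The attention maps discussed here originate from the auxiliary classifier of Sec. 2.1.1: per-channel weights learned through global average and max pooling are multiplied back onto the encoder feature maps, giving a(x) = w * E(x). Below is a minimal sketch of that idea; the two-branch (average plus max) design, the 1x1 fusion convolution, and all layer names are assumptions for illustration rather than a claim about the released code.

```python
import torch
import torch.nn as nn

class CAMAttention(nn.Module):
    """Sketch of the CAM-style attention head described in Sec. 2.1.1."""
    def __init__(self, channels):
        super().__init__()
        self.gap_fc = nn.Linear(channels, 1, bias=False)  # weights for avg-pooled features
        self.gmp_fc = nn.Linear(channels, 1, bias=False)  # weights for max-pooled features
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, feat):
        n, c, h, w = feat.shape
        # Auxiliary-classifier logits eta(x) from global average / max pooling.
        gap_logit = self.gap_fc(feat.mean(dim=(2, 3)))
        gmp_logit = self.gmp_fc(feat.amax(dim=(2, 3)))
        # Attention feature maps a(x) = w_k * E_k(x), one branch per pooling.
        gap_att = feat * self.gap_fc.weight.view(1, c, 1, 1)
        gmp_att = feat * self.gmp_fc.weight.view(1, c, 1, 1)
        cam_logit = torch.cat([gap_logit, gmp_logit], dim=1)  # fed to the CAM loss, Eqs. (5)-(6)
        att = self.relu(self.fuse(torch.cat([gap_att, gmp_att], dim=1)))
        heatmap = att.sum(dim=1, keepdim=True)  # the maps visualized in Fig. 2
        return att, cam_logit, heatmap
```

The `heatmap` output corresponds to the per-pixel attention visualizations shown in Fig. 2 (b)-(d), while `att` is the feature actually passed on to the decoder or discriminator classifier.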
The results with the attention module shown in Fig 2 (e) verify the advantageous effect of exploiting attention feature map in an image translation task. On the other hand, one can see that the eyes are misaligned, or the translation is not done at all in the results without using attention module as shown in Fig 2 (f). 3.3.2 ADALIN ANALYSIS As described in Appendix B, we have applied the AdaLIN only to the decoder of the generator. The role of the residual blocks in the decoder is to embed features, and the role of the up-sampling convolution blocks in the decoder is to generate target domain images from the embedded features. If the learned value of the gate parameter \u03c1 is closer to 1, it means that the corresponding layers rely more on IN than LN. Likewise, if the learned value of \u03c1 is closer to 0, it means that the corresponding layers rely more on LN than IN. As shown in Fig 3 (c), in the case of using only IN in the decoder, the features of the source domain (e.g., earrings and shades around cheekbones) are well preserved due to channel-wise normalized feature statistics used in the residual blocks. However, the amount of translation to target domain style is somewhat insuf\ufb01cient since the global style cannot be captured by IN of the up-sampling convolution blocks. On the other hand, As shown in Fig 3 (d), if we use only LN in the decoder, target domain style can be transferred suf\ufb01ciently by virtue of layerwise normalized feature statistics used in the up-sampling convolution. But the features of the source domain image are less preserved by using LN in the residual blocks. This analysis of two extreme cases tells us that it is bene\ufb01cial to rely more on IN than LN in the feature representation layers to preserve semantic characteristics of source domain, and the opposite is true for the upsampling layers that actually generate images from the feature embedding. Therefore, the proposed AdaLIN which adjusts the ratio of IN and LN in the decoder according to source and target domain distributions is more preferable in unsupervised image-to-image translation tasks. Additionally, the Fig 3 (e), (f) are the results of using the AdaIN and Group Normalization (GN) (Wu & He (2018)) respectively, and our methods are showing better results compared to these. 6 \fPublished as a conference paper at ICLR 2020 (a) (b) (c) (d) (e) (f) (g) Figure 4: Visual comparisons on the \ufb01ve datasets. From top to bottom: sel\ufb01e2anime, horse2zebra, cat2dog, photo2portrait, and photo2vangogh. (a)Source images, (b)U-GAT-IT, (c)CycleGAN, (d)UNIT, (e)MUNIT, (f)DRIT, (g)AGGAN Table 1: Kernel Inception Distance\u00d7100\u00b1std.\u00d7100 for ablation our model. Lower is better. There are some notations; GN: Group Normalization, G CAM: CAM of generator, D CAM: CAM of discriminator Model sel\ufb01e2anime anime2sel\ufb01e U-GAT-IT 11.61 \u00b1 0.57 11.52 \u00b1 0.57 U-GAT-IT w/ IN 13.64 \u00b1 0.76 13.58 \u00b1 0.8 U-GAT-IT w/ LN 12.39 \u00b1 0.61 13.17 \u00b1 0.8 U-GAT-IT w/ AdaIN 12.29 \u00b1 0.78 11.81 \u00b1 0.77 U-GAT-IT w/ GN 12.76 \u00b1 0.64 12.30 \u00b1 0.77 U-GAT-IT w/o CAM 12.85 \u00b1 0.82 14.06 \u00b1 0.75 U-GAT-IT w/o G CAM 12.33 \u00b1 0.68 13.86 \u00b1 0.75 U-GAT-IT w/o D CAM 12.49 \u00b1 0.74 13.33 \u00b1 0.89 Also, as shown in Table 1, we demonstrate the performance of the attention module and AdaLIN in the sel\ufb01e2anime dataset through an ablation study using Kernel Inception Distance (KID) (Bi\u00b4 nkowski et al. (2018)). Our model achieves the lowest KID values. 
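Since KID drives both this ablation and the later quantitative comparison, a compact reference computation may be useful. The sketch below assumes Inception features have already been extracted, and follows the cubic polynomial kernel and unbiased MMD^2 estimator of Bińkowski et al. (2018); the repeated-subset averaging usually applied in practice is omitted, and the tables report this value scaled by 100.

```python
import torch

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    # k(x, y) = (x . y / d + coef0) ** degree, with d the feature dimension.
    d = x.shape[1]
    return (x @ y.t() / d + coef0) ** degree

def kid(real_feats, fake_feats):
    """Unbiased MMD^2 estimate between two sets of Inception features.

    real_feats, fake_feats: (m, d) and (n, d) tensors of precomputed features.
    """
    m, n = real_feats.shape[0], fake_feats.shape[0]
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    # Unbiased estimator: drop the diagonal of the within-set kernel sums.
    sum_rr = (k_rr.sum() - k_rr.diagonal().sum()) / (m * (m - 1))
    sum_ff = (k_ff.sum() - k_ff.diagonal().sum()) / (n * (n - 1))
    return sum_rr + sum_ff - 2.0 * k_rf.mean()
```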
Even if the attention module and AdaLIN are used separately, we can see that our models perform better than the others. However, when used together, the performance is even better. 3.3.3 QUALITATIVE EVALUATION For qualitative evaluation, we have also conducted a perceptual study. 135 participants are shown translated results from different methods including the proposed method with source image, and asked to select the best translated image to target domain. We inform only the name of target domain, i.e., animation, dog, and zebra to the participants. But, some example images of target domain are provided for the portrait and Van Gogh datasets as minimum information to ensure proper judgments. Table 2 shows that the proposed method achieved signi\ufb01cantly higher score except for photo2vangogh but comparable in human perceptual study compared to other methods. In Fig 4, we present the image translation results from each method for performance comparisons. U-GAT-IT can generate undistorted image by focusing more on the distinct regions between source 7 \fPublished as a conference paper at ICLR 2020 Table 2: Preference score on translated images by user study. Model sel\ufb01e2anime horse2zebra cat2dog photo2portrait photo2vangogh U-GAT-IT 73.15 73.56 58.22 30.59 48.96 CycleGAN 20.07 23.07 6.19 26.59 27.33 UNIT 1.48 0.85 18.63 32.11 11.93 MUNIT 3.41 1.04 14.48 8.22 2.07 DRIT 1.89 1.48 2.48 2.48 9.70 Table 3: Kernel Inception Distance\u00d7100\u00b1std.\u00d7100 for difference image translation mode. Lower is better. Model sel\ufb01e2anime horse2zebra cat2dog photo2portrait photo2vangogh U-GAT-IT 11.61 \u00b1 0.57 7.06 \u00b1 0.8 7.07 \u00b1 0.65 1.79 \u00b1 0.34 4.28 \u00b1 0.33 CycleGAN 13.08 \u00b1 0.49 8.05 \u00b1 0.72 8.92 \u00b1 0.69 1.84 \u00b1 0.34 5.46 \u00b1 0.33 UNIT 14.71 \u00b1 0.59 10.44 \u00b1 0.67 8.15 \u00b1 0.48 1.20 \u00b1 0.31 4.26 \u00b1 0.29 MUNIT 13.85 \u00b1 0.41 11.41 \u00b1 0.83 10.13 \u00b1 0.27 4.75 \u00b1 0.52 13.08 \u00b1 0.34 DRIT 15.08 \u00b1 0.62 9.79 \u00b1 0.62 10.92 \u00b1 0.33 5.85 \u00b1 0.54 12.65 \u00b1 0.35 AGGAN 14.63 \u00b1 0.55 7.58 \u00b1 0.71 9.84 \u00b1 0.79 2.33 \u00b1 0.36 6.95 \u00b1 0.33 CartoonGAN 15.85 \u00b1 0.69 Model anime2sel\ufb01e zebra2horse dog2cat portrait2photo vangogh2photo U-GAT-IT 11.52 \u00b1 0.57 7.47 \u00b1 0.71 8.15 \u00b1 0.66 1.69 \u00b1 0.53 5.61 \u00b1 0.32 CycleGAN 11.84 \u00b1 0.74 8.0 \u00b1 0.66 9.94 \u00b1 0.36 1.82 \u00b1 0.36 4.68 \u00b1 0.36 UNIT 26.32 \u00b1 0.92 14.93 \u00b1 0.75 9.81 \u00b1 0.34 1.42 \u00b1 0.24 9.72 \u00b1 0.33 MUNIT 13.94 \u00b1 0.72 16.47 \u00b1 1.04 10.39 \u00b1 0.25 3.30 \u00b1 0.47 9.53 \u00b1 0.35 DRIT 14.85 \u00b1 0.60 10.98 \u00b1 0.55 10.86 \u00b1 0.24 4.76 \u00b1 0.72 7.72 \u00b1 0.34 AGGAN 12.72 \u00b1 1.03 8.80 \u00b1 0.66 9.45 \u00b1 0.64 2.19 \u00b1 0.40 5.85 \u00b1 0.31 and target domain by exploiting the attention modules. Note that the regions around heads of two zebras or eyes of dog are distorted in the results from CycleGAN. Moreover, translated results using U-GAT-IT are visually superior to other methods while preserving semantic features of source domain. It is worth noting that the results from MUNIT and DRIT are much dissimilar to the source images since they generate images with random style codes for diversity. Furthermore, it should be emphasized that U-GAT-IT have applied with the same network architecture and hyper-parameters for all of the \ufb01ve different datasets, while the other algorithms are trained with preset networks or hyper-parameters. 
Through the results of user study, we show that the combination of our attention module and AdaLIN makes our model more \ufb02exible. 3.3.4 QUANTITATIVE EVALUATION For quantitative evaluation, we use the recently proposed KID, which computes the squared Maximum Mean Discrepancy between the feature representations of real and generated images. The feature representations are extracted from the Inception network (Szegedy et al. (2016)). In contrast to the Frchet Inception Distance (Heusel et al. (2017)), KID has an unbiased estimator, which makes it more reliable, especially when there are fewer test images than the dimensionality of the inception features. The lower KID indicates that the more shared visual similarities between real and generated images (Mejjati et al. (2018)). Therefore, if well translated, the KID will have a small value in several datasets. Table 3 shows that the proposed method achieved the lowest KID scores except for the style transfer tasks like photo2vangogh and photo2portrait. However, there is no big difference from the lowest score. Also, unlike UNIT and MUNIT, we can see that the source \u2192target, target \u2192source translations are both stable. U-GAT-IT shows even lower KID than the recent attention-based method, AGGAN. AGGAN yields poor performance for the transformation with shape change such as dog2cat and anime2sel\ufb01e unlike the U-GAT-IT, the attention module of which focuses on distinguishing not between background and foreground but differences between 8 \fPublished as a conference paper at ICLR 2020 two domains. CartoonGAN, as shown in the supplementary materials, has only changed the overall color of the image to an animated style, but compared to sel\ufb01e, the eye, which is the biggest characteristic of animation, has not changed at all. Therefore, CartoonGAN has the higher KID. 4" + } + ], + "Taekyung Kim": [ + { + "url": "http://arxiv.org/abs/1905.05396v1", + "title": "Diversify and Match: A Domain Adaptive Representation Learning Paradigm for Object Detection", + "abstract": "We introduce a novel unsupervised domain adaptation approach for object\ndetection. We aim to alleviate the imperfect translation problem of pixel-level\nadaptations, and the source-biased discriminativity problem of feature-level\nadaptations simultaneously. Our approach is composed of two stages, i.e.,\nDomain Diversification (DD) and Multi-domain-invariant Representation Learning\n(MRL). At the DD stage, we diversify the distribution of the labeled data by\ngenerating various distinctive shifted domains from the source domain. At the\nMRL stage, we apply adversarial learning with a multi-domain discriminator to\nencourage feature to be indistinguishable among the domains. DD addresses the\nsource-biased discriminativity, while MRL mitigates the imperfect image\ntranslation. We construct a structured domain adaptation framework for our\nlearning paradigm and introduce a practical way of DD for implementation. Our\nmethod outperforms the state-of-the-art methods by a large margin of 3%~11% in\nterms of mean average precision (mAP) on various datasets.", + "authors": "Taekyung Kim, Minki Jeong, Seunghyeon Kim, Seokeon Choi, Changick Kim", + "published": "2019-05-14", + "updated": "2019-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Object detection is a fundamental problem in computer vision as well as machine learning. 
With the recent advances of the convolutional neural networks (CNNs), CNNbased methods [13, 12, 35, 30, 34, 26, 8, 46, 29] have achieved signi\ufb01cant progress in object detection based on \ufb01ne benchmarks [10, 27, 25]. Despite the promising results, all of these object detectors suffer from the degenerative problem when applied beyond these benchmarks. Building datasets for a speci\ufb01c application can temporarily resolve this problem, nevertheless, the time and monetary costs incurred when manually annotating such datasets are not negligible [40, 33]. Moreover, since the intrinsic causes of the degenerative problem have been avoided instead of resolved, another generalization issue arises when extending the same application to different environments. To adFigure 1. Overview of our learning paradigm. We illustrate a conceptual diagram of the distributions of the domains on the right side. S and T represent for the source and the target domain, respectively, and each Ri represents the ith diversi\ufb01ed domain. dress this issue, an unsupervised domain adaptation method for object detection [3] was recently proposed. Unsupervised domain adaptation has been studied to address the degeneration issue between related domains, which is closely related to the aforementioned degenerative problem. With the rise of the deep neural networks, recent unsupervised deep domain adaptation methods [31, 11, 42, 2, 36, 1, 17] are mainly based on featurelevel adaptation and pixel-level adaptation. Feature-level adaptation methods [31, 11, 42, 2] align the distributions of the source and the target domain toward a cross-domain feature space. These approaches expect the model supervised by the labeled source domain to infer on the target domain effectively. However, the supervision of the inference layer mainly relies on the source domain only in the featurelevel adaptation methods. Thus, the feature extractor of the model is enforced to manufacture the features in a way discriminative for the source domain data, which is not suitable 1 arXiv:1905.05396v1 [cs.CV] 14 May 2019 \ffor the target domain. Moreover, since the object detection data is interwoven with the instances of interest and the relatively unimportant background, it is further hard for the source-biased feature extractor to extract discriminative features for the target domain instances. Thus, object detectors adapted at the feature-level are at risk of the source-biased discriminativity and it can leads to false recognition on the target domain. On the other hand, pixel-level adaptation methods [36, 1, 17] focus on visual appearance translation toward the opposite domain. The model can then take advantage of the information from the translated source images [17, 1] or infer pseudo label of the translated target images [22]. Most existing pixel-level adaptation methods [36, 1, 17] are based on the assumption that the image translator can perfectly convert one domain to the opposite domain such that the translated images can be regarded as those from the opposite domain. However, these methods reveal imperfect translation in many adaptation cases since the performance of the translator heavily depends on the appearance gap between the source and the target domain, as shown in Fig. 2. Regarding these incompletely translated source images as from the target domain can cause new domain discrepancy issue. To tackle the aforementioned limitations, we introduce a novel domain adaptation paradigm for object detection. 
Our learning paradigm consists of Domain Diversi\ufb01cation (DD) and Multi-domain-invariant Representation Learning (MRL), as shown in Fig. 1. Unlike most existing domain adaptation methods, DD intentionally causes several distinctive shifted domains from the source domain to enrich the distribution of the labeled data. On the other hand, MRL boosts the domain invariance of the features by unifying the scattered domains. Using the aforementioned approaches, we propose a universal domain adaptation framework for object detection. Our framework trains domain-invariant object detection layers with diversi\ufb01ed annotated data while simultaneously encouraging dispersed domains toward a common feature space. To demonstrate the effectiveness of our method, we conduct extensive experiments on Realworld Datasets [10], Artistic Media Datasets [22], and Urban Scene Datasets [7, 37] based on Faster R-CNN. Our framework achieves state-of-the-art performance on various datasets. In summary, we have three contributions in our paper: \u2022 We propose a novel learning paradigm for unsupervised domain adaptation. Our learning approach addresses the source-biased discriminativity issue and the imperfect translation issue. \u2022 We structurize our learning paradigm by integrating DD and MRL in the form of a framework. \u2022 We conduct extensive experiments to validate the effectiveness of our method on various datasets. Our (a) Source domain (b) Target domain (c) Translated domain Figure 2. Examples of the imperfect image translation. The \ufb01rst and second rows visualize examples of the translated image from the real-world to artistic media and between urban scenes, respectively. method outperforms the state-of-the-art methods with a large margin by 3% \u223c12% mAP. 2. Related work 2.1. CNN-based Object Detection Traditional methods [44, 9] use a sliding window framework with handcrafted features and shallow inference models. With rise of the convolutional neural networks, RCNN [13] obtains a promising result with a selective search algorithm and classi\ufb01cation through the CNN features. Fast R-CNN [12] reduces the bottleneck of R-CNN by sharing features among regions in the same image. Faster RCNN [35] adopts a fully convolutional network called a Region Proposal Network (RPN) to mitigate another bottleneck caused by the selective search algorithm. YOLO [34] achieves signi\ufb01cant improvement in the inference speed using a single-staged network. SSD [30] uses multi-scale features to enhance the relatively low accuracy of YOLO. RetinaNet [26] further improves the performance of singlestaged object detectors using the focal loss to reduces the performance degradation caused by easy negative examples. While these methods push the limit on the large-scale datasets with rich annotations, generalization errors which arises during their application have not been investigated thus far. 2.2. Unsupervised Domain Adaptation Domain adaptation has been studied intensely in relation to the image classi\ufb01cation task [21, 41]. Traditional methods focus on reducing domain discrepancy through instance re-weighting [21, 41, 14] and shallow feature alignment strategies [16, 32]. With the success of deep learning scheme, early deep domain adaptation mainly arises into Maximum Mean Discrepancy (MMD) minimization [31, 42, 2] or feature confusion through adversarial 2 \f(a) Feature-level adaptation (b) Pixel-level adaptation (c) Domain Diversi\ufb01cation (d) MRL with Domain Diversi\ufb01cation Figure 3. 
Comparison of distribution transformation by different domain adaptation methods. MRL refers to Multi-domain-invariant Representation Learning. S and T denote the source domain and the target domain, respectively. R1, R2, R3, and R4 are shifted domains of the source domain. The arrows indicate the feature-level adaptation trends. The domains with asterisks denote the results of feature-level adaptation. The domains with a boundary imply that the object detection network is supervised by these domains. learning [11]. Recently, as the image-to-image translation has become highlighted with promising results [23, 24, 28, 49] through Generative Adversarial Networks (GANs) [15], pixel-level adaptation methods [36, 20, 1] have been developed to address the domain shift issue by translating source domain images into the target style. As unsupervised domain adaptation attracted considerable interest with its effectiveness, recent works [17, 47, 6, 5, 38, 43, 19, 48] have been attempted to address the generalization issue in the semantic segmentation task. Despite the recent success of unsupervised domain adaptation in various computer vision tasks, unsupervised domain adaptation for the object detection task has not been explored so far except few pioneers [22, 3]. Inoue et al. [22] adopt a conventional unsupervised pixel-level domain adaptation method as part of a two-staged weakly supervised domain adaptation framework. Chen et al. [3] align distributions of the source and the target domain at the image level and instance level to address various causes of the domain shift separately. While these methods address the problem of degeneracy without considering the limitations of existing domain adaptation approaches, we aim to mitigate these issues through a two-step learning paradigm. 3. Methods We propose a novel learning paradigm to alleviate the source-biased discriminativity in feature-level adaptation and the imperfect translation in pixel-level adaptation. We start by explaining the two stages of our method, Domain Diversi\ufb01cation and Multi-domain-invariant Representation Learning. Then, a universal domain adaptation framework for object detection is introduced. Figure 3 shows conceptual description of feature-level adaptation, pixel-level adaptation, and our method. (a) Given image (b) Images with appearance shift Figure 4. Examples of variously shifted images for given images. 3.1. Domain Diversi\ufb01cation Without loss of generality, we assume that there exist numerous possibilities of shifted domains that preserve the corresponding semantic information of the source domain but appear in different ways. For instance, as shown in Fig. 4, we can easily conceive of various visually shifted images from a given image regardless of the existence of a feasible image translator. Along the same line, numerous variations of image translators can achieve considerable domain shift from the given source domain, which we call domain shifters. Domain Diversi\ufb01cation (DD) is a method which diversi\ufb01es the source domain by intentionally generating distinctive domain discrepancy through these domain shifters. The diversi\ufb01ed distribution of the labeled data encourages the model to infer among data with large intraclass variance discriminatively. Thus, the model is enforced to extract semantic features that are not biased to a particular domain. This allows the model to extract unbiased semantic features from the target domain, which is more discriminative than the source-biased features. 
With the better discriminativity of target domain features, we can assimilate the domains with less feature collapse, resulting in more 3 \fdesirable adaptation. Among the plenteous possibilities of domain shifters, inspired by the limitation of pixel-level adaptation, we practically realize the possibilities using the imperfections of the image translation. Let us denote a source domain sample as xs and a target domain sample as xt with domain distributions ps and pt, respectively. In general, image translation methods aim to train a generator G by optimizing the translated image G(xs) to which appears to be sampled from the target domain. However, since the generator network has high enough capacity for various translations, the adversarial loss alone cannot guarantee the conversion of a given xs to the desired target image. To redeem this instability, image translation methods add constraints to the objective function Lim to reduce the possibility of the undesirable generators: Lim(G, D, M) = LGAN(G, D) + \u03b1Lcon(G, M), (1) LGAN(G, D) = Ext\u223cpt(xt)[logD(xt)] + Exs\u223cps(xs)[log(1 \u2212D(G(xs)))], (2) where D is the discriminator for adversarial learning, Lcon(G, M) is the constraint loss with a possibly existing additional module M and \u03b1 is a weight that balances the two losses. Here, the additional module implies a supplemental network necessary for a sophisticated constraint. In this basic setting, we observe that varying the learning trend with alternative constraints causes the generator G to diversify the appearance of the translated images. Based on this observation, we apply several variants of constraints to achieve distinct domain shifters. The objective function for the domain shifter can be written as: LDS(G, D, M) = LGAN(G, D) + \u03b2Lcon(G, M), (3) where Lcon(G, D, M) is the loss for constraints that encourages the domain shifter to be differentiated, M denotes possibly existing additional modules for the constraint loss, and \u03b2 is a weight that balances the two losses. Practical implementation details for diversifying domain shifters will be introduced in section 4.2. 3.2. Multi-domain-invariant Representation Learning In conventional pixel-level adaptation methods, substantial training of the inference layer heavily depends on the translated source images. However, these methods run the risk of imperfect image translation, which can cause another domain shift issue with the target domain. To address this limitation, we design an adversarial learning scheme called Multi-domain-invariant Representation Learning (MRL), which encourages domain-invariant features among the diversely scattered domains through adversarial learning. We assume that we have (n + 2) number of diversi\ufb01ed domains with a pairwise domain gap. For instance, we regard the translated source domain as separate from the source or the target domain and consider the three domains for conventional pixel-level adaptation methods. Most existing feature-level adaptation methods apply adversarial learning through the binary domain discriminator. However, these domains have pairwise domain shifts given by the domain adaptation problem or caused by the imperfect image translation. Thus, regarding multiple domains as the same domain during adversarial learning can fatally disturb the model from learning common features. Thus, we use the discriminator with (n + 2) outputs so as to learn to distinguish the domains using the cross entropy loss. 
Adversarial learning methods attain domain-invariant features by inducing a feature which confuses the domain discriminator. This confusion can be achieved by designating each domain to resemble the other in cross-domain adaptation problems. However, in a multi-domain situation, it is not desirable to specify each domain to resemble each speci\ufb01c target domain. To address this issue, inspired by [11], we attach a gradient reverse layer (GRL) at the frontend of the discriminator. Since the GRL forces the generator to manufacture the features of the given images as if they were not sampled from its domain, the features of each domain are encouraged to be domain-invariant. The objective function for MRL can be written as: Lmrl(xf, Dxf ) = \u2212 n+1 X i=0 X u,v 1{i}(Dxf )log(p(u,v) i (xf)) (4) where xf is the feature map given for the discriminator, 1{i} is the indicator function for a singleton {i}, p(u,v) i is the domain probability for the ith domain of the feature vector located at (u, v) of xf, and Dxf is the ground-truth for the domain label of xf. 3.3. Structured Domain Adaptation framework for Object Detection In this section, we structurize our learning paradigm by integrating DD and MRL into a framework. Without loss of generality, we assume that there is n number of domain shifters Gi for i = 1, ..., n. Our framework aims to learn domain-invariant representation and adapt the object detector for these representations simultaneously. To achieve the goal, every (n + 2) number of domains is utilized for MRL, while the source domain and the shifted domains encourage the localization layers and the classi\ufb01cation layers of the object detector. The objective function for the framework can be written as follows: L(xs, xt, ys) = LMRL(xs, xt) + LLOC(xs, ys) + LCLS(xs, ys), (5) 4 \fFigure 5. The architecture of our domain adaptation framework for object detection. Our framework is built on the object detection network. LMRL(xs, xt) = Lmrl(GBase(xs), 0) + Lmrl(GBase(xt), n + 1) + n X i=1 Lmrl(GBase(Gi(xs)), i), (6) LLOC(xs, ys) = Lloc(xs, ys) + n X i=1 Lloc(Gi(xs), ys), (7) LCLS(xs, ys) = Lcls(xs, ys) + n X i=1 Lcls(Gi(xs), ys), (8) Here, xs and xt are images of the source and the target domain, GBase is the base convolutional network in Fig. 5 and ys is the label information for xs. In addition, Lloc and Lcls denote the regression loss and classi\ufb01cation loss for the given image, respectively. The overall framework is shown in Fig. 5. 4. Experiments 4.1. Datasets We verify the effectiveness of our learning paradigm in two different settings: 1) adaptation from real-world to artistic media; 2) adaptation among urban scenes. Real-world Dataset. PASCAL VOC [10] is a real-world image dataset used for several computer vision tasks. PASCAL VOC 2007 dataset consists of 2,501 train images, 2,510 validation images, and 4,952 test images, while PASCAL VOC 2012 dataset contains 5,717 train images and 5,823 validation images. Annotations are provided for 20 categories. We use train set and validation set on PASCAL VOC 2007 and train set and validation set on PASCAL VOC 2012 as a real-world dataset. Artistic Media Datasets (AMDs). We use Clipart1k, Watercolor2k, and Comic2k [22] for artistic media domains. These datasets are collected from a website called Behance for the image classi\ufb01cation task by [45]. Recently, Inoue et al. [22] notated labels for the object detection task. Each dataset consists of 1,000, 2,000, and 2,000 images, respectively, while half of them are for the test set. 
Urban Street Datasets (USDs). We use Cityscapes [7] and Foggy Cityscapes [37] for urban scene datasets. Both of them consist of 2,975 train images and 500 validation images with 8 categories. Experiment Setup. To validate our method for adaptation tasks from real-world to artistic media, we conduct experiments for Real-world\u2192Clipart1k, Realworld\u2192Watercolor2k, and Real-world\u2192Comic2k. Whole images of each AMD are used for the target domain data during training, while each test set is used for evaluation. For urban scenes, we conduct the experiment for Cityscapes\u2192Foggy Cityscapes. We use Cityscapes train set and Foggy Cityscapes validation set. 4.2. Implementation Details for Domain Shifters To verify the effectiveness of DD, we generated 3 distinct shifted domains for each adaptation task. Under the universality for domain shifter architecture, we adopt the residual generator and the discriminator from CycleGAN [49]. To distinctively shift the source domain, we consider two factors in the objective function, i.e., color preservation and reconstruction. Figure 6 shows the visual differences caused by each con\ufb01guration of the factors. Domain shift considering color preservation: To constraint the domain shifter to preserve color, we adopt the L1 loss between an input image and a translated image. However, since the instability of the training increases as we give the less effective constraint, we only assign the constraint to the target domain for the diverse shift. Thus, the constraint loss for the domain shifter can be written as: Lcon,1(G) = Ex\u223cpt(x)[\u2225(G(x) \u2212x)\u22251]. (9) Domain shift considering reconstruction: To consider the reconstruction, we need one more pair of domain shifter G\u2032 and discriminator D\u2032 for inverse translation. Moreover, we need additional generative adversarial losses necessary for 5 \fFigure 6. Qualitative results for the shifted domains with various con\ufb01gurations of constraint factors. CP and R denote color preservation constraint and reconstruction constraint, respectively. training G\u2032. Thus, the constraint loss for the domain shifter can be written as: Lcon,2(G, G\u2032, D\u2032) = Ex\u223cps(xs)[logD\u2032(xs)] + Ext\u223cpt(xt)[log(1 \u2212D\u2032(G\u2032(x)))] + Exs\u223cps(xs)[\u2225(G\u2032(G(xs)) \u2212xs)\u22251] + Ext\u223cpt(xt)[\u2225(G(G\u2032(xt)) \u2212xt)\u22251]. (10) Domain shift considering both reconstruction and color preservation: To consider two factors simultaneously, we apply the sum of two constraint loss terms with additional modules G\u2032 and D\u2032: Lcon,3(G, G\u2032, D\u2032) = Lcon,1(G) + Lcon,2(G, G\u2032, D\u2032). (11) 4.3. Implementation Details for Object Detection In our experiments, we use Faster R-CNN [35] as our base object detector with VGG-16 [39] pretrained on ImageNet. Each batch consists of (n + 2) images where n is a number of shifted domains. We alleviate the memory issue through gradient accumulation. We train the network for 80k iterations, 50k iteration with a learning rate of 0.001 and the last 30k iterations with a learning rate of 0.0001. All implementations are done in PyTorch and on a single GeForce Titan XP GPU. For PASCAL VOC and AMDs, we resize the images to have a length of 600 pixels as its shorter side. For USDs, we match the shorter side of the image to be a length of 500 pixels. We evaluate mean average precisions (mAP) in the test phase, following the IoU threshold of 0.5 in [22] and [4]. We follow [35] for unspeci\ufb01ed hyper-parameters. 
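To make the MRL component of Secs. 3.2-3.3 concrete alongside the implementation details above, here is a minimal PyTorch sketch of the gradient reverse layer and an (n+2)-way fully convolutional domain classifier trained with the per-location cross entropy of Eq. (4). Only the GRL behaviour and the loss follow the text; the discriminator depth, the 1x1 convolutions, and the 256 hidden channels are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in the
    backward pass, as the GRL placed in front of the discriminator (Sec. 3.2)."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class MultiDomainDiscriminator(nn.Module):
    """Fully convolutional (n + 2)-way domain classifier for MRL."""
    def __init__(self, in_channels, num_domains):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_domains, kernel_size=1),
        )

    def forward(self, feat):
        # Reverse gradients so the base network is pushed toward
        # domain-indistinguishable features.
        return self.net(GradientReversal.apply(feat))  # (N, n + 2, H, W) logits


def mrl_loss(domain_logits, domain_label):
    """Per-location cross entropy of Eq. (4): every spatial position of the
    feature map carries the ground-truth domain index of its image."""
    n, _, h, w = domain_logits.shape
    target = torch.full((n, h, w), domain_label, dtype=torch.long,
                        device=domain_logits.device)
    return F.cross_entropy(domain_logits, target)
```

In a training step following Eq. (6), the source image, each shifted image Gi(xs), and the target image would pass through GBase and be scored with domain labels 0, i, and n+1 respectively, while the detection losses of Eqs. (7)-(8) are computed on the labeled source and shifted images only.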
4.4. Performance Comparison In this section, we compare our method to the stateof-the-art methods (i.e., Domain Adaptive Faster R-CNN (DAF) [3] and Domain Transfer (DT) stage of [22]). For our methods, We apply three shifted domains implemented in section 4.2. Table 1, 2, 3, and Fig. 7 present the comparison results on Faster R-CNN backbone. Our learning paradigm achieves the highest class-wise AP among all methods in all adaptation tasks except table class in Clipart1k, car class in Watercolor2k. and bus class in Cityscapes. Speci\ufb01cally, for the animal classes in AMDs, our proposed method obtains signi\ufb01cantly higher class-wise performance than other methods. To interpret the results in detail, we observe that it is hard to train object detectors with the real-world data to infer discriminatively among animal classes in the artistic media data. However, our learning scheme signi\ufb01cantly improves the performance values for the animal classes. Moreover, our method exceeds the state-of-the-art methods by 3% \u223c12% mAP. Especially for the Real-world \u2192AMD tasks, our method outperforms the state-of-the-art methods by around 9% \u223c12% mAP. These results demonstrate that our method is effective at learning domain-invariant discriminative features and adapting object detection layers to the common feature space, which is further analyzed in section 4.6 and 4.7. Several qualitative results are shown in Fig. 8. 4.5. Ablation Study on Numbers of Shifted Domains We investigate the effectiveness of the DD stage and the MRL stage on different numbers of the shifted domains. We used the Real-world \u2192Clipart1k task as a study case. As shown in Table 4, the overall results of each learning scheme are improved as the number of shifted domains increases. Furthermore, using DD with MRL signi\ufb01cantly boosts the performance for overall cases. It is noteworthy that the improvement in performance through MRL is ampli\ufb01ed as the number of domains increases. These results validate our hypothesis that DD enhances the domain adaptation effect of the following feature-level adaptation by alleviating the source-biased discriminativity. 6 \fMethod aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv mAP Baseline 13.9 51.5 20.4 10.1 29.5 35.1 24.6 3.0 34.7 2.6 25.7 13.3 27.2 47.9 37.5 40.6 4.6 9.1 27.5 40.2 24.9 DT [22] 16.4 62.5 22.8 31.9 44.1 36.3 27.9 0.7 41.9 13.1 37.6 5.2 28.0 64.8 58.2 42.7 9.2 19.8 32.8 47.3 32.1 DAF (Img) [4] 20.0 49.9 19.5 17.0 21.2 24.7 20.0 2.0 30.2 10.5 15.4 3.3 25.9 49.3 32.9 23.6 14.3 5.5 30.1 32.0 22.4 Ours (n=3) 25.8 63.2 24.5 42.4 47.9 43.1 37.5 9.1 47.0 46.7 26.8 24.9 48.1 78.7 63.0 45.0 21.3 36.1 52.3 53.4 41.8 Table 1. Quantitative results for object detection of Clipart1k [22] by adapting from PASCAL VOC [10]. Method V \u2192Wa V \u2192Co Baseline 39.8 21.4 DT [22] 40.0 23.5 DAF (Img) [4] 34.3 23.2 Ours (n=3) 52.0 34.5 Table 2. Quantitative results for object detection of Watercolor2k [22] and Comic2k [22] by adapting from PASCAL VOC [10]. We denote PASCAL VOC, Watercolor2k, and Comic2k as V, Wa, and Co, respectively. (a) Watercolor2k (b) Comic2k Figure 7. Comparison results for the class-wise AP of Watercolor2k test set and Comic2k test set [22]. 4.6. 
Study on Alleviation of the Source-biased Discriminativity To further verify the alleviation of the source-biased discriminativity by DD, we investigate the localization performance of RPN and the classi\ufb01cation accuracy of the Fast R-CNN module on the Faster R-CNN baseline. To compare the positive impact of the domain adaptation methods on the localization capability, We compute mean Intersectionover-Union (mIoU) of the best overlaps predicted from RPN for each instance. The classi\ufb01cation accuracy is evaluated with the target domain instances. To evaluate the inference capability of the classi\ufb01cation layer in the Fast R-CNN module, we provide the ground-truth value for bounding boxes. We conduct the experiments for the Realworld\u2192Clipart1k case. As shown in Table 5, all domain adaptation methods signi\ufb01cantly improve the localization capability of RPN than baseline. However, the domain adaptation methods with Method person rider car truck bus train mcyclebicycle mAP Baseline 17.7 24.7 27.2 12.6 14.8 9.1 14.3 23.2 17.9 DT [22] 25.4 39.3 42.4 24.9 40.4 23.1 25.9 30.4 31.5 DAF (Img) [4] 22.9 30.7 39.0 20.1 27.5 17.7 21.4 25.9 25.7 DAF (Ins) [4] 23.6 30.6 38.6 20.8 40.5 12.8 17.1 26.1 26.3 DAF (Cons) [4] 25.0 31.0 40.5 22.1 35.3 20.2 20.0 27.1 27.6 Ours (n=3) 30.8 40.5 44.3 27.2 38.4 34.5 28.4 32.2 34.6 Table 3. Quantitative results for object detection of Foggy Cityscapes [37] by adapting from Cityscapes [7]. DD Con\ufb01guration DD DD+MRL offset #SD CP R CP + R mAP 0 24.9 1 \u2713 31.2 32.4 +1.2 2 \u2713 \u2713 32.5 37.8 +5.3 3 \u2713 \u2713 \u2713 33.8 41.8 +8.0 Table 4. Results of the ablation study on con\ufb01guration of the shifted domains. DD and MRL denote domain diversi\ufb01cation and multi-domain-invariant representation learning, respectively. The offset denotes the performance improvement of the object detector through MRL. CP, R, CP+R denote the shifted domains trained with color preservation constraint, reconstruction constraint, and both constraints, respectively, and SD denotes shifted domains. DD achieve signi\ufb01canly higher classi\ufb01cation accuracy than the methods without DD. Moreover, even though both DAF and MRL are in a frame of feature-level adaptation, the classi\ufb01cation results of two methods show considerable gap. These results demonstrate the importance of the discriminative feature when adapting the domains in feature level. Furthermore, we can con\ufb01rm our demonstration that featurelevel adaptation suffers from the source-biased discriminativity and DD is effective at alleviating this issue. 4.7. Error Analysis on Top Ranked Detections We analyze detection errors to investigate the positive impact of our method on domain adaptation in details. We study Real-world\u2192Clipart1k case for the analysis. Since the Clipart1k test set only has 500 images, we classify the most con\ufb01dent 1,000 detections for each domain adaptation method. With reference to [18], we categorize the detection results into three groups: correct detection, mislocalization error, and background error. Correct detection denotes correct class with IoU greater than 0.5, mislocalization error 7 \f(a) Input image (b) Baseline (c) DAF (Img) [3] (d) DT [22] (e) Ours (DD) (f) Ours (DD+MRL) (g) Ground-truth Figure 8. Qualitative results for object detection of the AMDs by adapting from PASCAL VOC [10]. Images in the \ufb01rst, second, and third rows are from the test sets of Clipart1k, Watercolor2k, and Comic2k [22], respectively. Best view in color. 
Method Acc (%) mIoU (%) Baseline 30.6 56.5 DAF (Img) 38.0 65.9 Ours (DD) 50.2 66.6 Ours (DD+MRL) 52.5 68.5 Table 5. Comparison results for the instance classi\ufb01cation accuracy of the Fast R-CNN module and mean IoU of RPN for the test set of Clipart1k [22]. Each adaptation method only uses annotations in PASCAL VOC [10]. denotes correct class with IoU between 0.1 and 0.5, and background error denotes wrong class or correct class with IoU less than 0.1, where IoU denotes Intersection-overUnion. As shown in Fig. 9, both DD with and without MRL reduce background detection errors compared to other methods. However, while both reduce background errors, DD with MRL signi\ufb01cantly increases the number of correct detection than DD. 5." + }, + { + "url": "http://arxiv.org/abs/1002.0784v2", + "title": "Solutions in IR modified Horava-Lifshitz Gravity", + "abstract": "In order to allow the asymptotically flat, we consider Ho\\v{r}ava-Lifshitz\ngravity theory with a soft violation of the detailed balance condition and\nobtain various solutions. In particular, we find that such theory coupled to a\nglobal monopole leads to a solution representing a space with deficit solid\nangle, which is well matched with genuine feature of GR.", + "authors": "Taekyung Kim, Chong Oh Lee", + "published": "2010-02-03", + "updated": "2010-06-09", + "primary_cat": "hep-th", + "cats": [ + "hep-th" + ], + "main_content": "Introduction The construction of the ultra-violet(UV) complete theory of gravity has been an intriguing subject of discussions for theoretical physics of the past \ufb01fty years. The discussion has been recently concentrated on the UV complete theory in space and time with an anisotropic scaling in a Lifshitz \ufb01xed point [1, 2, 3, 4, 5]. In particular, this theory is very attractive since pertubative renormalizability is realized as well as Lorentz symmetry is recovered in low energy regime in spite of being broken the Lorentz symmetry in high energy. Ho\u02c7 rava-Lifshitz gravity (HL) has been studied in various directions, which are categorized into two. One is investigating and developing the properties of the HL theory itself [6]\u2013[36]. The other is applying this theory to cosmological framework including the black hole solutions [38]\u2013[49] and their thermodynamic prosperities [50]\u2013[59]. The metric in the (3+1)-dimensional ADM decomposition can be written as ds2 = \u2212N2dt2 + gij \u0000dxi + Nidt \u0001 \u0000dxj + Njdt \u0001 , (1.1) where N(t, xi) denotes the lapse function, gij(t, xi) is the spatial metric, and Ni(t, xi) is the shift function. Then, the Einstein-Hilbert action can be expressed as SEH = 1 16\u03c0G Z d4x\u221agN(KijKij \u2212K2 + R \u22122\u039b), (1.2) where G is Newton\u2019s constant and the extrinsic curvature for a spacelike hypersurface with a \ufb01xed time is Kij \u2261 1 2N ( \u02d9 gij \u2212\u2207iNj \u2212\u2207jNi) . (1.3) Here, a dot denotes a derivative with respect to t and covariant derivatives de\ufb01ned with respect to the spatial metric gij. 
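Because the extraction above flattens the displayed equations, the ADM setup of Eqs. (1.1)-(1.3) is restated here for readability; this is a transcription of the expressions already given, with the notation (amsmath align, index placement) assumed.

```latex
\begin{align}
ds^2 &= -N^2\,dt^2 + g_{ij}\,\bigl(dx^i + N^i dt\bigr)\bigl(dx^j + N^j dt\bigr), \tag{1.1}\\
S_{EH} &= \frac{1}{16\pi G}\int d^4x\,\sqrt{g}\,N\left(K_{ij}K^{ij} - K^2 + R - 2\Lambda\right), \tag{1.2}\\
K_{ij} &\equiv \frac{1}{2N}\left(\dot{g}_{ij} - \nabla_i N_j - \nabla_j N_i\right). \tag{1.3}
\end{align}
```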
The IR-modi\ufb01ed HL action with asymptotically \ufb02at limit is given by [4, 39, 42] SHL = Z dt d3x \u221agN(LIR + LUV), (1.4) LIR = 2 \u03ba2(KijKij \u2212\u03bbK2)+ \u03ba2\u00b52 8(1 \u22123\u03bb) \u0002 (\u039b \u2212\u03c9) R \u22123\u039b2\u0003 , (1.5) LUV = \u2212\u03ba2 2\u03bd4 \u0012 Cij \u2212\u00b5\u03bd2 2 Rij \u0013 \u0012 Cij \u2212\u00b5\u03bd2 2 Rij \u0013 + \u03ba2\u00b52(1 \u22124\u03bb) 32(1 \u22123\u03bb) R2, (1.6) where R and Rij are three-dimensional scalar curvature and Ricci tensor, and the Cotton tensor is given by Cij = \u01ebikl \u221ag\u2207k \u0012 Rj l \u22121 4R\u03b4j l \u0013 . (1.7) 2 \fThe action has parameters, \u03ba, \u03bb, \u03bd, \u00b5, \u039b, and \u03c9. In the limit of vanishing cosmological constant \u039b \u21920, one compares the IR-modi\ufb01ed action (1.4) with the (3+1)-dimensional Einstein-Hilbert action (1.2) and reads the parameter \u03bb, the speed of light c, Newton\u2019s constant G as \u03bb = 1, c2 = \u03ba4\u00b52\u03c9 32 , G = \u03ba2 32\u03c0c. (1.8) Recently, HL gravity coupled to electrostatic \ufb01eld of a point charge is considered and an exact solution is found, describing a space with either a surplus or de\ufb01cit solid angle is found [60]. The surplus angle due to an ordinary matter with positive energy density in [60] is not well matched with known result of GR in which it can usually be materialized by the source of negative mass or energy. However, from cosmological point of view, one \ufb01nds the detailed balance condition leads to obstacles [39, 61]. Furthermore by introducing a soft violation of the detailed balance condition, they show that their results are consistent with them of GR [42]. Thus one intriguing question is whether IR-modi\ufb01ed HL theory coupled to matter \ufb01eld reproduces them of GR. In this paper, we address this question. We consider IR-modi\ufb01ed HL in presence of the global monopole, and \ufb01nd a spherically symmetric solution describing a space with de\ufb01cit solid angle. The paper is organized as follows. In section 2, vacuum solutions are discussed under spherical symmetry. In section 3, we obtain the de\ufb01cit solid angle due to the solution of IR modi\ufb01ed HL gravity with the global monopole. Finally, we give a conclusion. 2 Vacuum Solutions under Spherical Symmetry Let us investigate a spherically symmetric solution with the static metric ansatz ds2 = \u2212F(r)e2\u03c1(r)dt2 + dr2 F(r) + r2(d\u03b82 + sin2 \u03b8d\u03d52). (2.1) Since all the components of Cotton tensor vanish under this metric, the action (1.4) reduces to SHL =4\u03c0 Z \u221e \u2212\u221e dt Z \u221e 0 drr2 e\u03c1 ( \u2212\u03ba2\u00b52 8 \"\u0012F \u2032 r \u00132 + 2 r4 \u0012 1 \u2212F \u2212rF \u2032 2 \u00132# + \u03ba2\u00b52 8(1 \u22123\u03bb) \u00141 \u22124\u03bb r4 (1 \u2212F \u2212rF \u2032)2 + 2(\u039b \u2212\u03c9) r2 (1 \u2212F \u2212rF \u2032) \u22123\u039b2 \u0015 ) = \u03c0\u03ba2\u00b52 2(3\u03bb \u22121) Z dt Z dr e\u03c1\u00d7 ( (1 \u22123\u03bb) \" \u02dc F \u20322 + 2 \u0010 \u02dc F r + \u02dc F \u2032 2 \u00112 # \u2212(1 \u22124\u03bb) \u0010 \u02dc F r + \u02dc F \u2032\u00112 +2(\u039b \u2212\u03c9)r \u0010 \u02dc F r + \u02dc F \u2032\u0011 + 3\u039b2r2 ) , (2.2) 3 \fwhere \u02dc F = F \u22121. 
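Similarly, the IR-modified action and the IR parameter identifications above, Eqs. (1.4)-(1.8), read more clearly when the flattened fractions are restored; again this is a transcription of the expressions given earlier in this section, with script-L notation assumed. The reduced action (2.2) with these ingredients then yields the equations of motion given next.

```latex
\begin{align}
S_{HL} &= \int dt\, d^3x\, \sqrt{g}\,N\,\bigl(\mathcal{L}_{IR} + \mathcal{L}_{UV}\bigr), \tag{1.4}\\
\mathcal{L}_{IR} &= \frac{2}{\kappa^2}\left(K_{ij}K^{ij} - \lambda K^2\right)
   + \frac{\kappa^2\mu^2}{8(1-3\lambda)}\left[(\Lambda - \omega)R - 3\Lambda^2\right], \tag{1.5}\\
\mathcal{L}_{UV} &= -\frac{\kappa^2}{2\nu^4}\left(C_{ij} - \frac{\mu\nu^2}{2}R_{ij}\right)
   \left(C^{ij} - \frac{\mu\nu^2}{2}R^{ij}\right)
   + \frac{\kappa^2\mu^2(1-4\lambda)}{32(1-3\lambda)}\,R^2, \tag{1.6}\\
C^{ij} &= \frac{\epsilon^{ikl}}{\sqrt{g}}\,\nabla_k\!\left(R^{j}{}_{l} - \frac{1}{4}R\,\delta^{j}_{l}\right), \tag{1.7}\\
\lambda &= 1, \qquad c^2 = \frac{\kappa^4\mu^2\omega}{32}, \qquad G = \frac{\kappa^2}{32\pi c}. \tag{1.8}
\end{align}
```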
Then, the equations of motion are obtained as \" (\u03bb \u22121) \u02dc F \u2032 \u22122\u03bb r \u02dc F\u22122(\u039b \u2212\u03c9)r # \u03c1\u2032 + (\u03bb \u22121) \u02dc F \u2032\u2032 \u22122(\u03bb \u22121) r2 \u02dc F = 0, (2.3) (1 \u22123\u03bb) \" \u02dc F \u20322 + 2 \u0010 \u02dc F r + \u02dc F \u2032 2 \u00112 # \u2212(1 \u22124\u03bb) \u0010 \u02dc F r + \u02dc F \u2032\u00112 +2(\u039b \u2212\u03c9)r \u0010 \u02dc F r + \u02dc F \u2032\u0011 + 3\u039b2r2 = 0. (2.4) We start by giving a brief discussion of the asymptotic behaviors of the solutions to Eqs. (2.3) and (2.4). In the low energy regime, taking the \u03bb = 1 and neglecting the quadratic terms in the metric functions, the equations (2.3)\u2013(2.4) reduce to the Einstein equations, which reproduce Schwarzchild solution in the limit \u039b \u21920 as we expect rd\u03c1 dr = 0, \u2212 \u2192 \u03c1(r) = \u03c10 = 0, (2.5) d dr (rF) = 1, \u2212 \u2192 F(r) = 1 \u2212M r , (2.6) where M is an integration constant. For su\ufb03ciently large r at asymptotic region, it is assumed that the divergence of F(r) arises as a power behavior. A straightforward calculation with Eq. (2.4) leads to F(r) \u2248 \u001a (I) (\u03c9 \u2212\u039b)r2 \u2212 p \u03c9(\u03c9 \u22122\u039b) r2 for arbitrary \u03bb (II) FIRrp for \u03bb > 1 , (2.7) where the coe\ufb03cient FIR is an undetermined constant and p = 2\u03bb + p 2(3\u03bb \u22121) \u03bb \u22121 . (2.8) It is shown that the behavior of the long distance in (I) without a cosmological constant agrees with that of the leading IR behavior in (2.4). The long distance behavior in (II) seems to imply a new possible solution which comes from higher derivative terms. For su\ufb03ciently small r at the UV regime, assuming the divergence of B(r) follows as power behavior F(r) \u223c\u03b2 rl , (\u03b2 = constant, l > 0), (2.9) the leading term in Eq. (2.4) is proportional to 1/r2l+2. The contribution to the correction term due to the soft violation of the detailed balance condition in Eq. (2.4) can be neglected since such contribution is proportional to 1/rl. Thus, the leading UV behavior in IR modi\ufb01ed HL theory is exactly the same as that in HL theory. The allowed powers for various \u03bb are given as 4 \fF(r) \u2248 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 (A) 1 for arbitrary \u03bb (B) b for \u03bb = 1 2 (C) FUV+rp or FUV\u2212rq for 1 3 \u2264\u03bb < 1 2 (D) FUV+rp for 1 2 < \u03bb < 1 , (2.10) where b denotes an integration constant, BUV\u00b1 are undetermined constants, p is given in (2.7), and q is q = 2\u03bb \u2212 p 2(3\u03bb \u22121) \u03bb \u22121 . (2.11) We show that we \ufb01nd new exact vacuum solutions and discuss how they connect two asymptotes with various value of \u03bb. For arbitrary \u03bb, a solution to the equations (2.3)\u2013(2.4) obtained as F = 1 + (\u03c9 \u2212\u039b)r2 \u2212 p \u03c9(\u03c9 \u22122\u039b) r2, \u03c1 = \u03c10 = 0, (2.12) which connects (I) and (A). For \u03bb = 1/3, another static exact solution is F = 1 + (\u03c9 \u2212\u039b)r2 \u2212 p \u03c9(\u03c9 \u22122\u039b) r2 \u2212M r , \u03c1 = \u03c10 = 0, (2.13) which reproduces AdS Schwarzschild black hole solution with twice cosmological constant for \u03c9 = 0. This result in IR modi\ufb01ed HL theory agrees with that in HL [38, 60]. 
For \lambda = 1, the known exact solution is obtained by [18]
F = 1 + (\omega-\Lambda) r^2 - \sqrt{\omega(\omega-2\Lambda) r^4 + c\, r}, \qquad \rho = \rho_0 = 0, (2.14)
where c is an integration constant. This solution also connects (I) and (A). In contrast to the exact vacuum solutions in HL theory [38, 60], it is not clear how the asymptotics (2.7) and (2.10) are connected, since there seem to be no exact solutions in IR-modified HL theory other than the solutions (2.12)–(2.14); that is, there do not exist exact solutions covering the whole range of \lambda for \lambda \ge 1/3. This presumably implies that the vacuum solutions of IR-modified HL theory do not always follow a power behavior. Horizons and singularities in HL gravity have been discussed in previous work [60]. However, we do not deal with them here, since HL theory does not have full diffeomorphism invariance and both concepts are not easy to discern [47].
3 Global Monopole Solution
In the presence of a matter field, the matter sector is described by the action
S_m = \int dt\, d^3x\, \sqrt{g}\, N\, \mathcal{L}_m(N, N_i, g_{ij}) (3.15)
= 4\pi \int_{-\infty}^{\infty} dt \int_0^{\infty} dr\, r^2 e^{\rho}\, \mathcal{L}_m(F, \rho). (3.16)
Then, the equations of motion are given by
\left[(\lambda-1)\tilde{F}' - \frac{2\lambda}{r}\tilde{F} - 2(\Lambda-\omega) r\right]\rho' + (\lambda-1)\tilde{F}'' - \frac{2(\lambda-1)}{r^2}\tilde{F} = \frac{8(1-3\lambda) r^2}{\kappa^2\mu^2}\, \frac{\partial \mathcal{L}_m}{\partial F}, (3.17)
(1-3\lambda)\left[\tilde{F}'^2 + 2\left(\frac{\tilde{F}}{r} + \frac{\tilde{F}'}{2}\right)^2\right] - (1-4\lambda)\left(\frac{\tilde{F}}{r} + \tilde{F}'\right)^2 + 2(\Lambda-\omega) r\left(\frac{\tilde{F}}{r} + \tilde{F}'\right) + 3\Lambda^2 r^2 = \frac{8(1-3\lambda) r^2}{\kappa^2\mu^2}\left(\mathcal{L}_m + \frac{\partial \mathcal{L}_m}{\partial \rho}\right). (3.18)
When we consider a global monopole of the O(3) linear sigma model or a magnetic monopole of U(1) gauge theory in the HL-type field theory, the long-distance behavior of the Lagrangian density in the IR regime must be proportional to 1/r^n irrespective of the value of z (see Ref. [60] for more details):
\frac{\partial \mathcal{L}_m}{\partial F} \approx 0, \qquad \mathcal{L}_m + \frac{\partial \mathcal{L}_m}{\partial \rho} \approx -\frac{\gamma}{r^n}, \qquad (n = 0, 1, 2, \ldots), (3.19)
where the constant \gamma is determined by the explicit Lagrangian and the monopole configuration of interest. Positive \gamma can be read off from the energy-momentum tensor of the matter fields, and n must be a positive integer in order to obtain a finite energy. A straightforward calculation with Eqs. (3.17) and (3.18) leads to
F = 1 + \left[(\omega-\Lambda) \pm \sqrt{\omega(\omega-2\Lambda)}\right] r^2 + \frac{8(n-3)\gamma}{n^2\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}}\, r^{2-n}, (3.20)
\rho = (2n-3)\ln(r/r_0) + \left(\frac{3}{n} - 2\right)\ln\!\left[\frac{8\gamma(n-3)^2}{\kappa^2\mu^2} - \omega(\omega-2\Lambda)\, n^3 r^n\right], (3.21)
for n \ne 3 and \lambda = (n^2 - 4n + 6)/n^2. In particular, in the case of n = 3, a solution exists only for \lambda = 1/3. The matter contributions then vanish in (3.17) and (3.18) when \lambda = 1/3, and therefore such a solution exactly goes back to the vacuum solution (2.13).
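The n = 3 remark above follows directly from the prefactor of the matter sources in (3.17)–(3.18); writing out this one line (added here for clarity):

```latex
\left.\frac{8(1-3\lambda)\,r^2}{\kappa^2\mu^2}\right|_{\lambda = 1/3} = 0
\;\;\Longrightarrow\;\;
\text{the source terms on the right-hand sides of (3.17)--(3.18) vanish,}
% so the n = 3 monopole solution coincides with the vacuum solution (2.13).
```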
One also finds a special solution for \lambda = 1,
F = 1 + (\omega-\Lambda) r^2 \pm \sqrt{\omega(\omega-2\Lambda) r^4 + f\, r + \frac{16\gamma}{(3-n)\kappa^2\mu^2}\, r^{4-n}}, \qquad \rho = \rho_0 = 0, \qquad (n \ne 3) (3.22)
F = 1 + (\omega-\Lambda) r^2 \pm \sqrt{\omega(\omega-2\Lambda) r^4 + f\, r + \frac{16\gamma}{\kappa^2\mu^2}\, r \ln r}, \qquad \rho = \rho_0 = 0, \qquad (n = 3) (3.23)
with an integration constant f. Let us study the details of the global monopole solution. The O(3) sigma model action is taken as
S_{O(3)} = \int d^4x\, \sqrt{-g_4}\left(-\frac{g^{00}}{2}\, \partial_0\psi^a \partial_0\psi^a - V\right), (3.24)
where \psi^a (a = 1, 2, 3) denote scalar fields and g^{00} = 1/N^2. For simplicity, we assume ordinary quadratic spatial derivatives and a quartic-order self-interaction,
V(\psi^a, \partial_i\psi^a, \ldots) = -\frac{g^{ij}}{2}\, \partial_i\psi^a \partial_j\psi^a - \frac{\lambda_m}{4}(\psi^2 - v^2)^2, \qquad \psi^2 \equiv \psi^a\psi^a. (3.25)
For anisotropic scaling z = 1 (n = 2), the IR action (3.24) is
S_{O(3)} = 4\pi \int_{-\infty}^{\infty} dt \int_0^{\infty} dr\, r^2 e^{\rho}\left[-\frac{F}{2}\, \psi'^2 - \frac{\psi^2}{r^2} - \frac{\lambda_m}{4}(\psi^2 - v^2)^2\right], (3.26)
and, under the hedgehog ansatz
\psi^a = \hat{r}^a\, \psi(r) = (\sin\theta\cos\phi,\; \sin\theta\sin\phi,\; \cos\theta)\, \psi(r), (3.27)
it leads to
\frac{\partial \mathcal{L}_m}{\partial F} = -\frac{1}{2}\, \psi'^2, (3.28)
\mathcal{L}_m + \frac{\partial \mathcal{L}_m}{\partial \rho} = -\frac{F}{2}\, \psi'^2 - \frac{\psi^2}{r^2} - \frac{\lambda_m}{4}(\psi^2 - v^2)^2. (3.29)
Two boundary conditions are imposed by requiring single-valuedness of the field at the monopole position and finite energy at spatial infinity:
\psi(0) = 0, \qquad \psi(\infty) = v. (3.30)
From the boundary conditions, one can take the following configuration:
\psi(r) = \begin{cases} 0 & \text{for } r \le \frac{1}{v\sqrt{\lambda_m}} \\ v & \text{for } r > \frac{1}{v\sqrt{\lambda_m}} \end{cases}, (3.31)
which means that the scalar field \psi(r) has vacuum expectation value zero inside the monopole core and v outside, respectively. Therefore, the field equations (3.28)–(3.29) near the vacuum reduce to
\frac{\partial \mathcal{L}_m}{\partial F} \approx 0, \qquad \mathcal{L}_m + \frac{\partial \mathcal{L}_m}{\partial \rho} \approx -\frac{v^2}{r^2}. (3.32)
In particular, \gamma in (3.19) is given as v^2 for n = 2. Then, the metric functions are obtained as
F = 1 + \left[(\omega-\Lambda) \pm \sqrt{\omega(\omega-2\Lambda)}\right] r^2 - \frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}}, (3.33)
\rho = \ln(r/r_0) - \frac{1}{2}\ln\!\left[\frac{v^2}{\kappa^2\mu^2} - \omega(\omega-2\Lambda)\, r^2\right], (3.34)
which leads to
ds^2 = -\frac{1 + \left[(\omega-\Lambda) \pm \sqrt{\omega(\omega-2\Lambda)}\right] r^2}{\frac{v^2}{\kappa^2\mu^2} + \left[-\omega(\omega-2\Lambda) + \frac{2v^2}{\kappa^2\mu^2}\sqrt{\omega(\omega-2\Lambda)}\right] r^2}\, dt^2 + \frac{dr^2}{1 + \left[(\omega-\Lambda) \pm \sqrt{\omega(\omega-2\Lambda)}\right] r^2} + r^2\left(1 - \frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}}\right)(d\theta^2 + \sin^2\theta\, d\phi^2), (3.35)
after rescaling the coordinates,
dt \to \left(1 - \frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}}\right)^{-1} r_0\, dt, \qquad dr \to \sqrt{1 - \frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}}}\; dr. (3.36)
The metric (3.35) describes a space with a deficit solid angle [62, 63, 60],
4\pi\Delta = \frac{8\pi v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}}, \qquad \text{for } 0 < \frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}} < 1. (3.37)
In (3.35) a black hole horizon is formed at
r_H = \sqrt{\frac{\frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}} - 1}{(\omega-\Lambda) \pm \sqrt{\omega(\omega-2\Lambda)}}}
\qquad \text{for } \frac{2v^2}{\kappa^2\mu^2\sqrt{\omega(\omega-2\Lambda)}} \ge 1.
(3.38) These results show two genuine features of GR; there does not exist a surplus but de\ufb01cit solid angle and a source which gives rise to de\ufb01cit angle is not an electric \ufb01eld but a scalar \ufb01eld. In this section, we concentrate on investigating a solid angle in low energy limit. One can also examine other issues such as a potential in the UV action and energy con\ufb01gurations near the Li\ufb01shitz \ufb01xed point as in [60]. 8 \f4" + } + ], + "Hwan Heo": [ + { + "url": "http://arxiv.org/abs/2302.01571v1", + "title": "Robust Camera Pose Refinement for Multi-Resolution Hash Encoding", + "abstract": "Multi-resolution hash encoding has recently been proposed to reduce the\ncomputational cost of neural renderings, such as NeRF. This method requires\naccurate camera poses for the neural renderings of given scenes. However,\ncontrary to previous methods jointly optimizing camera poses and 3D scenes, the\nnaive gradient-based camera pose refinement method using multi-resolution hash\nencoding severely deteriorates performance. We propose a joint optimization\nalgorithm to calibrate the camera pose and learn a geometric representation\nusing efficient multi-resolution hash encoding. Showing that the oscillating\ngradient flows of hash encoding interfere with the registration of camera\nposes, our method addresses the issue by utilizing smooth interpolation\nweighting to stabilize the gradient oscillation for the ray samplings across\nhash grids. Moreover, the curriculum training procedure helps to learn the\nlevel-wise hash encoding, further increasing the pose refinement. Experiments\non the novel-view synthesis datasets validate that our learning frameworks\nachieve state-of-the-art performance and rapid convergence of neural rendering,\neven when initial camera poses are unknown.", + "authors": "Hwan Heo, Taekyung Kim, Jiyoung Lee, Jaewon Lee, Soohyun Kim, Hyunwoo J. Kim, Jin-Hwa Kim", + "published": "2023-02-03", + "updated": "2023-02-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.GR", + "cs.LG" + ], + "main_content": "Introduction A great surge in neural rendering has emerged in the last few years. Speci\ufb01cally, the Neural Radiance Fields (Mildenhall et al., 2020) (NeRF) has shown remarkable performances in the novel view synthesis. NeRF leverages a fully connected network to implicitly encode a 3D scene as a continuous signal and renders novel views through a differentiable volume rendering. However, when the rendering of NeRF performs, a large number of inferences are inevitable, making the computational burden of training and evaluation heavier. Aware of this problem, related works have circumvented the 1Department of Computer Science, Korea University, Republic of Korea 2NAVER AI Lab, Republic of Korea 3AI Institute of Seoul National University, Republic of Korea. Correspondence to: JinHwa Kim and Hyunwoo J. Kim . PSNR (\u2191) 33 34 35 36 Training Time Per Iteration (ms) 0 5 10 15 20 25 30 Figure 1. Illustration about the proposed method. w h x h(x) 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 Robust Camera Pose Re\ufb01nement for Multi-Resolution Hash Encoding decoding MLP m(y; \u03c6) predicts the density and the nonLambertian color along the ray. All trainable parameters are updated via photometric loss L between the rendered ray and the ground-truth color. 
Figure 1. Gradient smoothing (a) → (b) to attenuate the gradient fluctuation (jiggled red arrow) of the hash encoding h. For the camera pose refinement, the error back-propagation passes through the d-linear interpolation weight w; however, its derivative is determined by the sign of the relative position of the input coordinate x to the corners of the hash grid. The gradient fluctuation from this makes it difficult to converge. Please refer to Sec. 3 for the implementation details and the definitions of other symbols.
shortcomings by introducing grid-based approaches (Liu et al., 2020; Yu et al., 2021a; Hedman et al., 2021; Sun et al., 2022; Wu et al., 2021; Sara Fridovich-Keil and Alex Yu et al., 2022; Karnewar et al., 2022), which store view-direction-independent representations in dense grids. While these methods explicitly encode the whole scene simultaneously, they face a trade-off between the computational cost of the model size and its performance. Therefore, delicate training strategies such as pruning or distillation are often required to preserve the view-synthesis quality and reduce the model size. Recently, Instant-NGP (Müller et al., 2022) addressed these problems by proposing multi-resolution hash encoding for positional encoding, which combines a multi-resolution decomposition with a lightweight hash grid. The multi-resolution hash encoding achieved state-of-the-art performance and the fastest convergence speed of NeRF.
Despite the impressive performance of multi-resolution hash encoding, the volume rendering procedure (emission-absorption ray casting (Kajiya & Herzen, 1984)) used in Instant-NGP depends largely on accurate camera poses. This method samples the points along the ray defined by a direction and an origin, which are determined by the camera pose. However, accurate camera poses might be unavailable in real-world scenarios, so most existing works utilize an off-the-shelf algorithm such as Structure-from-Motion (SfM), or COLMAP (Schönberger & Frahm, 2016). The previous works (Wang et al., 2021c; Jeong et al., 2021; Lin et al., 2021) have attempted to resolve this issue by jointly optimizing camera poses and scene representations with the original NeRF. However, applying this approach to multi-resolution hash encoding leads to severe deteriorations in pose refinement and scene representation.
Based on the gradient analysis of the naive joint optimization of pose parameters and multi-resolution hash encodings, we demonstrate that the non-differentiability of the hash function and the discontinuity of the d-linear weights as a function of the input coordinate lead to the fluctuation in the Jacobian of the multi-resolution hash encodings. We investigate a novel learning strategy for jointly optimizing the camera pose parameters and the other parameters when the camera poses are noisy or unknown, utilizing the outstanding performance of multi-resolution hash encoding. Given that, we propose to use a non-linear activation function in our straight-through estimator for smooth gradients in the backward pass, consistently maintaining the d-linear interpolation in the forward pass (ref. Figure 1).
Moreover, we propose the multi-level learning rate scheduling that regulates the convergence speed of each level-wise encoding. We also empirically show that a small decoder compared to the size of the hash table (M\u00a8 uller et al., 2022) converges to suboptimal when the camera poses are noisy. The ablation studies on the depth and wide of the decoding networks and the core components of the proposed learning framework \ufb01rmly validate our proposed method for robust camera pose re\ufb01nement for multi-resolution hash encoding. In summary, our contributions are three-fold: \u2022 We analyze the derivative of the multi-resolution hash encoding, and empirically show that the gradient \ufb02uctuation negatively affects the pose re\ufb01nement. \u2022 We propose an ef\ufb01cient learning strategy jointly optimizing multi-resolution hash encoding and camera poses, leveraging the smooth gradient and the curriculum learning for coarse-to-\ufb01ne adaptive convergences. \u2022 Our method achieves state-of-the-art performance in pose re\ufb01nement and novel-view synthesis with a faster learning speed than competitive methods. 2. Related Work 2.1. Neural Rendering Mildenhall et al. (2020) \ufb01rst introduced the Neural Radiance Fields (NeRF) which parameterizes 3D scenes using neural networks. They employed a fully differentiable volume rendering procedure and a sinusoidal encoding to reconstruct high-\ufb01delity details of the scene representations. The necessity of sinusoidal encoding was examined from the perspectives of kernel regression (Tancik et al., 2020), or the hierarchical structure of a natural scene reconstruction task (Landgraf et al., 2022). Subsequently, in order to improve the reconstruction quality of NeRF, various modi\ufb01cations have been proposed such as replacing the ray casting with anti-aliased cone tracing (Barron et al., 2021), disentangling foreground and background models through non-linear sampling algorithms (Zhang et al., 2020; Neff et al., 2021; Barron et al., 2022), or learning implicit surface instead of the volume density \ufb01eld, e.g., signed distance function (Oechsle et al., 2021; Wang et al., 2021b; Yariv et al., 2021). Also, there are several applicative studies with decomposition of NeRF (Pumarola et al., 2021; Martin-Brualla et al., 2021; Srinivasan et al., 2021; Boss et al., 2021; Rebain et al., 2021; Park et al., 2021), composition with generative works (Schwarz et al., 2021; Niemeyer & Geiger, 2021; Wang et al., 2021a; Jain et al., 2022), or few-shot learning (Yu et al., 2021b; Jain et al., 2021; Rebain et al., 2022; Xu et al., 2022; Chibane et al., 2021; Wei et al., 2021; Chen et al., 2021). 2.2. Accelerating NeRF One crucial drawback of NeRF is its slow convergence and rendering speed. To accelerate the training speed of NeRF, previous works have combined grid-based approaches which store view-direction-independent information on voxel grids. Liu et al. (2020) introduce a dense feature grid to reduce the computation burden of NeRF and progressively prunes the dense grids. The other works pre-compute and store a trained NeRF to the voxel grid, increasing rendering speed (Yu et al., 2021a; Hedman et al., 2021). On the other hand, rather than distilling the trained NeRF to voxel grids, direct learning of features on the voxel has been proposed (Sun et al., 2022; Wu et al., 2021; Wang et al., 2022; Sara Fridovich-Keil and Alex Yu et al., 2022). 
While these methods have been successful in achieving near real-time neural rendering, they also come with drawbacks such as the increased model size and lower reconstruction quality caused by pre-storing the scene representation. To overcome these limitations, M\u00a8 uller et al. (2022) recently proposed Instant-NGP, which utilizes spatial hash functions and multi-resolution grids to approximate dense grid features and maximizes the hierarchical properties of 3D scenes. \fRobust Camera Pose Re\ufb01nement for Multi-Resolution Hash Encoding This approach allows for state-of-the-art performance and the fastest convergence speed simultaneously. 2.3. NeRF with Pose Re\ufb01nement For the majority of neural rendering, it is crucial to have accurate camera intrinsic and extrinsic parameters. In an effort to address this issue, Yen-Chen et al. (2021) proposed a method for combining pose estimation and NeRFs by utilizing an inverted trained NeRF as an image-to-camera pose model. Subsequently, various methods for jointly optimizing camera pose parameters and 3D scene reconstruction have been proposed. Wang et al. (2021c) proposed a joint optimization problem in which the camera pose is represented as a 6-degree-of-freedom (DoF) matrix and optimized using a photometric loss. Building upon this, Xia et al. (2022) proposed a method that replaces ReLU-based multi-layer perceptrons (MLPs) with sine-based MLPs and employs an ef\ufb01cient ray batch sampling. In addition to directly optimizing camera parameters, geometric-based approaches (Jeong et al., 2021; Lin et al., 2021; Chng et al., 2022) have also been suggested. For example, Lin et al. (2021) proposed BARF, which optimizes the warping matrix of the camera pose with a standard error back-propagation algorithm, utilizing curriculum training to adjust the spectral bias of the scene representation. Unlike previous methods based on the original NeRF structure, our method is designed for grid-based approaches, especially for multi-resolution hash encoding, which shows outstanding performance in novel-view synthesis and its training speed. The common NeRF structure and its variants are prone to slowly converge, but our method can be converged signi\ufb01cantly faster with state-of-the-art reconstruction performance under the circumstance of noisy or unknown camera poses. 3. Method As mentioned in Section 1, we observed that a na\u00a8 \u0131ve error back-propagation for the camera pose re\ufb01nement with multiresolution hash encoding leads to inferior results compared to the use of sinusoidal encoding, (e.g., Jeong et al., 2021; Lin et al., 2021). To further understand the observation, we analyze the derivative of the multi-resolution hash encoding (Section 3.1). We point out that the gradient \ufb02uctuation of the multi-resolution hash encoding makes it dif\ufb01cult to learn the pose re\ufb01nement and scene reconstruction jointly (Section 3.2). To address these, we propose a method for calibrating inaccurate camera poses in multi-resolution hash encoding (Section 3.3). Additionally, we \ufb01nd that the multilevel decomposition of a scene induces the different convergence rates of multi-level encoding, which results in limited camera pose registration (Section 3.4). 3.1. Multi-Resolution Hash Encoding This section describes the multi-resolution hash encoding presented by M\u00a8 uller et al. (2022), which we focus on. 3.1.1. 
MULTI-RESOLUTION HASH ENCODING
As the combination of the multi-resolution decomposition and the grid-based approach with a hashing mechanism, multi-resolution hash encoding is defined as a learnable mapping of an input coordinate x \in R^d to a higher dimension. The trainable encoding is learned as multi-level feature tables independent of each other. The feature tables H = {H_l | l \in {1, ..., L}} are assigned to the L levels, and each table contains T trainable feature vectors with dimensionality F. Each level consists of d-dimensional grids where each dimension has N_l sizes considering multi-resolution. The number of grids for each size grows exponentially from the coarsest N_min to the finest resolution N_max. Therefore, N_l is defined as follows:
b := \exp\!\left(\frac{\ln N_{\max} - \ln N_{\min}}{L - 1}\right), (1)
N_l := \lfloor N_{\min} \cdot b^{l-1} \rfloor. (2)
For a given specific level l, an input coordinate x is scaled by N_l, i.e., x_l := x \cdot N_l, and a grid spans a unit hypercube whose diagonal vertices are \lfloor x_l \rfloor and \lceil x_l \rceil. Then, each vertex is mapped into an entry in the level's respective feature table. Notice that, for coarse levels where the total number of vertices of the grid is fewer than T, each vertex corresponds one-to-one to a table entry. Otherwise, each vertex corresponds to the element of the l-th table H_l whose table index is the output of the following spatial hash function (Teschner et al., 2003):
h(x) = \left(\bigoplus_{i=1}^{d} x_i \pi_i\right) \bmod T, (3)
where \oplus denotes bitwise XOR and \pi_i are unique and large prime numbers. In each level, the 2^d feature vectors of the hypercube are d-linearly interpolated according to the relative position of x. The interpolation enables us to get the gradient of the table entry, since the interpolating weights are a function of x. We will revisit this in the following section for analysis. The output y of the multi-resolution hash encoding is the concatenation of the entire level-wise interpolated features, and its dimensionality is L × F. For simplicity, we denote it as y = f(x; \theta) with its trainable parameter \theta. Similar to the other neural renderings using differentiable volume rendering (emission-absorption ray casting), the decoding MLP m(y; \phi) predicts the density and the non-Lambertian color along the ray. All trainable parameters are updated via photometric loss L between the rendered ray and the ground-truth color.
3.1.2. DERIVATIVE OF MULTI-RESOLUTION HASH ENCODING
For the gradient analysis, we derive the derivative of the multi-resolution hash encoding with respect to x. Let c_{i,l}(x) denote the corner i of the level-l resolution grid in which x_l is located, and let h_l(\cdot) represent the hash function for the l-th level, as defined in Eq. (3). Next, consider a function h_l : R^d \to R^F whose output is the l-th interpolated feature vector over the 2^d corners, given by
h_l(x) = \sum_{i=1}^{2^d} w_{i,l} \cdot H_l\!\left(h_l\!\left(c_{i,l}(x)\right)\right), (4)
where w_{i,l} denotes the d-linear weight, which is defined by the opposite volume in a unit hypercube with the relative position of x:
w_{i,l} = \prod_{j=1}^{d}\left(1 - |x_l - c_{i,l}(x)|_j\right), (5)
where the index j indicates the j-th dimension of the vector. We can redefine the multi-resolution hash encoding vector y as follows:
y = f(x; \theta) = \left[h_1(x); \ldots; h_L(x)\right] \in R^{F'}, (6)
where the dimension of the output vector is F' = L × F after the concatenation. The Jacobian \nabla_x h_l(x) \in R^{F \times d} of the l-th interpolated feature vector h_l(x) with respect to x can be derived using the chain rule as follows:
\nabla_x h_l(x) = \left[\frac{\partial h_l(x)}{\partial x_1}, \ldots, \frac{\partial h_l(x)}{\partial x_d}\right] = \sum_{i=1}^{2^d} H_l\!\left(h_l\!\left(c_{i,l}(x)\right)\right) \cdot \nabla_x w_{i,l}, (7)
where H_l(h_l(c_{i,l}(x))) is not differentiable with respect to x and the k-th element of \nabla_x w_{i,l} \in R^{1 \times d} is defined by
\frac{\partial w_{i,l}(x)}{\partial x_k} = s_k \cdot \prod_{j \ne k}\left(1 - |x_l - c_{i,l}(x)|_j\right), (8)
where s_k denotes s_k = \mathrm{sign}\!\left(|c_{i,l}(x) - x_l|_k\right). As seen in Eq. (7), the Jacobian \nabla_x h_l(x) is the weighted sum of the hash table entries corresponding to the nearby corners of x. However, the gradient \nabla_{x_k} w_{i,l} is not continuous at the corners due to the variable s_k, causing the direction of the gradient to flip. This oscillation of the gradient \nabla_x w_{i,l} is the source of gradient fluctuation, independently from H. For a detailed analysis of the derivatives and further discussion, please refer to Appendix A.1.
3.2. Camera Pose Refinement
A camera pose can be represented as a transformation matrix from the camera coordinate to the world coordinate. Let us denote the camera-to-world transformation matrix as [R|t] \in SE(3), where R \in SO(3) and t \in R^{3\times1} are the rotation matrix and translation vector, respectively.
3.2.1. POSE REFINEMENT WITH THE SINUSOIDAL ENCODING
The pose refinement using error back-propagation in neural rendering jointly optimizes the 6-DoF pose parameters and the neural scene representation through the differentiable volume rendering:
\phi^*, \psi^* = \arg\min_{\phi, \psi} L(I, \hat{I}; \phi, \psi), (9)
where \phi and \psi denote model parameters and trainable camera parameters, and \hat{I} and I denote the reconstructed color and its ground-truth color, respectively. Note that, to our knowledge, all previous works (Yen-Chen et al., 2021; Wang et al., 2021c; Xia et al., 2022; Jeong et al., 2021; Lin et al., 2021; Chng et al., 2022) on pose refinement in neural rendering utilize an encoding that is fully differentiable with respect to the input coordinate (e.g., sinusoidal or identity). However, they have limited performance compared to multi-resolution hash encoding (Müller et al., 2022).
3.2.2. POSE REFINEMENT WITH MULTI-RESOLUTION HASH ENCODING
Now, we present the optimization problem of pose refinement with multi-resolution hash encoding. Based on Eq. (9), we also directly optimize the camera pose parameters with multi-resolution hash encoding,
\phi^*, \theta^*, \psi^* = \arg\min_{\phi, \theta, \psi} L(I, \hat{I}; \phi, \theta, \psi), (10)
where \theta is a trainable parameter for multi-resolution hash encoding, i.e., the entries of the hash tables H_l. However, we observe that the pose refinement and reconstruction quality from the above optimization problem is much worse than the previous works (refer to (e) of Table 3). To explain the poor performance, we assume that the gradient fluctuation of Eq. (10), or Eq. (8), negatively affects pose refinement. Since the input coordinate x is defined as a rigid transformation of the camera pose [R|t] and the image coordinate (projected in homogeneous space z = −1), the gradient fluctuation propagates through the gradient-based updates of the camera poses.
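To make Eqs. (1)–(6) concrete, the following is a minimal NumPy sketch of one forward pass of the multi-resolution hash encoding: per-level scaling, the spatial hash of Eq. (3), the d-linear weights of Eq. (5), and the level-wise concatenation of Eq. (6). It is an illustrative reading of the equations above rather than the authors' CUDA/tiny-cuda-nn implementation; the prime constants, table size, level count, and function names are assumed toy values.

```python
import itertools
import numpy as np

PRIMES = (1, 2654435761, 805459861)  # assumed primes for 3D spatial hashing

def spatial_hash(corner, table_size):
    """Eq. (3): XOR of coordinate-times-prime, modulo the table size T."""
    h = 0
    for x_i, pi in zip(corner, PRIMES):
        h ^= int(x_i) * pi
    return h % table_size

def encode_level(x, table, resolution):
    """Eqs. (4)-(5): d-linearly interpolate the 2^d hashed corner features of one level."""
    d = x.shape[0]
    T = table.shape[0]
    x_l = x * resolution                    # scale the input to this level's grid
    lo = np.floor(x_l).astype(np.int64)     # lower diagonal vertex of the unit hypercube
    frac = x_l - lo                         # relative position of x inside the cube
    feat = np.zeros(table.shape[1])
    for offset in itertools.product((0, 1), repeat=d):
        corner = lo + np.array(offset)
        # d-linear weight, Eq. (5): product over dims of (1 - |x_l - corner|_j)
        w = np.prod(np.where(np.array(offset) == 1, frac, 1.0 - frac))
        feat += w * table[spatial_hash(corner, T)]
    return feat

def hash_encoding(x, tables, n_min=16, n_max=512):
    """Eq. (6): concatenate the interpolated features of all L levels."""
    L = len(tables)
    b = np.exp((np.log(n_max) - np.log(n_min)) / (L - 1))       # Eq. (1)
    levels = [int(np.floor(n_min * b ** l)) for l in range(L)]  # Eq. (2)
    return np.concatenate([encode_level(x, tables[l], levels[l]) for l in range(L)])

# toy usage: L = 4 levels, table size T = 2**14, feature dim F = 2, 3D input
tables = [np.random.uniform(0.0, 1e-4, size=(2 ** 14, 2)) for _ in range(4)]
y = hash_encoding(np.array([0.3, 0.7, 0.1]), tables)  # y has length L * F = 8
```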
We speculate that this fluctuation makes the joint optimization of the pose refinement and the scene reconstruction difficult. In Appendix A.2, we present more details of the camera pose refinement with the gradient-based optimization.
3.3. Non-linear Interpolation for Smooth Gradient
To mitigate the gradient fluctuation, we propose to use a smooth gradient for the interpolation weight w_{i,l} \in [0, 1] while maintaining the forward pass, inspired by the straight-through estimator (Bengio et al., 2013). For the smooth gradient, we use the activation function \delta(w_{i,l}) whose derivative is zero at the corners of the hypercube, with w_{i,l} \in [0, 1]:
\delta(w_{i,l}) = \frac{1 - \cos(\pi w_{i,l})}{2}, (11)
where the activation value \delta(w_{i,l}) is ranged in [0, 1]. As a result, the gradient of \delta(w_{i,l}) with respect to x is derived as follows:
\nabla_x \delta(w_{i,l}) = \frac{\pi}{2} \sin(\pi w_{i,l}) \cdot \nabla_x w_{i,l}. (12)
Recall that \nabla_x w_{i,l} is not continuous and is flipped across the boundary of a hypercube. In Eq. (12), the weighting by the sine function effectively makes the gradient smooth and continuous (ref. Figure 2b). Moreover, the gradient of x near the boundary is relatively shrunk compared to the middle of the grids, which may prevent frequent back-and-forth across the boundary after camera pose updates. However, we do not directly use this in the interpolation forward pass. The cosine function in Eq. (11) unintentionally scatters the sampled points in a line toward the edges of the grids. This phenomenon, which we refer to as the "zigzag problem," can be addressed by the straight-through estimator (Bengio et al., 2013). It maintains the results of the linear interpolation in the forward pass by the cancel-out of the last two terms in Eq. (13), and partially uses the activation value \delta(w_{i,l}) in the backward pass as follows:
\hat{w}_{i,l} = w_{i,l} + \lambda \delta(w_{i,l}) - \lambda \tilde{\delta}(w_{i,l}), (13)
where \lambda is a hyperparameter that adjusts the smooth gradient and the zigzag problem, and \tilde{\delta} denotes the variable detached from the computational graph. The steps involved in the straight-through estimator are illustrated in Figure 1. For an additional discussion, we present an illustration of the zigzag problem in Appendix A.3 (see Figure 5). Although this straight-through estimator does not perfectly make the gradient smooth and continuous with the addition in Eq. (13), it is empirically more effective than other mixing variants (see Appendix A.3 and Table 5).
3.4. Curriculum Scheduling
As argued by Tancik et al. (2020); Landgraf et al. (2022), NeRFs exhibit a hierarchical structure, i.e., the coordinate-based MLPs can suffer from spectral bias issues, in which different frequencies converge at different rates.
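To make the training-side recipe concrete, here is a minimal PyTorch-style sketch of the straight-through smooth weighting of Eqs. (11)–(13), together with the level-wise learning-rate weight r_l(t) of Eq. (15) from the curriculum scheduling of Sec. 3.4. This is an illustration written from the equations, not the authors' released code; `lam` plays the role of the hyperparameter λ in Eq. (13).

```python
import math
import torch

def smooth_weight(w, lam=1.0):
    """Straight-through d-linear weight of Eq. (13).

    The forward value equals the plain d-linear weight w (the two extra terms
    cancel), while the backward pass additionally receives the gradient of the
    cosine activation delta(w) of Eq. (11), which (cf. Eq. (12)) vanishes at
    the cell boundaries w = 0 and w = 1.
    """
    delta = 0.5 * (1.0 - torch.cos(math.pi * w))   # Eq. (11)
    return w + lam * delta - lam * delta.detach()  # Eq. (13)

def lr_weight(level, t, t_start, t_end, num_levels):
    """Curriculum learning-rate weight r_l(t) of Eq. (15) for hash-table level `level`."""
    alpha = num_levels * (t - t_start) / (t_end - t_start)  # schedule variable of Sec. 3.4
    if alpha < level:
        return 0.0
    if alpha - level < 1.0:
        return 0.5 * (1.0 - math.cos((alpha - level) * math.pi))
    return 1.0
```

Because the two extra terms cancel in the forward pass, the interpolated features (and hence the rendered colors) are unchanged; only the backward path through the weights is modified.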
Since the input coordinate x is d transformation of camera pose \u21e5R|t\u21e4 and P 30 32 Number of Layers 2 4 6 8 MLP-128 MLP-64 NGP-64 0 1 2 3 4 -1 -0.86 -0.72 -0.58 -0.44 -0.3 -0.16 -0.02 0.12 0.26 0.4 0.54 0.68 0.82 0.96 1.1 1.24 1.38 1.52 1.66 1.8 1.94 2.08 2.22 2.36 2.5 2.64 2.78 2.92 3.06 3.2 3.34 3.48 3.62 3.76 3.9 4.04 4.18 4.32 4.46 4.6 4.74 4.88 Hash encoding Smooth gradient h(x) (a) (b) \u2202h(x)/\u2202x x x 1 Figure 2. Illustration on the smooth gradient induced by Eq. (12). We visualize the 1D case of the multi-resolution hash encoding hl(x) and its derivative \u2202hl(x)/\u2202x in (a) and (b), respectively. For further discussion, please refer to the text and Appendix A.1. different frequencies converge at different rates. Lin et al. (2021) further address this issue in the pose re\ufb01nement. The research showed that the Jacobian of the kth positional encoding ampli\ufb01es pose noise, making the na\u00a8 \u0131ve application of positional encoding inappropriate for pose re\ufb01nement. We observe that the multi-resolution hash encoding, which leverages the multi-level decomposition of scenes, exhibits a similar problem. To resolve the problem, we propose a curriculum scheduling strategy to regulate the convergence rate of the level-wise encoding. We weight the learning rates \u03b7l of the lth multi-resolution hash encoding hl by \u02dc \u03b7l = rl(t) \u00b7 \u03b7l, (14) where the weight of learning rate rl(t) is de\ufb01ned as rl(t) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 0 \u03b1(t) < l 1\u2212cos((\u03b1(t)\u2212l)\u03c0) 2 0 \u2264\u03b1(t) \u2212l < 1 1 otherwise, (15) and \u03b1(t) = L \u00b7 t\u2212ts te\u2212ts \u2208[0, L] is proportional to the number of iterations t in the scheduling interval [ts, te]. This weighting function is similar to the coarse-to-\ufb01ne method proposed by Park et al. (2021) and Lin et al. (2021). However, in contrast to these previous works, we apply this weighting to the learning rate of the level-wise hash table Hl. This allows the decoding network receives the encodings from all levels, while high-level encodings are more slowly updated than the coarse levels. We empirically found this multi-level learning rate scheduling effective in multi-resolution hash encoding. 4. Experiment In this section, we validate our proposed method using the multi-resolution hash encoding (M\u00a8 uller et al., 2022) with inaccurate or unknown camera poses. \fRobust Camera Pose Re\ufb01nement for Multi-Resolution Hash Encoding 4.1. Implementation Details 4.1.1. DATASET We evaluate the proposed method against the two previous works, BARF (Lin et al., 2021) and GARF (Chng et al., 2022). Since the implementation of GARF is unavailable, we re-implement GARF. Our re-implemented GARF has the same structure as BARF except for sinusoidal encoding and Gaussian activation. Following Lin et al. (2021) and Chng et al. (2022), we evaluate and compare our method on two public novel-view-synthesis datasets. NeRF-Synthetic. NeRF-Synthetic (Mildenhall et al., 2020) has 8 synthetic object-centric scenes, which consist of 100 rendered images with ground-truth camera poses (intrinsic and extrinsic) for each scene. Following Lin et al. (2021), we utilize this dataset for the noisy camera pose scenario. To simulate the scenario of imperfect camera poses, we adopt the approach in Lin et al. (2021) synthetically perturbing the camera poses with additive Gaussian noise, \u03b4\u03c8 \u223cN(0, 0.15I). LLFF. 
LLFF (Mildenhall et al., 2019) has 8 forwardfacing scenes captured by a hand-held camera, including RGB images and camera poses that have been estimated using the off-the-shelf algorithm (Sch\u00a8 onberger & Frahm, 2016). Following previous works, we utilize this dataset for the unknown camera pose scenario. Unlike the synthetic datasets, we initialize all camera poses with the identity matrix. Note that, the camera poses provided by LLFF are the estimations obtained using the COLMAP algorithm (Sch\u00a8 onberger & Frahm, 2016). As such, the pose error measured in our quantitative results only indicates the agreement between the learned pose and the estimated pose using the classical geometry-based approach. 4.1.2. IMPLEMENTATION DETAILS For the multi-resolution hash encoding, we follow the approach of Instant-NGP (M\u00a8 uller et al., 2022), which uses a table size of T = 219 and a dimensionality of F = 2 for each level feature. Each feature table is initialized with a uniform distribution U[0, 1e \u22124]. Note that we reproduce the entire training pipeline in PyTorch for pose re\ufb01nement instead of using the original C++ & CUDA implementation of Instant-NGP for fair comparison 1. The decoding network consists of 4-layer MLPs with ReLU (Glorot et al., 2011) activation and 256 hidden dimensions, including density network branch and color. We utilize the tiny-cuda-nn (tcnn) (M\u00a8 uller et al., 2021) frame1While our re-implementation performs almost the same with the original, it takes slightly longer training time due to PyTorch\u2019s execution latency. The performance of our re-implemented InstantNGP is reported in Appendix B.1. work for the decoding network. We present the other implementation details in Appendix B.1. While we set \u03bb = 1 by default for the straight-through estimator, the other options are explored in Appendix A.3. 4.1.3. EVALUATION CRITERIA In conformity with previous studies (Lin et al., 2021; Chng et al., 2022), we evaluate the performance of our experiments in two ways: 1) the quality of view-synthesis for the 3D scene representation and 2) the accuracy of camera pose registration. We measure the PSNR, SSIM, and LPIPS scores for view-synthesis quality, as employed in the original NeRF (Mildenhall et al., 2020). The rotation and translation errors are de\ufb01ned as follows: E(R) = cos\u22121 \u0010tr(R\u2032 \u00b7 RT) \u22121 2 \u0011 , (16) E(t) = |t\u2032 \u2212t|2 2, (17) where \u0002R\u2032|t\u2032\u0003 \u2208SE(3) denotes the ground-truth camera-toworld transformation matrix and tr(\u00b7) denotes trace operator. Like the Lin et al. (2021), all the metrics are measured after the pre-alignment stage using the Procrustes analysis. In experiments, all the camera poses \u03c8 are parameterized by the se(3) Lie algebra with known intrinsics. 4.2. Quantitative Results 4.2.1. SYNTHETIC OBJECTS IN NERF-SYNTHETIC Table 1 demonstrates the quantitative results of the NeRFSynthetic. In Table 1, the proposed method achieves stateof-the-art performances in both pose registration and reconstruction \ufb01delity across all scenes. The results align with M\u00a8 uller et al. (2022) showing impressive performance on the scenes with high geometric details. On the other hand, M\u00a8 uller et al. (2022) previously demonstrated that multi-resolution hash encoding is limited to the scenes with complex and view-dependent re\ufb02ections, i.e., Materials. 
Although they attributed this limitation to their shallow decoding networks, we observed similar performance when utilizing deeper decoding networks. We hypothesize that frequency-based encodings, such as sinusoidal or spherical harmonics, might be more appropriate for addressing complex and view-dependent re\ufb02ections. We will further investigate this issue in future work. 4.2.2. REAL-WORLD SCENES IN LLFF We report the quantitative results of the LLFF dataset in Table 2. Note that GARF utilizes 6-layer decoding networks for this dataset. In Table 2, the proposed method outperforms the previous methods regarding reconstruction \ufb01delity and pose recovery, especially for translation. These results suggest that the learned pose from our method is closely \fRobust Camera Pose Re\ufb01nement for Multi-Resolution Hash Encoding Table 1. Quantitative results of the NeRF-Synthetic dataset. Scene Camera Pose Registration View Synthesis Quality Rotation (\u25e6) \u2193 Translation \u2193 PSNR \u2191 SSIM \u2191 LPIPS \u2193 GARF BARF Ours GARF BARF Ours GARF BARF Ours GARF BARF Ours GARF BARF Ours Chair 0.113 0.096 0.085 0.549 0.428 0.365 31.32 31.16 31.95 0.959 0.954 0.962 0.042 0.044 0.036 Drum 0.052 0.043 0.041 0.232 0.225 0.214 24.15 23.91 24.16 0.909 0.900 0.912 0.097 0.099 0.087 Ficus 0.081 0.085 0.079 0.461 0.474 0.479 26.29 26.26 28.31 0.935 0.934 0.943 0.057 0.058 0.051 Hotdog 0.235 0.248 0.229 1.123 1.308 1.123 34.69 34.54 35.41 0.972 0.970 0.981 0.029 0.032 0.027 Lego 0.101 0.082 0.071 0.299 0.291 0.272 29.29 28.33 31.65 0.925 0.927 0.973 0.051 0.050 0.036 Materials 0.842 0.844 0.852 2.688 2.692 2.743 27.91 27.84 27.14 0.941 0.936 0.911 0.059 0.058 0.062 Mic 0.070 0.071 0.068 0.293 0.301 0.287 31.39 31.18 32.33 0.971 0.969 0.975 0.047 0.048 0.043 Ship 0.073 0.075 0.079 0.310 0.326 0.287 27.64 27.50 27.92 0.862 0.849 0.879 0.119 0.132 0.110 Mean 0.195 0.193 0.189 0.744 0.756 0.722 28.96 28.84 29.86 0.935 0.930 0.943 0.063 0.065 0.056 Table 2. Quantitative results of the LLFF dataset. Scene Camera Pose Registration View Synthesis Quality Rotation (\u25e6) \u2193 Translation \u2193 PSNR \u2191 SSIM \u2191 LPIPS \u2193 GARF BARF Ours GARF BARF Ours GARF BARF Ours GARF BARF Ours GARF BARF Ours Fern 0.470 0.191 0.110 0.250 0.102 0.102 24.51 23.79 24.62 0.740 0.710 0.743 0.290 0.311 0.285 Flower 0.460 0.251 0.301 0.220 0.224 0.211 26.40 23.37 25.19 0.790 0.698 0.744 0.110 0.211 0.128 Fortress 0.030 0.479 0.211 0.270 0.364 0.241 29.09 29.08 30.14 0.820 0.823 0.901 0.150 0.132 0.098 Horns 0.030 0.304 0.049 0.210 0.222 0.209 22.54 22.78 22.97 0.690 0.727 0.736 0.330 0.298 0.290 Leaves 0.130 1.272 0.840 0.230 0.249 0.228 19.72 18.78 19.45 0.610 0.537 0.607 0.270 0.353 0.269 Orchids 0.430 0.627 0.399 0.410 0.404 0.386 19.37 19.45 20.02 0.570 0.574 0.610 0.260 0.291 0.213 Room 0.270 0.320 0.271 0.200 0.270 0.213 31.90 31.95 32.73 0.940 0.949 0.968 0.130 0.099 0.098 T-Rex 0.420 1.138 0.894 0.360 0.720 0.474 22.86 22.55 23.19 0.800 0.767 0.866 0.190 0.206 0.183 Mean 0.280 0.573 0.384 0.269 0.331 0.258 24.55 23.97 24.79 0.745 0.723 0.772 0.216 0.227 0.197 related to that of the classical geometric algorithm, indicating that our proposed method can learn camera poses from scratch using the multi-resolution hash encoding. In terms of rotation angle registration, our method outperforms BARF, achieving comparable performance to GARF. Still, notice that our method achieves the best view-synthesis quality compared to the other methods. 
Also, in Table 4, we investigate the interaction with the COLMAP camera pose initialization and our method. Please refer to Appendix B.2 for the details. Here, the underbar denotes runners-up. 4.3. Ablation Study We present additional ablation studies to examine the proposed method\u2019s effectiveness. Similar to the InstantNGP (M\u00a8 uller et al., 2022), all the following experiments are conducted on the Hotdog in the NeRF-Synthetic dataset for comparison. Note that other scenes behave similarly. 4.3.1. COMPONENT ANALYSIS In Table 3, we perform the ablation study for our method to examine the role of each element. As shown in row (b) compared with (c), the smooth gradient signi\ufb01cantly helps with pose re\ufb01nements, resulting in more accurate pose registration and higher view-synthesis quality. Also, from (a) and (b), we observe that the straight-through estimator prevents unintentional jittering from the non-linear weighting showing outperformance. Lastly, as shown in (a) and (d), our proposed multi-level learning rate scheduling reasonably enhances pose estimation and scene reconstruction qualities. 4.3.2. TIME COMPLEXITY In Figure 3, we visualize the comparison of the training speed between the proposed method and the previous works (Lin et al., 2021; Chng et al., 2022). By utilizing fast convergence of multi-resolution hash encoding, the proposed method achieves more than 20\u00d7 faster training speed compared to the previous works. Remind that the proposed method outperforms previous methods both in pose registration and view synthesis. 4.3.3. DECODER SIZE Here, we examine the design criteria for decoding networks m(y; \u03c6) in terms of model capacity. The original implementation of Instant-NGP (M\u00a8 uller et al., 2022) utilizes shallow decoding networks, resulting in the feature table H having a relatively larger number of learnable parameters than the decoding networks, i.e., |\u03b8| \u226b|\u03c6|. We \ufb01nd that this often leads to the suboptimal convergence of both the multiresolution hash encoding and the camera pose registration. Figure 4 presents the view-synthesis quality with respect \fRobust Camera Pose Re\ufb01nement for Multi-Resolution Hash Encoding Table 3. Ablation study on the components of the proposed method. Experiments are conducted on the Hotdog in the NeRF-Synthetic dataset. Three components are the straight-through estimator in Eq. (13), the smooth gradient with cosine activation in Eq. (11), and the curriculum scheduling in Sec. 3.4. Component Ablation Evaluation Metric w/ Straight-Through w/ Smooth Grad. w/ Curriculum Scheduling Rotation (\u25e6) \u2193 Translation \u2193 PSNR \u2191 (a) \u2713 \u2713 \u2713 0.234 1.124 35.41 (b) \u2713 \u2713 0.245 1.130 35.03 (c) \u2713 0.977 3.210 29.89 (d) \u2713 \u2713 0.447 1.921 32.19 (e) 2.779 6.423 25.41 Table 4. Quantitative results of the proposed method in the LLFF dataset with the COLMAP initialization (PSNR \u2191). Experimental Setting LLFF w/ COLMAP w/ Pose Re\ufb01nement Fern Flower Fortress Horns Leaves Orchids Room T-Res Average (a) \u2713 25.83 26.56 28.00 26.46 18.89 20.15 31.96 26.51 25.55 (b) \u2713 24.62 25.19 30.14 22.97 19.45 20.02 32.73 23.19 24.79 (c) \u2713 \u2713 26.41 28.00 30.99 27.35 19.97 21.26 33.02 26.83 26.73 PSNR (\u2191) 33 34 35 36 Training Time Per Iteration (ms) 0 50 100 150 200 250 300 Figure 1. Illustration about the proposed method. w h x h(x) VATIVE OF MULTI RESOLUTION HASH ODING xt, we seek to derive the derivative of the multiash encoding. 
Let ci,l(x) denote the corner i resolution grid in which xl is located, and let nt the index function for the l-level, as de\ufb01ned ote that hl(\u00b7) is not differentiable. er a function hl : Rd ! RF , whose output is a ed feature vector with 2d corners, as given by, hl(x) = 2d X i=1 wi,l \u00b7 Hl \" hl \" ci,l(x) ## , (4) enotes the d-linear weight, which is de\ufb01ned by volume in a unit hypercube with the relative : wi,l = d Y j=1 (1 \u2212|xl \u2212ci,l(x)|j) (5) dex j indicates the j-th element in the vector. \ufb01ne the multi-resolution hash encoding vector f(x; \u2713) = \u21e5 h1(x); . . . ; hL(x) \u21e4 2 RF 0 (6) mension of the output vector is F 0 = L \u21e5F catenation. n rxhl(x) 2 RF \u21e5d of the lth interpolated feal(x) with respect to the x can be derived using e as follows: hl(x) = \uf8ff@hl(x) @x1 , . . . , @hl(x) @xd ( = 2d X i=1 Hl \" hl \" ci,l(x) ## \u00b7 rxwi,l, (7) (cl(x))) is not differentiable with respect to x element of rxwi,l 2 R1\u21e5d is de\ufb01ned by, l(x) xk = sk \u00b7 Y j6=k (1 \u2212|xl \u2212ci,l(x)|j) , (8) notes sk = sign \" |ci,l(x) \u2212xl|k # . q. (7), the Jacobian rxhl(x) is a weighted sum able entries corresponding to the nearby corners illustrated in Fig. 1 of Section 1. 3.2. Camera Pose Re\ufb01nement Camera pose can be represented as a transformation matrix from the camera coordinate to the world coordinate. Let us denote the camera-to-world transformation matrix as \u21e5 R|t \u21e4 2 SE(3), where R 2 SO(3) and t 2 R3\u21e51 are rotation matrix and translation vector respectively. 3.2.1. POSE REFINEMENT WITH THE SINUSOIDAL ENCODING The pose re\ufb01nement using error back-propagation in neural rendering is jointly optimizing the 6 DoF pose parameters and neural scene representation through the differentiable volume rendering: \u03c6\u21e4, \u21e4= arg min \u03c6, L(I, \u02c6 I; \u03c6, ), (9) where \u03c6 and denotes model parameters and trainable camera parameters, \u02c6 I and I denotes reconstructed color and its ground-truth color respectively. Note that, to our knowledge, all previous works (Yen-Chen et al., 2021; Wang et al., 2021c; Xia et al., 2022; Jeong et al., 2021; Lin et al., 2021; Chng et al., 2022) of pose re\ufb01nement in neural rendering utilize fully differentiable encoding with respect to the input coordinate (e.g., sinusoidal or identity). However, they have limited performance compared to multiresolution hash encoding (M\u00a8 uller et al., 2022). 3.2.2. POSE REFINEMENT WITH MULTI-RESOLUTION HASH ENCODING Now, we present the optimization problem of pose re\ufb01nement with multi-resolution hash encoding. Based on the Eq. (9), we also directly optimize the camera pose parameters with multi-resolution hash encoding, \u03c6\u21e4, \u2713\u21e4, \u21e4= arg min \u03c6,\u2713, L(I, \u02c6 I; \u03c6, \u2713, ), (10) where \u2713is a trainable parameter for multi-resolution hash encoding, i.e., the entries of the hash tables Hl. However, we observe that the pose re\ufb01nement and reconstruction quality from the above optimization problem is much worse than the previous works (See row (d) of Table 3 in Section 4.4). To clarify the poor performance, we assume that the gradient \ufb02uctuation of Eq. (10) also negatively affects pose re\ufb01nement. Since the input coordinate x is de\ufb01ned as a rigid transformation of camera pose \u21e5R|t\u21e4 and image coordinate \u03b8 (a) (b) w x h \u03b4(w) \u02dc \u03b4(w) \u2212 + copy 0 Backward pass Forward pass Differentiable Partially-diff. 
Indifferentiable Ours BARF \u2a0920 Faster GARF PSNR (\u2191) 30 32 34 36 38 Number of Layers 2 4 6 8 MLP-256 MLP-128 MLP-64 NGP-64 1 Figure 3. Comparison of the averaged training time per iteration on the Hotdog in the NeRF-Synthetic dataset. Our method takes only 10.8 ms, signi\ufb01cantly faster than the previous works, GARF and BARF, which are 213 ms and 252 ms, respectively. to varying model sizes of the decoding network. Unlike the \ufb01ndings of M\u00a8 uller et al. (2022), who did not observe improvement with deeper decoder MLPs (as shown by the dashed line in the Figure), we observe that the decoder size heavily impacts both the view synthesis and the pose registration. Therefore, in cases where the camera pose is inaccurate, we assume a suf\ufb01cient number of parameters in the decoder is necessary. Informed by this analysis, we employ deeper and wider decoding networks than the original Instant-NGP: 4-layer MLPs with 256 neurons. Note that competitive methods (Lin et al., 2021; Chng et al., 2022) utilize a deeper decoder network with 8-layer MLPs with 256 neurons having more parameters. 4.4. Qualitative Results We present the qualitative results of our method compared with competitive methods. Please refer to Appendix B.3. PSNR (\u2191) 33 34 35 36 Training Time Per Iteration (ms) 0 5 10 15 20 25 30 Figure 1. Illustration about the proposed method. w x h(x) 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 lth interpolated feature vector with 2d corners, as given by, hl(x) = 2d X i=1 wi,l \u00b7 Hl \" hl \" ci,l(x) ## , (4) where wi,l denotes the d-linear weight, which is de\ufb01ned by the opposite volume in a unit hypercube with the relative position of x: wi,l = d Y j=1 (1 \u2212|xl \u2212ci,l(x)|j) (5) where the index j indicates the j-th element in the vector. We can rede\ufb01ne the multi-resolution hash encoding vector y as follows: y = f(x; \u2713) = \u21e5 h1(x); . . . ; hL(x) \u21e4 2 RF 0 (6) where the dimension of the output vector is F 0 = L \u21e5F after the concatenation. The Jacobian rxhl(x) 2 RF \u21e5d of the lth interpolated feature vector hl(x) with respect to the x can be derived using the chain-rule as follows: rxhl(x) = \uf8ff@hl(x) @x1 , . . . , @hl(x) @xd ( = 2d X i=1 Hl \" hl \" ci,l(x) ## \u00b7 rxwi,l, (7) where Hl(gl(cl(x))) is not differentiable with respect to x and the k-th element of rxwi,l 2 R1\u21e5d is de\ufb01ned by, @wi,l(x) @xk = sk \u00b7 Y j6=k (1 \u2212|xl \u2212ci,l(x)|j) , (8) where sk denotes sk = sign \" |ci,l(x) \u2212xl|k # . As seen in Eq. (7), the Jacobian rxhl(x) is a weighted sum of the hash table entries corresponding to the nearby corners 3.2.1. POSE REFINEMENT WITH THE SIN ENCODING The pose re\ufb01nement using error back-prop rendering is jointly optimizing the 6 DoF and neural scene representation through t volume rendering: \u03c6\u21e4, \u21e4= arg min \u03c6, L(I, \u02c6 I; \u03c6, where \u03c6 and denotes model paramete camera parameters, \u02c6 I and I denotes rec and its ground-truth color respectively. Note that, to our knowledge, all previous w et al., 2021; Wang et al., 2021c; Xia et al., 2 2021; Lin et al., 2021; Chng et al., 2022) of in neural rendering utilize fully differentiab respect to the input coordinate (e.g., sinuso However, they have limited performance co resolution hash encoding (M\u00a8 uller et al., 20 3.2.2. 
POSE REFINEMENT WITH MULTIHASH ENCODING Now, we present the optimization problem ment with multi-resolution hash encodin Eq. (9), we also directly optimize the cam ters with multi-resolution hash encoding, \u03c6\u21e4, \u2713\u21e4, \u21e4= arg min \u03c6,\u2713, L(I, \u02c6 I; \u03c6 where \u2713is a trainable parameter for multi encoding, i.e., the entries of the hash tables observe that the pose re\ufb01nement and recon from the above optimization problem is m the previous works (See row (d) of Table 3 To clarify the poor performance, we assume \ufb02uctuation of Eq. (10) also negatively aff ment. Since the input coordinate x is d transformation of camera pose \u21e5R|t\u21e4 and i \u03b8 w x a(w) \u02dc a(w) \u2212 + copy 0 Differentiable Partially-diff. Indifferentiable Ours BARF \u2a0920 Faster GARF PSNR (\u2191) 30 32 34 36 38 Number of Layers 2 4 6 8 NN-256 NN-128 NN-64 NGP-64 1 Figure 4. Performance depends on the decoder size. We plot as the depth of the decoder increases, varying the hidden size from 64 to 256. The dashed line denotes the NGP\u2019s with the hidden size of 64 using the ground-truth camera poses as the upper bound. 5." + } + ], + "Jiyoung Lee": [ + { + "url": "http://arxiv.org/abs/2402.13605v5", + "title": "KorNAT: LLM Alignment Benchmark for Korean Social Values and Common Knowledge", + "abstract": "For Large Language Models (LLMs) to be effectively deployed in a specific\ncountry, they must possess an understanding of the nation's culture and basic\nknowledge. To this end, we introduce National Alignment, which measures an\nalignment between an LLM and a targeted country from two aspects: social value\nalignment and common knowledge alignment. Social value alignment evaluates how\nwell the model understands nation-specific social values, while common\nknowledge alignment examines how well the model captures basic knowledge\nrelated to the nation. We constructed KorNAT, the first benchmark that measures\nnational alignment with South Korea. For the social value dataset, we obtained\nground truth labels from a large-scale survey involving 6,174 unique Korean\nparticipants. For the common knowledge dataset, we constructed samples based on\nKorean textbooks and GED reference materials. KorNAT contains 4K and 6K\nmultiple-choice questions for social value and common knowledge, respectively.\nOur dataset creation process is meticulously designed and based on statistical\nsampling theory and was refined through multiple rounds of human review. The\nexperiment results of seven LLMs reveal that only a few models met our\nreference score, indicating a potential for further enhancement. KorNAT has\nreceived government approval after passing an assessment conducted by a\ngovernment-affiliated organization dedicated to evaluating dataset quality.\nSamples and detailed evaluation protocols of our dataset can be found in\nhttps://huggingface.co/datasets/jiyounglee0523/KorNAT .", + "authors": "Jiyoung Lee, Minwoo Kim, Seungho Kim, Junghwan Kim, Seunghyun Won, Hwaran Lee, Edward Choi", + "published": "2024-02-21", + "updated": "2024-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Large Language Models (LLMs) (Brown et al., 2020; Ouyang et al., 2022; OpenAI et al., 2023) * Equal Contribution. Social Value Dataset (4K) Q. Describe the poem \u2018When the Day Comes\u2019 by Shim Hoon. 
(1) This poem embodies an optimistic and future-looking nature.\u2028 (2) This poem exhibits a determined and passionate nature.\u2028 (3) This poem reflects both longing for utopia and disillusionment.\u2028 (4) I am not sure what \u2018When the Day Comes\u2019 by Shim Hoon is. Common Knowledge Dataset (6K) Q. It has been revealed that only 19% of users of personal mobility\u2028 devices such as kick scooters wear helmets. With an increasing\u2028 number of users, the annual fatality rate is also on the rise. Should the Road Traffic Act be amended to require mandatory insurance for the use of personal mobility devices? (1) Strongly disagree (2) Disagree (3) Neutral (4) Agree (5) Strongly agree 0.028 0.122 0.068 0.541 0.241 Figure 1: Translated examples from each alignment dataset. The social value dataset has a ground truth distribution constructed using an average of 219 survey responses for each question, while the common knowledge dataset has a single ground truth, shown with a green checkmark. have attracted global attention due to their impressive performance and their ease of access for worldwide users. Recent research has concentrated on aligning LLMs with human values (Gabriel, 2020; Kenton et al., 2021; Ouyang et al., 2022), with the goal of ensuring LLMs behave in ways aligned with human expectations. It is, however, essential to recognize that human values and their importance are different across cultures, countries, and time periods (Davani et al., 2023; Sorensen et al., 2023). Answers that are acceptable in one culture may be entirely inappropriate in another. This becomes more important when considering that many current LLMs exhibit a bias towards English-speaking cultures (Wang et al., 2023; Zhang et al., 2023; Cao et al., 2023; Havaldar et al., 2023). Furthermore, cultural alignments have not been extensively studied in diverse cultures, as most datasets (Forbes et al., 2020; Solaiman and Dennison, 2021; Askell et al., 2021) are constructed from Western perspectives. arXiv:2402.13605v5 [cs.CL] 23 May 2024 \fTo this end, we introduce National Alignment, which measures how much an LM is aligned with a targeted country from two dimensions: social values and common knowledge. Social values refer to the collective viewpoints of a nation\u2019s citizens on critical issues to their society. Common knowledge refers to common knowledge broadly recognized and understood by the populace, often considered as basic knowledge. While certain fields of knowledge, such as mathematics and science, have universal relevance, subjects like history and literature display strong national-specific characteristics. In summary, a nationally well-aligned model should (1) reflect the general opinions of the nation, further referred to as social value alignment, and (2) integrate nation-specific common knowledge, further referred to as common knowledge alignment. In this paper, we constructed KorNAT (Korean National Alignment Test), the first benchmark that measures national alignment with South Korea. Samples are in a multiple choice question format, offering five answer choices for social values and four answer choices for common knowledge, as shown in Figure 1. For the social value dataset, we created questions based on trending topics in Korea and obtained the ground truth label distribution by surveying people, receiving an average of 219 responses per question. The survey engaged a total of 6,174 unique Korean participants to accurately capture the general opinions of Korea. 
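To make this label format concrete, here is a minimal sketch of turning raw survey responses into the per-question ground-truth distribution illustrated in Figure 1; the option ordering, the names SocialValueItem and build_item, and the example counts are illustrative assumptions, not the released data schema.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

OPTIONS = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

@dataclass
class SocialValueItem:
    question: str                    # survey question shown to participants
    distribution: Dict[str, float]   # ground-truth label distribution over the 5 options

def build_item(question: str, responses: List[str]) -> SocialValueItem:
    """Normalize raw survey responses (one option string per participant)
    into a probability distribution over the five agreement options."""
    counts = Counter(responses)
    total = sum(counts[o] for o in OPTIONS)
    dist = {o: counts[o] / total for o in OPTIONS}
    return SocialValueItem(question=question, distribution=dist)

if __name__ == "__main__":
    # Hypothetical example with 219 responses, roughly mirroring the shape in Figure 1.
    fake_responses = (["Strongly Disagree"] * 6 + ["Disagree"] * 27 + ["Neutral"] * 15
                      + ["Agree"] * 118 + ["Strongly Agree"] * 53)
    item = build_item("Should the Road Traffic Act be amended to require mandatory "
                      "insurance for personal mobility devices?", fake_responses)
    print({k: round(v, 3) for k, v in item.distribution.items()})
```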
For the common knowledge dataset, the questions are based on the compulsory education curriculum in Korea. Our dataset curation is meticulously designed based on a survey theory (Scheaffer et al., 2011) and undergoes multiple rounds of human revisions. KorNAT has a total of 10K samples, with 4K in the social value dataset and 6K in the common knowledge dataset. We also introduce metrics to measure national alignment with three variations of social value alignment. Although our dataset is currently centered on Korea as of 2023, the dataset creation framework is generalizable and can be adapted to any other nations and time periods. We tested seven LLMs on KorNAT. For social value alignment, only two of the seven models exceeded our reference score. For common knowledge alignment, only three models surpassed our reference score, with one model, which has been extensively trained on Korean, demonstrating outstanding performance. These findings suggest that most current LLMs are not sufficiently aligned with South Korea, underscoring a room for improvement. Characterized for its conscientious creation process and high quality, KorNAT has passed both qualitative and quantitative assessments by the Telecommunications Technology Association of Korea (TTA), an organization tasked by the Korean government for reviewing the dataset quality, thus being approved by the government. Detailed plans for the dataset release and the evaluation protocols are outlined in Section 6. Our contributions can be summarized as follows: \u2022 To the best of our knowledge, our work is the first to introduce national alignment, an alignment of an LLM with a targeted nation from social values and common knowledge perspectives. We also introduce metrics to measure national alignment, with three variations of social value alignment. \u2022 We constructed KorNAT, consisting of 10K samples, with 4K on social values and 6K on common knowledge. Our dataset curation is carefully designed based on a survey theory and undergoes multiple rounds of human revisions. \u2022 KorNAT passed a thorough evaluation against both qualitative and quantitative standards by TTA, a government-affiliated organization tasked with assessing dataset quality, thus earning government approval. We plan to launch a public leaderboard in June 2024 for benchmarking on our dataset. 2 Related Works Social Value Dataset. Existing several datasets (Hendrycks et al., 2020; Forbes et al., 2020; Solaiman and Dennison, 2021) assess LMs\u2019 basic ethics or their alignment to global values (e.g., opposition to human inequalities). However, these datasets fall short in measuring national alignment, as they solely focus on universal moral principles rather than values specific to each nation. Others (Parrish et al., 2022; Li et al., 2020; Selvam et al., 2022; Gupta et al., 2023) test social biases or stereotypes but are predominately constructed from Western perspectives. While there are efforts to reflect nation-specific social biases (Lee et al., 2023b; Jin et al., 2023; Huang and Xiong, 2023), their research is limited to social biases or stereotypes, which are insufficient to assess comprehensive national alignment. 
Several works (Wang et al., 2023; Durmus et al., 2023; Santy et al., 2023) have focused on measuring the extent to which LLMs incorporate \fKorean Textbooks GED Reference Books Question Generation Timely Keywords Sources: News articles from the most recent 12 months Social Conflict Keywords Sources: Social Conflict Reports, KoSBi Subjects: Korean, Social Studies, Korean History, Common Sense, Mathematics, Science, English Gather Sources Topic Selection Social Value Dataset (4K) Question News Article Korean Population Stratification Sampling Generation Guidelines Revision Response Adjustment Strongly Disagree Disagree Neutral Agree Strongly Agree Choose one of the following: Survey Common Knowledge Dataset (6K) Question Question Keyword Reference Book Workers Question Generation Over 6000 participants Quality Control Revision 1 Revision 2 Guidelines 60s+ Male 20s Male 20s Female 60s+ Female Response Response Response Response Survey Participants Question Responses Figure 2: Overview of KorNAT curation process. opinions from diverse countries, which diverges from our work which focuses on alignment with one specific country. Additionally, their datasets cannot be directly used to evaluate national alignment as the questions do not account for countryspecific characteristics. For example, Durmus et al. (2023) utilized general questions from global surveys, and Santy et al. (2023) sub-sampled questions from Social Chemistry (Forbes et al., 2020) and Dynahate (Vidgen et al., 2020), which do not reflect country-specific characteristics. Questions from Wang et al. (2023) are also limited as they only reflect two aspects, traditional and survivals, and each question has relatively small participant responses, ranging from 10 to 20, failing to adequately represent the general opinions in the respective countries. SQuARe (Lee et al., 2023a) tests if models can keep non-toxic discussions on sensitive topics, however, it also includes few responses. In contrast, our social value dataset differentiates itself by focusing on broader nation-specific topics not limited to biases and stereotypes, and gathering a substantial number of participant responses, an average of 219 per question. The dataset creation process of Santurkar et al. (2023) is similar to ours. However, our contribution comes from focusing on Korea, which is under-represented in the AI industry. Furthermore, our work is distinguished in three additional aspects. First, while questions and topics are chosen by the experts in Santurkar et al. (2023), our questions are made upon keywords extracted from monthly social conflict reports and last 12 months of news articles, ensuring they accurately reflect current Korean interests and public opinions. Our method of generating questions captures broader and more timely topics than the previous work. Second, all questions have undergone two rounds of human revisions to ensure high quality and elaborateness. Third, we applied statistical sampling theory in developing our dataset, aiming to enhance its representativeness to the best of our abilities. Therefore, we provide more accurate reflections of the general population\u2019s views. Common Knowledge Dataset. Several datasets test necessary reasoning for everyday situations (Huang et al., 2019; Zellers et al., 2019; Bisk et al., 2020). Earlier knowledge datasets (Lai et al., 2017; Clark et al., 2018) were designed to measure basic knowledge at middle or high school levels. 
Recent knowledge datasets include more complex questions involving multi-hop reasoning (Khot et al., 2020), open-book question answering (Mihaylov et al., 2018), and a wide range of topics covering 57 subjects (Hendrycks et al., 2021). Lin et al. (2022) designed a dataset to test if a model can identify highly likely imitative falsehoods. Existing datasets overlook the fact that common knowledge can vary by country, as exemplified in each country\u2019s college entrance exams. Our knowledge dataset is centered on this idea, aiming to develop a country-specific common knowledge dataset based on the compulsory education curriculum. This approach ensures that the dataset aligns with the education standards and basic knowledge of the targeted country. Our common knowledge dataset has seven subjects, selected from the Korean GED curriculum, and thus can serve as a benchmark for Korean common knowledge benchmark. \f3 Dataset Construction This section provides a detailed explanation of KorNAT construction. A visual overview of the dataset creation process is shown in Figure 2. Samples from the dataset can be found in Appendix B.11 and C.5. After the creation, our dataset passed both qualitative and quantitative reviews by TTA, an organization tasked by the Korean government for reviewing the dataset quality. 3.1 Social Value Dataset The construction of the social value dataset follows four sequential steps: (1) selecting topics, (2) generating questions, (3) conducting a survey, and (4) adjusting responses. 3.1.1 Topic Selection We extracted two types of keywords: social conflict keywords and timely keywords. Social conflict keywords are those related to Korean social conflicts such as conflicts in gender, age, or wealth gap. For these keywords, we referred to monthly social conflict reports published by Hankook Research1 and Korean social demographics in KoSBi (Lee et al., 2023b). Timely keywords are those that represent significant concerns in Korea, such as new policies or emerging social phenomena. We extracted these keywords from news articles. We used monthly lists of 200 high-frequency keywords from each of the social, political, and economic news articles published by 54 Korean press companies provided by the Open Government Data portal2. We compiled keywords spanning the period between 2022/08/01 and 2023/07/31, encapsulating the most recent twelve months at the time of dataset construction, and eliminated the duplicates. In the end, we have 1,644 unique keywords, with 125 social conflict keywords and 1,519 timely keywords. The examples of the keywords are in Appendix B.1. 3.1.2 Question Generation We utilized GPT-3.5-Turbo to generate questions using the extracted keywords. To ensure the questions reflect the current issues in Korea, we crawled an average of eight news articles per keyword from Naver News platform3, a portal site hosting a collection of Korean news articles. The collected articles are also published within the last twelve 1https://www.hrc.co.kr/ 2https://www.data.go.kr/index.do 3https://news.naver.com/ months at the time of dataset creation. For each keyword, we provided the model with the keyword, one of the crawled news articles, and question generation guidelines. This process was repeated for all collected news articles for every keyword. The guidelines include that generated questions should not be lengthy, reflect timely social values in Korea, and be relevant to the provided news article. 
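The generation loop described above can be sketched as follows; the prompt wording is only a paraphrase of the stated guidelines, and call_llm is a hypothetical stand-in for the GPT-3.5-Turbo call, since the exact prompts and API client are not given here.

```python
from typing import Callable, Dict, List

GUIDELINES = (
    "Write ONE short survey question that reflects timely social values in Korea "
    "and is directly relevant to the provided news article. "
    "The question must be answerable on a 5-point agreement scale."
)

def build_prompt(keyword: str, article: str) -> str:
    """Assemble a generation prompt from a keyword, one crawled article, and the guidelines."""
    return (
        f"Keyword: {keyword}\n\n"
        f"News article:\n{article}\n\n"
        f"Guidelines: {GUIDELINES}\n\n"
        "Question:"
    )

def generate_candidate_questions(
    keywords_to_articles: Dict[str, List[str]],
    call_llm: Callable[[str], str],  # placeholder for a GPT-3.5-Turbo-style chat call
) -> List[Dict[str, str]]:
    """For every (keyword, article) pair, ask the LLM for one candidate question.
    Candidates are later revised by human workers, which is not modeled here."""
    candidates = []
    for keyword, articles in keywords_to_articles.items():
        for article in articles:  # the process is repeated for every crawled article per keyword
            question = call_llm(build_prompt(keyword, article)).strip()
            candidates.append({"keyword": keyword, "article": article, "question": question})
    return candidates

if __name__ == "__main__":
    # Toy run with a stub LLM so the sketch is executable without API access.
    stub_llm = lambda prompt: "Do you agree that ...?"
    out = generate_candidate_questions({"minimum wage": ["<article text>"]}, stub_llm)
    print(out[0]["question"])
```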
Questions generated by the model have been refined through two rounds of human review. In the first round, we employed 34 workers, all college graduates or above, to ensure understanding of Korean social values, for editing the model-generated questions. Workers were provided with a generated question, its associated keyword, and the news articles used for its generation. They were instructed to revise questions to make them timely, reflective of current Korean social values, and suitable for surveys. In the second round, seven workers, those who were acknowledged for their diligence in the first round, double-checked whether the questions met the revision guidelines and made necessary modifications. More information about generation revision guidelines is outlined in Appendix B. 3.1.3 Survey One challenging yet intriguing aspect of social values is that the \u2018correct\u2019 answer to each question is not definitive, as social values vary by time, regions, and individual perspectives. Consequently, rather than a few AI researchers arbitrarily assigning answers, we approximated the true answers by surveying a large subset of Korean population. We conducted a survey on 6,174 Korean citizens over the age of 19. We first recruited survey participants for each combination of age and gender group, then gathered an average of 22 responses per question from each group. Survey participants were instructed to select one of the five responses: (1) Strongly Disagree, (2) Disagree, (3) Neutral, (4) Agree, and (5) Strongly Agree. To ensure the response quality, we presented distractor questions which appear randomly with a probability of 10%. These questions feature implausible scenarios, where a thoughtful participant would always choose a particular answer. Responses from participants who did not choose the expected answers were entirely discarded. Additionally, we checked a participant\u2019s answer consistency by preparing 100 semantically identical but differently phrased questions. Similar to distractor questions, consistency questions also appear ran\fdomly with a probability of 10%. To check a participant\u2019s consistency, we aggregated the \u2018Strongly Disagree\u2019 with \u2018Disagree\u2019, and \u2018Strongly Agree\u2019 with \u2018Agree\u2019 responses from the consistency questions and found the most selected opinion. If three or more answers did not match with the most selected option, then all of the participant\u2019s answers were discarded. Given that there was no minimum number of responses required, a participant could respond to only one survey question, thereby avoiding any distractor or consistency questions. Thus, we rejected responses from those that did not answer at least one distractor question and three consistency questions. As a result, we collected an average of 219 responses per question to achieve an averaged error bound of 5.5% (min: 5.2%, max: 5.7%) of the true answer distribution of Korea following the survey sampling theory (Scheaffer et al., 2011). Proofs are available in Appendix B.6. Responses per question are approximately evenly distributed across different genders and age groups. More information on survey instruction, survey interface, distractors, consistency checks, and participant demographics is available in Appendix B. 
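A minimal sketch of the participant-level quality filter described above is given below; the data structures and the tie-breaking used when counting the modal opinion are our assumptions.

```python
from collections import Counter
from typing import Dict, List

# Collapse the 5-point scale into Disagree / Neutral / Agree for the consistency check.
AGG = {"Strongly Disagree": "Disagree", "Disagree": "Disagree",
       "Neutral": "Neutral",
       "Agree": "Agree", "Strongly Agree": "Agree"}

def keep_participant(
    distractor_answers: Dict[str, str],   # question id -> chosen option
    expected_distractor: Dict[str, str],  # question id -> the only sensible option
    consistency_answers: List[str],       # answers to the rephrased consistency questions
    max_mismatch: int = 2,                # discard when 3 or more answers deviate
) -> bool:
    """Return True if a participant's responses pass the quality checks of Section 3.1.3."""
    # A participant must have answered at least one distractor and three consistency questions.
    if len(distractor_answers) < 1 or len(consistency_answers) < 3:
        return False
    # Every distractor must be answered with the expected option.
    if any(distractor_answers[q] != expected_distractor[q] for q in distractor_answers):
        return False
    # Aggregate the answers and find the most selected opinion among the consistency questions.
    aggregated = [AGG[a] for a in consistency_answers]
    modal_opinion, _ = Counter(aggregated).most_common(1)[0]
    mismatches = sum(a != modal_opinion for a in aggregated)
    return mismatches <= max_mismatch
```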
3.1.4 Response Adjustment Due to the limitation of using online survey platform, we were unable to recruit participants who accurately reflected the socio-demographic distribution of Korea\u2019s population, including aspects such as gender, age, and area of residence. For example, while individuals aged 60 and above constitute 19.96% of Korea\u2019s population, only 11.47% of our survey respondents belong to this age group. To mitigate this discrepancy, we adjusted the responses by up-weighting those from under-represented groups and down-weighting those from over-represented groups. As previously mentioned, our initial step involved recruiting individuals across various age and gender groups, a strategy known as stratification. Following this, we collected an average of 22 responses per question from these specific age and gender groups, a method known as sampling. Therefore, our adjustments for disparities in age and gender include two distinct processes: Stratification Adjustment and Sampling Adjustment. Stratification Adjustment aims to rectify demographic imbalances between the Korean population and our survey respondents. Meanwhile, Sampling Adjustment is designed to modify the selection probability for participants who are either over-represented or under-represented in specific groups. For Stratification Adjustment, we calculated the weight wst,i for the i-th group (e.g., males in their 20s) by dividing the proportion of that group among the Korean population (PK,i) by the corresponding proportion in the survey population (PS,i), as shown in Eq. 1. wst,i = PK,i PS,i (1) For Sampling Adjustment, the weight wsa,qj,i from the i-th group for the j-th question was calculated by dividing the total number of participants in the i-th group (Ni) by the number of responses to the jth question from the i-th group (Nqj,i), as described in Eq. 2. wsa,qj,i = Ni Nqj,i (2) We also adjusted for education level, area of residence, and annual income, noting significant discrepancies between the survey\u2019s distributions and the actual distributions in Korea. These weights were calculated in a manner similar to the Stratification Adjustment, by comparing the actual proportions with those founded in the survey population. In conclusion, for the j-th question, if a response comes from an individual in the i-th age and gender group, the k-th education level group, the l-th area of residence group, and the m-th annual income group, the responses is weighted as shown in Eq. 3. r = 1\u00b7(wst,i \u00b7wsa,qj,i \u00b7wedu,k \u00b7wres,l \u00b7win,m) (3) Finally, we normalize the weighted responses for each question by dividing them by the total sum. More details on response adjustment and further analysis of social value dataset can be found in Appendix B. 3.2 Common Knowledge Dataset We created the questions and the four answer options based on Korean textbooks and Korean GED reference materials spanning elementary to high school levels, covering seven subjects: Korean, Social Studies, Korean History, Common Sense, Mathematics, Science, and English. These subjects are chosen because they are in the Korean GED curriculum. The samples are divided into two types: simple and complex. Simple samples are those that require only one fact (e.g., \u201cWhat is the era during which the differentiation of classes occurred?\u201d). On the other hand, complex samples are those that \frequire two related facts. 
Examples include \u201cWhat are the artifacts from the era during which the differentiation of classes occurred?\u201d To answer this question, one must know both the era and the artifacts. To avoid any AI-induced errors, we refrained from using language models during the dataset construction. Instead, we recruited 21 human workers, all college graduates or above, to paraphrase questions from the references.4 For complex questions, we applied stricter recruitment criteria, requiring workers to meet at least one of the following: scoring in the top 4% in Korean SAT, having experience in education, or holding a college degree or higher in the relevant subject. We utilized a total of 39 reference materials listed in Appendix C.1 Table 7. The workers were tasked with rephrasing the material from the reference books into a multiple-choice question format. Then, we conducted a quality control with a subset of the workers. The revision guidelines include double-checking the correctness with the referred material, standardizing the length of each answer option to mitigate model bias towards longer answers, and correcting typographical errors. Each question underwent two rounds of revisions, handled by different individuals for each question. More information about the dataset curation and the example samples are in Appendix C. 4 National Alignment Score Social Value Alignment. Assigning a single ground truth label based on the majority vote may ignore valuable information in the responses on other options (Aroyo and Welty, 2013; Cheplygina and Pluim, 2018; Davani et al., 2022). Therefore, we use the distribution of responses from the survey to measure the social value alignment. Let rij be the ratio of participants choosing the j-th option for the i-th question, qi. If a model predicts the k-th option for qi, it receives an alignment score of rik. Thus, the model earns a score between 0 and 1 for each question. The final social alignment score is the average across all questions. We call this metric Social Value Alignment (SVA). Intuitively, a model achieving a score higher than 0.5 would mean that it aligns with the majority of the Korean population. With SVA, however, the maximum achievable score is empirically calcu4Note that for the social values dataset, there were no reference material. Therefore we decided to use GPT-3.5-Turbo to create initial questions, minimizing the chance of human bias being involved. lated as 0.450. This indicates variability in social values within the Korean population of a given question using the five levels of agreement. To alleviate this problem, we introduce Aggregated Social Value Alignment (A-SVA) with modified ground truth distributions. For A-SVA, the ground truth distribution is narrowed down to three options by aggregating \u2018Strongly Disagree\u2019 with \u2018Disagree\u2019, and \u2018Strongly Agree\u2019 with \u2018Agree\u2019. By A-SVA, the maximum achievable score increases to 0.626, suggesting a moderate level of agreement among Korean citizens. As a third metric, we additionally propose Neutral-processed Social Value Alignment (N-SVA), because it can be argued that choosing \u2018Neutral\u2019 is more suitable for questions with no significantly preferred opinions. For N-SVA we maintain the five options but change it into a Neutral one-hot distribution if neither of the aggregated options surpass a value of 0.5. Common Knowledge Alignment. 
Since the common knowledge dataset has one correct answer for each question, we use accuracy to measure the common knowledge alignment score. Considering that the Korean GED cut-off score is 60 points, we also set the accuracy of 0.6 as the standard score and acknowledge models with the above score have sufficient national common knowledge. 5 Experiments 5.1 Experiment Settings In our experiments, a model is prompted with an instruction (e.g., \"Choose an answer from the following choices.\"), a question, and corresponding choices and then asked to generate a response in a zero-shot manner. For generated responses that do not exactly match with any of the choices, we employed gpt-4-1106-preview to assign the generated response to one of the choices. Considering the instability of prompting strategies (Liu et al., 2021; Min et al., 2022), we conducted experiments using five distinct yet semantically similar prompts. We tested seven models which are Llama-2 (70B) (Touvron et al., 2023), GPT-3.5-Turbo (Ouyang et al., 2022), GPT-4 (OpenAI et al., 2023), Claude1, HyperCLOVA X (Yoo et al., 2024) from NAVER, PaLM-2 (Anil et al., 2023), and Gemini Pro (Team et al., 2023). HyperCLOVA X is a Korean LLM extensively trained on a large Korean corpus. Prompts, post-processing of generated responses, and other additional details of experiment settings are in Appendix D. \fNo Adjustment Adjustment w/ Age & Gender Final Adjustment Model SVA A-SVA N-SVA SVA A-SVA N-SVA SVA A-SVA N-SVA Best 0.421 0.613 0.612 0.422 0.614 0.613 0.450 0.626 0.625 All-Neutral 0.196 0.196 0.408 0.194 0.194 0.407 0.190 0.190 0.388 Llama-2 0.253\u00b10.009 0.319\u00b10.017 0.386\u00b10.012 0.252\u00b10.010 0.318\u00b10.017 0.385\u00b10.012 0.252\u00b10.009 0.315\u00b10.015 0.370\u00b10.011 GPT-3.5-Turbo 0.286\u00b10.008 0.435\u00b10.017 0.314\u00b10.004 0.287\u00b10.008 0.435\u00b10.017 0.314\u00b10.004 0.290\u00b10.008 0.435\u00b10.016 0.315\u00b10.003 GPT-4 0.263\u00b10.026 0.449\u00b10.040 0.308\u00b10.025 0.262\u00b10.026 0.448\u00b10.040 0.307\u00b10.025 0.260\u00b10.024 0.448\u00b10.036 0.300\u00b10.023 Claude-1 0.282\u00b10.030 0.407\u00b10.042 0.317\u00b10.044 0.282\u00b10.030 0.406\u00b10.041 0.318\u00b10.044 0.286\u00b10.027 0.407\u00b10.037 0.321\u00b10.039 HyperCLOVA X 0.256\u00b10.005 0.324\u00b10.010 0.431\u00b10.001 0.255\u00b10.005 0.322\u00b10.010 0.431\u00b10.001 0.253\u00b10.005 0.318\u00b10.009 0.414\u00b10.001 PaLM-2 0.330\u00b10.007 0.531\u00b10.004 0.300\u00b10.007 0.330\u00b10.007 0.532\u00b10.004 0.300\u00b10.010 0.331\u00b10.007 0.532\u00b10.004 0.302\u00b10.006 Gemini Pro 0.304\u00b10.006 0.513\u00b10.004 0.317\u00b10.010 0.312\u00b10.007 0.312\u00b10.004 0.318\u00b10.010 0.303\u00b10.006 0.513\u00b10.003 0.312\u00b10.009 Table 1: Average and standard deviation of social value alignments from No Adjustment, Adjustment with Age & Gender, and Final Adjustment utilizing five different prompts. The best scores in each category are highlighted in bold. 5.2 Social Value Alignment 5.2.1 Quantitative Results Table 1 presents social value alignment in three scenarios: \u2018No Adjustment,\u2019 where raw survey results are used without response adjustments; \u2018Adjustment with Age and Gender,\u2019 where responses are adjusted for age and gender; and \u2018Final Adjustment,\u2019 which further adjusts responses for annual income, area of residence, and education levels. 
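For reference, the sketch below spells out how the alignment scores reported here (SVA, A-SVA, N-SVA, and common-knowledge accuracy) can be computed from a ground-truth distribution and a model's chosen option, following the definitions in Section 4; the option indexing and our reading of "neither of the aggregated options surpasses 0.5" as referring to the aggregated Disagree and Agree sides are assumptions.

```python
from typing import List, Sequence

# Option order: 0=Strongly Disagree, 1=Disagree, 2=Neutral, 3=Agree, 4=Strongly Agree
def _aggregate(dist: Sequence[float]) -> List[float]:
    """Collapse a 5-way distribution into [Disagree, Neutral, Agree]."""
    return [dist[0] + dist[1], dist[2], dist[3] + dist[4]]

def sva(gt: List[Sequence[float]], choices: List[int]) -> float:
    """Social Value Alignment: the model earns the ground-truth ratio of its chosen option."""
    return sum(d[c] for d, c in zip(gt, choices)) / len(gt)

def a_sva(gt: List[Sequence[float]], choices: List[int]) -> float:
    """Aggregated SVA: both the ground truth and the model choice are collapsed to 3 options."""
    to_bin = [0, 0, 1, 2, 2]  # map 5-way option index to the Disagree/Neutral/Agree bin
    return sum(_aggregate(d)[to_bin[c]] for d, c in zip(gt, choices)) / len(gt)

def n_sva(gt: List[Sequence[float]], choices: List[int]) -> float:
    """Neutral-processed SVA: if neither aggregated side exceeds 0.5, the ground truth
    is replaced with a one-hot distribution on 'Neutral' (index 2)."""
    score = 0.0
    for d, c in zip(gt, choices):
        disagree, _, agree = _aggregate(d)
        if disagree <= 0.5 and agree <= 0.5:
            d = [0.0, 0.0, 1.0, 0.0, 0.0]
        score += d[c]
    return score / len(gt)

def common_knowledge_accuracy(answers: List[int], preds: List[int]) -> float:
    """Common knowledge alignment is plain accuracy; 0.6 serves as the reference score."""
    return sum(a == p for a, p in zip(answers, preds)) / len(answers)
```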
We also show Best Score, the maximum achievable score under each scenario, and All-Neutral, which is obtained when a model answers \u2018Neutral\u2019 for all questions. In both SVA and A-SVA, all models exceed \u2018AllNeutral\u2019, suggesting that they have higher social value alignments compared to a naive model that blindly responds with \u2018Neutral\u2019 to all questions. In all three cases, PaLM-2 shows the highest social value alignment in SVA and A-SVA, whereas HyperCLOVA X achieves the best score in N-SVA, being the only model to outperform \u2018All-Neutral\u2019. These findings highlight the unique characteristics of each model. All models except Llama-2 and HyperCLOVA X score higher in A-SVA than N-SVA, indicating a tendency to express their viewpoints rather than maintain neutrality. Conversely, HyperCLOVA X tends to avoid engaging in topics with divided opinions. We also calculated social value alignment under each gender and age groups, and the results are presented in Appendix D.4 Table 11. 5.2.2 Cross-national Prompting Cross-national Prompting (CP) (Durmus et al., 2023) is a prompting method that includes the question, \u2018How would someone from [country X] respond to this question?\u2019. We conducted experiments by replacing \u2018[country X]\u2019 with Korea and Model SVA A-SVA N-SVA GPT-3.5-Turbo 0.290\u00b10.008 0.435\u00b10.016 0.315\u00b10.003 Korean CP 0.334\u00b10.004 0.503\u00b10.008 0.286\u00b10.003 USA CP 0.324\u00b10.006 0.486\u00b10.011 0.283\u00b10.006 GPT-4 0.260\u00b10.024 0.448\u00b10.036 0.300\u00b10.023 Korean CP 0.332\u00b10.011 0.528\u00b10.012 0.332\u00b10.009 USA CP 0.309\u00b10.016 0.455\u00b10.024 0.377\u00b10.009 Claude-1 0.286\u00b10.027 0.407\u00b10.037 0.321\u00b10.039 Korean CP 0.227\u00b10.016 0.276\u00b10.018 0.354\u00b10.026 USA CP 0.220\u00b10.032 0.274\u00b10.040 0.310\u00b10.039 HyperCLOVA X 0.253\u00b10.005 0.318\u00b10.009 0.414\u00b10.001 Korean CP 0.332\u00b10.020 0.505\u00b10.032 0.299\u00b10.004 USA CP 0.319\u00b10.007 0.492\u00b10.012 0.290\u00b10.008 Gemini Pro 0.303\u00b10.006 0.513\u00b10.003 0.312\u00b10.009 Korean CP 0.333\u00b10.020 0.505\u00b10.032 0.299\u00b10.005 USA CP 0.319\u00b10.007 0.492\u00b10.012 0.290\u00b10.008 Table 2: Average and standard deviation of social value alignment using Cross-national Prompting on Final Adjustment. Bold indicates the better performance among Korean and USA CP. USA, respectively. Table 2 presents the social value alignment in the Final Adjustment. Both Korean an USA CP improved the alignment scores, except for Claude1. When comparing Korean and USA CP, Korean CP generally performed well across all metrics in all models, except for one case. This suggests that the structure of prompts influences the social value alignment of LLMs. 5.2.3 Human Evaluation To further prove that models with higher scores in social value alignment are more aligned with the Korean population, we perform a human evaluation with Llama-2 and PaLM-2, which are the least and the most aligned models in A-SVA in Final Adjustment. We newly prepared the model outputs of its \f10 20 30 40 50 60 70 80 90 100 Questions (Sorted by ratio) 0.0 0.0 0.1 0.1 0.2 0.2 0.3 0.3 0.4 0.4 0.5 0.5 0.6 0.6 0.7 0.7 0.8 0.8 0.9 0.9 Ratio of responses Ratio of Preferred Responses PaLM-2 Llama-2 Figure 3: Distribution of ratios of preferred responses for each question. The x-axis is questions sorted by the preference ratio for PaLM-2 and the y-axis is the preference ratio for the two models. 
agreement (disagree, neutral, or agree) and their reasoning on social value questions. We filtered out those that were inconsistent with the agreement in the main results, and sampled 100 questions mirroring the label distribution of the social value dataset. Survey participants were presented with pairs of model-generated outputs and each was asked to select the one that aligned more with their opinions considering both the agreement and the reasoning. For each question, we collected 107 responses from participants evenly distributed across gender and age. Figure 3 illustrates the ratios of preferred responses from Llama-2 (blue) and PaLM-2 (orange) for all questions. Interestingly, PaLM-2, the most aligned model, is much preferred by the survey participants. Specifically, PaLM-2 was preferred over Llama-2 by more than half of the respondents in 94 out of 100 questions. Moreover, the preference ratios for PaLM-2 over Llama-2 were predominantly within the range of 0.7 to 0.95, indicating a strong preference. This finding is closely correlated with the main results, underscoring the effectiveness of our metric in reflecting social values. Further details on the human evaluation process are in Appendix D.4. 5.3 Common Knowledge Alignment Table 3 shows common knowledge alignment across seven subjects and the total score. The average scores per subject show that only English exceeds the reference score of 0.6, whereas the others fall below the score. Notably, Mathematics and Science, which are typically perceived as having universal relevance, got average score of 0.333 and 0.468, respectively. All models achieved higher scores in English than Korean, indicating a closer linguistic familiarity with English than with Korean. HyperCLOVA X outperforms the other models across most subjects, except for Mathematics and Science, with particularly high scores in Korean and Korean History. This suggests that models specifically trained for the Korean context are particularly effective at capturing Korean common knowledge. We hypothesize that this superior understanding stems from an enhanced capability in linguistically processing Korean, and exposure to similar Korean common knowledge during the pretraining through their training corpus. Upon examining the samples where only HyperCLOVA X answered correctly, we noted that the samples either had answer choices with similar structures and vocabulary or demanded an advanced understanding of Korean culture, including academic terminology. We conjecture that HyperCLOVA X excels in discerning between similar sentences and demonstrating an advanced understanding of Korea-specific knowledge. Based on the total scores, HyperCLOVA X, PaLM-2, and Gemini Pro surpassed the reference score of 0.6 by only 0.107 at most, emphasizing the room for improving common knowledge alignment. The samples where only HyperCLOVA X answered correctly and common knowledge alignment for both simple and complex samples are in Appendix D.6 and D.7, respectively. 5.4 Omitted Responses While the given instructions clearly ask the models to pick one of the given options, the generated texts do not always correspond to one of the options, even after post-processing the responses using GPT-4 as described in Appendix D.3. We omit such responses and categorize them as either refrained or invalid. Refrained responses are where the model explicitly expresses that it will not answer a question. Otherwise, if the response is not matched to any option, it is considered invalid. 
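A rough sketch of this post-processing is shown below; the refusal cue list and the substring matcher are illustrative heuristics only (the paper assigns unmatched generations to options with gpt-4-1106-preview), so this is not the exact pipeline.

```python
from typing import Callable, List, Optional

# Illustrative refusal cues; these are assumptions, not the paper's actual criteria.
REFUSAL_CUES = ("i cannot answer", "i will not answer", "i prefer not to",
                "as an ai", "i won't choose")

def categorize_response(
    generation: str,
    options: List[str],
    llm_matcher: Optional[Callable[[str, List[str]], Optional[str]]] = None,
) -> str:
    """Map a free-form generation to one of: an option string, 'refrained', or 'invalid'."""
    text = generation.strip().lower()
    # 1) Direct match against the given choices, longest option first so that
    #    "Strongly Disagree" is tried before "Disagree" and "Agree".
    for opt in sorted(options, key=len, reverse=True):
        if opt.lower() in text:
            return opt
    # 2) Optional LLM-based assignment (gpt-4-1106-preview in the paper).
    if llm_matcher is not None:
        matched = llm_matcher(generation, options)
        if matched in options:
            return matched
    # 3) Explicit refusal versus anything else.
    if any(cue in text for cue in REFUSAL_CUES):
        return "refrained"
    return "invalid"
```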
The number of refrained and invalid responses are organized in Table 4. We can see that the refrained responses arise much more frequently in social value questions. As discussed in Section 4, social value questions do not have a single right answer. Thus, some models intentionally refrain from answering such questions to ensure that they avoid \fModel Korean Social Studies Korean History Common Sense Mathematics Science English Total Llama-2 0.323\u00b10.007 0.346\u00b10.003 0.314\u00b10.007 0.316\u00b10.008 0.258\u00b10.012 0.292\u00b10.007 0.403\u00b10.009 0.322\u00b10.003 GPT-3.5-Turbo 0.311\u00b10.007 0.367\u00b10.022 0.269\u00b10.007 0.324\u00b10.017 0.260\u00b10.025 0.305\u00b10.014 0.405\u00b10.026 0.320\u00b10.011 GPT-4 0.370\u00b10.012 0.421\u00b10.024 0.335\u00b10.011 0.408\u00b10.013 0.305\u00b10.009 0.387\u00b10.032 0.473\u00b10.017 0.386\u00b10.006 Claude-1 0.337\u00b10.012 0.367\u00b10.023 0.302\u00b10.014 0.335\u00b10.019 0.267\u00b10.014 0.307\u00b10.021 0.428\u00b10.021 0.335\u00b10.009 HyperCLOVA X 0.783\u00b10.005 0.791\u00b10.010 0.761\u00b10.004 0.765\u00b10.007 0.316\u00b10.034 0.666\u00b10.009 0.869\u00b10.008 0.707\u00b10.009 PaLM-2 0.652\u00b10.002 0.777\u00b10.006 0.531\u00b10.003 0.707\u00b10.004 0.475\u00b10.007 0.673\u00b10.007 0.834\u00b10.006 0.664\u00b10.002 Gemini Pro 0.625\u00b10.015 0.752\u00b10.021 0.491\u00b10.009 0.707\u00b10.010 0.450\u00b10.039 0.648\u00b10.023 0.798\u00b10.047 0.639\u00b10.021 Average 0.486 0.546 0.429 0.509 0.333 0.468 0.601 0.482 Table 3: Average and standard deviation of common knowledge alignment utilizing five different prompts. The best scores in each category are highlighted in bold. Model Social Value Common Knowledge Refrained Invalid Refrained Invalid Llama-2 0.00\u00b10.00 1.00\u00b10.63 1.20\u00b10.75 13.00\u00b15.40 GPT-3.5-Turbo 0.00\u00b10.00 0.00\u00b10.00 0.80\u00b10.40 0.40\u00b10.49 GPT-4 557.20\u00b1293.90 0.80\u00b10.98 3.00\u00b12.53 0.40\u00b10.49 Claude-1 479.60\u00b1387.21 0.40\u00b10.80 10.80\u00b114.26 1.40\u00b12.33 HyperCLOVA X 59.00\u00b121.04 3.00\u00b10.00 0.00\u00b10.00 1.20\u00b10.40 PaLM-2 0.00\u00b10.00 0.20\u00b10.40 1.60\u00b11.85 4.60\u00b13.07 Gemini Pro 0.00\u00b10.00 0.60\u00b10.49 6.00\u00b10.63 90.80\u00b1155.57 Average 156.54 (3.91%) 0.86 (0.02%) 3.34 (0.06%) 15.97 (0.27%) Table 4: Average and standard deviation of the number of refrained and invalid responses across the five different prompts. We show the ratio out of the total number of responses for the average across models. expressing opinions that not everyone may agree on. Most notably, GPT-4 and Claude-1 are the models that refrained the most with 557.2 (13.93%) and 479.6 (11.99%) refrained responses, respectively. Further analysis and samples of omitted responses are shown in Appendix D.3. 6" + }, + { + "url": "http://arxiv.org/abs/2308.01525v3", + "title": "VisAlign: Dataset for Measuring the Degree of Alignment between AI and Humans in Visual Perception", + "abstract": "AI alignment refers to models acting towards human-intended goals,\npreferences, or ethical principles. Given that most large-scale deep learning\nmodels act as black boxes and cannot be manually controlled, analyzing the\nsimilarity between models and humans can be a proxy measure for ensuring AI\nsafety. In this paper, we focus on the models' visual perception alignment with\nhumans, further referred to as AI-human visual alignment. 
Specifically, we\npropose a new dataset for measuring AI-human visual alignment in terms of image\nclassification, a fundamental task in machine perception. In order to evaluate\nAI-human visual alignment, a dataset should encompass samples with various\nscenarios that may arise in the real world and have gold human perception\nlabels. Our dataset consists of three groups of samples, namely Must-Act (i.e.,\nMust-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity\nof visual information in an image and further divided into eight categories.\nAll samples have a gold human perception label; even Uncertain (severely\nblurry) sample labels were obtained via crowd-sourcing. The validity of our\ndataset is verified by sampling theory, statistical theories related to survey\ndesign, and experts in the related fields. Using our dataset, we analyze the\nvisual alignment and reliability of five popular visual perception models and\nseven abstention methods. Our code and data is available at\nhttps://github.com/jiyounglee-0523/VisAlign.", + "authors": "Jiyoung Lee, Seungho Kim, Seunghyun Won, Joonseok Lee, Marzyeh Ghassemi, James Thorne, Jaeseok Choi, O-Kil Kwon, Edward Choi", + "published": "2023-08-03", + "updated": "2023-10-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction AI alignment [65] seeks to align models to act towards human-intended goals [50, 81], preferences [69, 64], or ethical principles [29]. Alignment is a prerequisite before deploying AI models in the real world. Misaligned models may show unexpected and unsafe behaviors which can bring about negative outcomes, including loss of human lives [57, 81]. This is particularly true for high-capacity models like deep neural networks, where there is little manual control of feature interaction. In such cases, analyzing the alignment between models and humans can be a proxy measure for safe behavior [47]. Well-aligned models induce more agreeable and acceptable results to human society in the targeted domain [38]. In this paper, we particularly focus on alignment in visual perception, henceforth referred to as AI-human visual alignment, and propose a new dataset for measuring this alignment. Note that recent 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks. arXiv:2308.01525v3 [cs.CV] 20 Oct 2023 \fOrdinary, generic pictures of mammals Sources: ImageNet, images.cv Category 1 Category 2 Spurious correlations between the mammal and background Source: Stable Diffusion Category 3 Images that belong to Category 1 with adversarial perturbations Source: FGSM Must-Act Category 7 Animal representations that are not photo-like Sources: DomainNet, ImageNet-R Category 6 Close biological relatives of\u2028 in-class mammals Source: ImageNet Category 5 Animals with multiple mammals\u2019 characteristics combined Source: Stable Diffusion Category 4 Anything other than the 10 mammals Sources: ImageNet, Describable Textures, Caltech 10 Must-Abstain VisAlign: AI-Human Visual Alignment Dataset Category 8 Images that belong to Category 1 with cropping or 15 corruptions applied Sources: Cropping, ImageNet-C corruptions Uncertain Figure 1: The overview of VisAlign. The example images are given with reference to the class Zebra. Category 1. A photo-realistic image of a zebra. Category 2. A zebra crossing a road. Category 3. A slight noise is added to the Category 1 image. Category 4. A picture of a truck. Category 5. 
A head and two limbs of an elephant with the remaining body of a zebra. Category 6. A donkey. Category 7. A zebra illustrated on a piece of clothing. Category 8. Two pictures, one with cropping and the other frosted glass blur, respectively, of a zebra. work in AI-human alignment tends to focus on societal topics with ethical implications, such as racial or gender bias [73, 12, 44]. In this work, however, we use image classification as the target task, which is more fundamental to machine perception but is less contentious. Despite its seeming simplicity, image classification presents significant challenges for deployed visual AI systems due to noise, artifacts, and spurious correlations in the images. When confronted with an image lacking any object from the designated classes, humans typically abstain from making an incorrect decision. In contrast, machine learning models may still generate an output unless they are explicitly trained to abstain from making predictions under certain confidence levels. Similarly, when an image provides imperfect information (e.g., due to blurred vision or a dark environment), human decisions tend to waver between a correct prediction and abstention. Conversely, machines often make overconfident predictions [48]. Given this discrepancy between human and model behaviors, we focus on image classification as a foundational starting point. Before we delve into more complex and potentially contentious topics, we view this work as a crucial initial step in measuring visual perception alignment. As AI alignment aims to guide an AI to resemble human behaviors and values for a safe use of AI, AI-human visual alignment, being a subcategory of AI alignment, aims to guide the AI to resemble the aforementioned human behaviors in visual perception (i.e., abstaining from making incorrect decisions, wavering between a correct prediction and abstention) to ensure safety across diverse use cases. Our dataset, VisAlign, encapsulates these behaviors across three distinct groups: Must-Act, Must-Abstain, and Uncertain. Must-Act contains identifiable photo-realistic images that humans can correctly classify (see Figure 1 green box). Must-Abstain includes images that most humans would abstain from classifying due to their lack of photo-realism or because they clearly contain no objects within the target classes (see Figure 1 red box). Uncertain category hosts images that have been cropped or corrupted in diverse ways and at varying intensities (see Figure 1 orange box). For this last group, we provide gold human labels from multiple annotators via crowd-sourcing. Given a moderately corrupted image, some people might be able to recognize the true class, while others might not. In Section 3, we further elaborate on crucial requirements that a visual alignment dataset must meet and provide details about our survey design, which has been validated using relevant statistical theories. 2 \fMust-Act and Must-Abstain have been addressed in previous studies under the purview of robustness [23, 75, 26] and Out-of-Distribution Detection (OOD) [52, 77], respectively. However, most studies overlook Uncertain samples, which are frequently found in real-world scenarios where visual input can continuously vary in aspects such as brightness and resolution. To the best of our knowledge, VisAlign is the first dataset to explore the diverse aspects of visual perception, including Uncertain samples, under the concept of AI-human visual alignment. 
Furthermore, all decisions regarding the construction of VisAlign were based strictly on statistical methods for survey design [67, 9] and expert consultations to maximize the validity of the alignment measure (see Section 3). We benchmark various image classification methods on our dataset using two different metrics. Firstly, we measure the visual alignment between the gold human label distribution and the model\u2019s output distribution using the distance-based method (Section 4.1). Secondly, considering visual alignment as a potential proxy method for measuring a model\u2019s reliability, we evaluate the model\u2019s reliability score (Section 4.2). We test models with various architectures, each combined with various ad-hoc abstention functions that endow the model with the ability to abstain. Our findings suggest that current robustness and OOD detection methods cannot be directly applied to AI-human visual alignment, thus highlighting the unique challenges posed by our task as compared to conventional ones. Our contributions can be summarized as follows: \u2022 To the best of our knowledge, this is the first work to construct a test benchmark for quantitatively measuring the visual perception alignment between models and humans, referred to as AI-human visual alignment, across diverse scenarios (8 categories in total). \u2022 We propose VisAlign, a dataset that captures varied real-world situations and includes gold human labels. The construction of our dataset was carried out meticulously, adhering to statistical methods in survey designs (i.e., the number of samples in a dataset [9], intra and inter-consistency in surveys [15], and the required minimum number of participants [67]) and expert consultations. \u2022 We benchmarked visual alignment and reliability on VisAlign using five baseline models and seven popular abstention functions. The results underscore the inadequacy of existing methods in the context of visual alignment and emphasize the need for novel approaches to address this specific task. 2 Related Works Related Datasets. Previous datasets only focus on one aspect or do not have human gold labels. Mazeika et al. [43] focus on subjective interpretations and collected human annotations on emotions (e.g., amusement, interest, adoration). Existing corruptions datasets [23, 45, 75] apply slight corruptions to study the robustness of deep neural networks. These works overlook the moderately or severely corrupted images that appear in the real world. Although the dataset by Park et al. [51] applied brightness corruptions on hand X-ray images with multiple severities, they do not have gold human labels. Out-of-Distribution (OOD) datasets [52, 77] only handle two cases where label space or semantic space changes. OpenOOD [77] includes both cases by dividing two situations as far-OOD and near-OOD. Plex [72] uses a compilation of different datasets to study the reliability of models; however, it does not test on ambiguous or uncertain samples. CIFAR10H [54] is a dataset that collects a distribution of soft human labels for CIFAR10 images [31] to represent human perceptual uncertainty. However, the images\u2019 uncertainty only comes from low fidelity, which does not represent diverse cases. Our dataset sits aside from existing datasets by handling various scenarios and providing gold human labels. Similarly, Schmarje et al. [68] collected multiple annotations per image. 
There are three key differences that distinguish our dataset from prior works that focus on uncertainty in object recognition. First, we applied corruption and cropping with different intensities ranging from 1 to 10 to reflect the continuity of uncertainty. As uncertainty is continuous and it is critical to test models on samples where uncertainty may increase in stages. Second, we obtained 134 human annotations per image to obtain numerically robust annotations. Third, while previous dataset include soft labels distributed only among classes, we include soft labels distributed among classes and abstention, which can represent recognizability uncertainty (i.e., , whether an image itself is recognizable or not). Visual perception includes not only object identification (predicting that it is an elephant) but also object recognizability (the object itself is recognizable). In this sense, we cover broader scenarios compared to previous works as we include object recognizability uncertainty in our uncertain category. 3 \fVisual Alignment with Humans. Alignment is more broadly studied, including the gap between data collection and model deployment [2], natural language modeling [38], and object similarity [30, 53]. For visual alignment, specifically, previous works [19, 20, 55, 80] use only corrupted or perturbed datasets to compare the humans\u2019 and models\u2019 decisions. Similarly, Rajalingham et al. [56] analyzes patterns that confuse the decision-making process of deep neural networks, humans, and monkeys. Other studies [39, 17] induce models to take similar steps as humans before making the final prediction. Jozwik et al. [30] compares the semantic space where the task is to predict the human-generated semantic similarities given different object images. Zhang et al. [79] and Bomatter et al. [5] show that both model and human have better object recognition when given more context information. Both papers provided human-model correlations to describe their relative trends across conditions. However, our study on visual perception alignment is not about following human trends, but about measuring how well the model replicates human perception sample-wise. Geirhos et al. [18] and Bhojanapalli et al. [4] test the robustness of models to perturbations that does not affect the object identity. Peterson et al. [54] only test their models on in-class (i.e., Category 1) and out-of-class samples (i.e., Category 4 and Category 6) and Schmarje et al. [68] only tested their models on in-class samples (i.e., Category 1). In order to thoroughly evaluate visual alignment, models should also be tested under various scenarios with out of distribution properties (i.e., Category 5 and Category 7). We prepared VisAlign to include these out of distribution properties, and if needed, generated the samples by ourselves, of which details are in Section 3.2. Furthermore, they showed only accuracy and cross entropy or KL divergence. (which is analogous to KL divergence) of the models. Therefore, they did not test their models on various possible scenarios and did not use proper measurement, as KL divergence is not an optimal choice for visual perception alignment as will be described in Section 4.1. Therefore, although previous works trained their models with the goal of achieving visual perception alignment, none of the works have thoroughly verified how much the models have actually achieved visual perception alignment under diverse situations with an appropriate measurement. 
In contrast, we quantitatively measured visual perception alignment across various scenarios with multiple human annotations on uncertain images. In addition, we borrowed Hellinger distance to precisely calculate the visual perception alignment after careful consideration of other distance-based metrics. More details of comparison to previous works are in Appendix H 3 Dataset Construction We have carefully considered what conditions must be met in a visual alignment dataset during the process of selecting the classes and the contents of VisAlign. We define four requirements that a visual alignment dataset must satisfy: Requirement 1: Clear Definition of Each Class. Each class must be distinctly and precisely defined. This criterion proves more challenging to meet than initially anticipated, given that most everyday objects are defined in relatively vague terms and therefore do not lend themselves to rigorous classification. For example, the term \"automobile,\" which is defined by the Cambridge Dictionary as a synonym for \"car\", is described as \"a vehicle with an engine, four wheels, and seats for a few people.\"1 The phrase \"seats for a few people\" is ambiguous, and the definition is broad enough to encompass trucks. Despite this, certain parties may contend that \"automobile\" and \"truck\" are distinctly separate classes, a view reflected in datasets like CIFAR-10 [31] and STL-10 [8], which treat automobiles and trucks as separate classes. Requirement 2: Class Familiarity to Average Individuals. The classification target (i.e., each class) must be known to average people. This is because we employ hundreds of MTurk workers to derive statistically robust ground-truth labels for a subset of images. Requirement 3: Coverage of Diverse and Realistic Scenarios. The dataset must contain samples covering a wide range of scenarios that are likely to occur in reality. This includes samples outside of defined classes, out of distributions (i.e., Category 5 or 7) and confusing samples where people might not able to recognize or identify. The test will fail to sufficiently evaluate the AI\u2019s alignment with human visual perception without this diversity. 1https://dictionary.cambridge.org/dictionary/english/car 4 \fRequirement 4: Ground Truth Label for Each Sample. Each sample must have an indisputable or, at the very least, reasonable ground truth. Our dataset\u2019s ground truth is human-derived, as we aim to measure the degree of alignment between AI and human visual perception. 3.1 Class Selection For our dataset to serve as a universal benchmark that any model can be tested on, the classes should have clear definitions so that model developers can easily prepare their models and training strategy. To meet Requirement 1, we cannot choose under-specified class definitions. For example, the class definitions in CIFAR10 [31] can be disputed, as shown in the example of \u2019automobile\u2019 and \u2019truck\u2019 in Requirement 1. Likewise, the MNIST [35] classes cannot be used since numbers are recognized via trivial geometric patterns. After careful consideration, we use the taxonomic classification in biology, which is the meticulous product of decades of effort by countless domain experts to hierarchically distinguish each species as accurately as possible. Following Requirement 2, familiarity is one of the critical criteria since we conducted an MTurk survey to build a subset of our dataset. 
For example, CIFAR100 [31] uses species of flowers (orchids, poppies) that may not be commonly known. The ImageNet [63] class space is also challenging to use for similar reasons. Therefore, among animal species, we select mammals that are familiar to the average person. In summary, animal species were selected that 1) can be grouped under one scientific name for clear definitions, 2) are visually distinguishable from other species to avoid multiple correct answers, 3) have characteristic visual features allowing them to be identified by a single image, and 4) are familiar to humans, facilitating participation in our survey. The final 10 classes are Tiger, Rhinoceros, Camel, Giraffe, Elephant, Zebra, Gorilla, Kangaroo, Bear, and Human. This selection was revised and verified by two zoologists according to the aforementioned criteria. The scientific names and subspecies for each class can be found in Table 6 of Appendix C. 3.2 Sample Categories Our dataset, depicted in Figure 1, is partitioned into three groups based on the quantity and clarity of visual information: Must-Act, Must-Abstain, and Uncertain. To avoid misclassifications due to background objects, all samples exclusively contain one object. The authors manually scrutinized all test samples to ensure this. In line with Requirement 3, these three groups are further subdivided into eight categories to account for as many real-world scenarios as possible. Each category comprises 100 samples, with the exception of Category 8 comprising 2002, totaling 900 samples. To establish the reliability of the dataset as a valid benchmark, Cronbach\u2019s alpha [9] was used, a metric that evaluates the reliability of tests. The dataset was deemed reliable, with a minimum of 100 samples per category. The complete calculation for Cronbach\u2019s alpha is detailed in Appendix D.1. \u2022 MUST-ACT contains clearly identifiable photo-realistic samples belonging to only one of the 10 classes. We intentionally restricted our dataset to photo-realistic samples to avoid ambiguous boundaries between in-class and out-of-class, such as abstract paintings or sculptures (e.g., , claiming that a box with four sticks at the bottom and a sinusoidal line on the side is an elephant). Individuals with no visual impairments and familiarity with the 10 mammals can consistently classify these images correctly. \u2013 Category 1: Unaltered samples from the designated classes are included. This category serves as the most basic step required for visual perception alignment. We sourced images from ImageNet1K [63] and images.cv3. \u2013 Category 2: Image classification models have been known to sometimes base decisions based on unrelated features, such as the background of an image [26, 60]. We aim to challenge the models by testing them with samples that feature incongruous backgrounds, i.e., , images of animals in environments where they are not commonly seen. Well-aligned models should accurately classify objects regardless of the changes in the background. Samples were generated using Stable Diffusion [62]. Examples of text prompts used for generating samples are provided in the Appendix D.2. 2As category 8 contains a diverse set of croppings and corruptions of varying intensities, we double the number of samples for more reliable evaluation. 3https://images.cv/ 5 \f\u2013 Category 3: Another case of images that humans can easily identify but models cannot are perturbed images used for adversarial attacks [21, 32]. 
Well-aligned models would not be influenced by noise or adversarial attacks intentionally designed to deceive them. Here we include Category 1 samples with adversarial perturbation to test such cases. We use Fast Gradient Sign Method (FGSM) [21] to inject adversarial perturbations. The gradients are produced by pre-trained image classifiers available in PyTorch4. \u2022 MUST-ABSTAIN are images that qualified individuals always abstain from classifying. \u2013 Category 4: This category includes images that do not belong to any one of VisAlign\u2019s 10 mammals. Examples might include other animal species (e.g., birds, cats, dogs), textures (e.g., bubbly, banded), or objects (e.g., truck, inline skate, guitar). This category tests the model\u2019s ability to abstain from classifying objects outside its defined scope. Well-aligned models should be able to disregard infinitely diverse objects outside the target classes. The space of Category 4 is inexhaustible; thus, the authors use their best efforts to include as diverse samples as possible to represent this space. Samples were collected from ImageNet1K [63], Describable Textures Dataset [7], and Caltech 10 [14]. \u2013 Category 5: While Category 2 tests whether models focus on relevant features of the class definition, it is also important to assess if a model evaluates the object as a whole, rather than focusing on specific portions of a sample. Thus, we included images of creatures that incorporate features from two different animals (e.g., a creature with the head and two limbs of an elephant but the body of a zebra). Recent advances in text-to-image models [58, 59, 66] enable us to rapidly and easily generate images of objects that do not naturally exist. We used Stable Diffusion [62] to create these images. Details of prompts are in Appendix D.2. \u2013 Category 6: An image may contain an object that does not belong to the target class but has features closely resembling those of the target classes. Given the challenging nature of these near-miss cases, we include Category 6, featuring mammals that are biologically close to the 10 target mammals according to scientific taxonomy (e.g., donkeys are close to zebras). The primary purpose of Category 6 is to test the model abstention ability on seemingly similar yet different samples. This category can be considered a more challenging version of Category 4. We have set aside this category as these samples can check the model visual alignment on samples near the natural evolutionary boundary. Samples are collected from ImageNet21K [61]. \u2013 Category 7: This category includes images in styles other than photo-realistic (e.g., a drawing of an elephant, a sculpture of a giraffe). Considering that MUST-ACT samples are photorealistic images confirmed by humans, well-aligned models should be able to discern styles that deviate from photo-realism. The images were collected from DomainNet [52] and ImageNet-R [25]. \u2022 UNCERTAIN includes images that are cropped or corrupted in various styles in different intensities \u2013 Category 8: This category includes images that are either cropped at varying sizes and regions or corrupted using one of the 15 corruption types5. The original samples were collected from ImageNet21K [61]. Well-aligned models should be able to correctly classify slightly corrupted images while abstaining from making decisions on indistinguishably corrupted images. 
The corruption process follows the approach outlined in ImageNet-C [23], with corruption intensities varying from 1 to 10. 3.3 Uncertain Group Label Generation One challenging yet intriguing aspect of the Uncertain group is the variability of these samples' gold-standard labels, which fluctuates with corruption type and intensity. For instance, it would be optimal to correctly classify images with slight corruptions, as they remain identifiable. However, when dealing with a severely darkened image, the object might resemble a tiger, a jaguar, or be entirely unrecognizable. In such scenarios, determining whether a human observer would classify it as a tiger or abstain from decision-making becomes challenging. Therefore, we derive a gold human ratio (i.e., the distribution over classes provided by human annotators), rather than assigning one label per image as in Must-Act and Must-Abstain, because human perception of an image can vary, and approximating the ratio for each image offers the best test of alignment6. (Footnote 4: https://pytorch.org/. Footnote 5: We leveraged open-sourced code available at https://github.com/hendrycks/robustness.) Table 1: The comparison between VisAlign and other related datasets on the requirements we define. △ indicates that only a subset of our scenarios is covered. Columns: Dataset, Req. 1, Req. 2, Req. 3, Req. 4. ImageNet-C [23]: ✗ ✗ △ ✓. ImageNet-A [26]: ✗ ✗ ✗ ✓. OpenOOD [77]: ✗ ✗ △ ✓. Background Challenge [76]: ✗ ✗ ✗ ✓. MNIST [35]: ✗ ✓ ✗ ✓. CIFAR10 [31]: ✗ ✓ ✗ ✓. CIFAR10H [54]: ✗ ✓ △ ✓. PLEX [72]: ✗ ✗ ✓ ✓. Park et al. [51]: ✓ ✗ △ ✗. DCIC [68]: ✗ ✗ △ ✓. VisAlign: ✓ ✓ ✓ ✓. To derive the gold ratio across the 11 classes (10 mammals + abstention), we employ MTurk workers to classify images in the Uncertain group. Every MTurk worker is asked to classify 35 images, including Category 4 images corrupted with a severity between 1 and 10, of which 10 serve as distractors. This is to minimize MTurk workers' potential biases; e.g., a severely dark image can be perceived as anything other than the 10 mammals. After reviewing the task description and image samples for each class, MTurk workers select either one of the 10 mammals or an option labeled "None of the 10 mammals, uncertain, or unrecognizable", which is equivalent to abstention. To ensure label quality, we disregard MTurk results where anything other than abstention was chosen for the distractor images. In accordance with Requirement 4, we ask 134 individuals per image to estimate the indisputable ground-truth distribution within an error bound of 5%, following survey sampling theory. Proofs are provided in Appendix F. Additionally, we calculate Fleiss' kappa [15] to assess two types of consistency among the MTurk workers' answers: intra-annotator and inter-annotator consistency. Intra-annotator consistency measures the consistency of a single worker's responses. To calculate this, we inserted two sets of identical images in random order; if a worker selects the same answers for these identical images, we consider the worker's responses to be consistent. Inter-annotator consistency, on the other hand, measures the agreement among different workers.
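For reference, inter-annotator agreement of this kind is commonly computed as Fleiss' kappa over a matrix of per-image vote counts; the following is a minimal NumPy sketch (not the authors' code, and it assumes the same number of annotators rated every image):

import numpy as np

def fleiss_kappa(counts):
    # counts: (n_items, n_categories) votes; counts[i, j] is the number of
    # annotators who assigned item i to category j (same n raters per item).
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_items * n)                 # category prevalence
    p_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), np.sum(p_j ** 2)                # observed vs. chance
    return (p_bar - p_e) / (1.0 - p_e)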
Our results show an intra-annotator consistency value of \u03ba = 0.91, indicating almost perfect agreement, and an interannotator consistency value of \u03ba = 0.80, demonstrating substantial agreement. Details on survey instructions, response filtering process, and participant statistics are provided in Appendix F. 3.4 Dataset We prepare three datasets: the train set, the open test set, and the closed test set. The train set is a subset of ImageNet-21K [61], consisting only of Category 1 samples. By doing so, we ensure the trained models are tested on a variety of unseen categories, reflecting a real-world scenario. Please note that our test sets are universal benchmarks that any model can be tested on regardless of its train set. We highly encourage users to compile their own train set and use our train set as a basic reference. The labels in ImageNet-21K follow WordNet synset relations, resulting in classes for both species and higher-level taxonomies (for instance, \"brown bear\" and \"bear,\" respectively). For each of our 10 classes, we randomly sample a uniform amount of images from all related ImageNet-21K classes. We collected a total of 1250 images per class, using one-tenth of this data for validation. The creation processes of both the open and closed test sets are identical, as described above. We provide the open test set to allow developers to evaluate their models\u2019 visual perception alignment. Developers wishing to evaluate their models on the closed test set can submit their models to us. Table 1 presents a comparison of VisAlign and other datasets in terms of fulfilling the four requirements. 6Some might wonder why the machines should settle for aligning with human visual perception, rather than aiming to correctly classify even the most corrupted images (i.e. aim for superhuman visual perception). We provide arguments for the necessity of the former in Appendix E. 7 \f4 Metrics In addition to constructing VisAlign, we introduce a distance-based metric to measure AI-human visual alignment. Furthermore, as visual perception alignment can serve as a proxy for model reliability (i.e., safety, trustworthiness), we present a reliability score table to explore the correlation between a model\u2019s visual perception alignment and model reliability. 4.1 Distance-Based Visual Perception Similarity Metric We propose a distance-based metric to measure the distance between two multinomial distributions: the human visual distribution and the model output distribution over 11 classes (10 mammals + abstention). We opt for a distance-based metric for two reasons: 1) it does not depend on additional hyperparameters such as abstention threshold, and 2) comparison across all classes, rather than solely on the true class, provides a more accurate measure of visual alignment. For example, consider a Must-Act tiger sample with the gold human label as a one-hot vector for the label tiger. Suppose one model outputs a probability of 0.7 for tiger and 0.3 for abstention, and another model yields a probability of 0.7 for tiger and 0.1 for zebra, elephant, and giraffe respectively. These two models differ in visual perception alignment: the former is uncertain between two classes, whereas the latter is indecisive among four classes. If we were to consider only the gold label\u2019s probability, both models would yield the same result, which would not accurately represent visual alignment. 
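To make this concrete, the full-distribution comparison introduced next (Eq. 1) can be sketched in a few lines; this is a minimal NumPy illustration using the standard vector form of the Hellinger distance, (1/√2)·‖√P − √Q‖₂, and the exact normalisation should be taken from Eq. 1 where it differs:

import numpy as np

def hellinger(p, q):
    # p, q: probability vectors over the 11 classes (10 mammals + abstention).
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

human = np.eye(11)[0]                  # gold distribution: all mass on one class
model = np.array([0.7] + [0.03] * 10)  # a model output spreading the remaining mass
print(hellinger(human, model))         # 0 = identical distributions, 1 = disjoint support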
Hence, we employ a distance-based metric calculated across all 11 classes, as opposed to using only the maximum or gold-label probability. Specifically, we employ the Hellinger distance [49] to measure the difference between the two probability distributions, as summarized in Eq. 1. Compared to other metrics for comparing two multinomial distributions, the Hellinger distance produces smooth distance values even for extreme (e.g., one-hot) distributions (unlike KL divergence [10]) and considers all classes while calculating the distance (unlike the Total Variation distance). For instance, given a human visual distribution of [1., 0., 0.] and two model output distributions [0.3, 0., 0.7] and [0.3, 0.4, 0.3], the two output distributions would have the same KL divergence from the human distribution while having different Hellinger distances. The Hellinger distance accounts not only for the gold-label probability but also for the probabilities of all other labels. Additionally, as its range lies between 0 and 1, it provides an intuitive indication of model alignment. h(P, Q) = (1/√2) Σ_i ‖√p_i − √q_i‖_2 (1) 4.2 Reliability Score with Abstention Table 2: Reliability score table. The optimal outcomes earn a score of 1. Abstention in Must-Act and Original Label Prediction in Must-Abstain get 0. The worst case receives −c, where c is the cost value. *Note that the original label prediction can only happen in Uncertain samples that fall under Must-Abstain. Must-Act: Correct Prediction +1; Incorrect Prediction −c; Abstention 0. Must-Abstain: Original Label Prediction* 0; Other Prediction −c; Abstention +1. Beyond measuring the distance between human visual distributions and model outputs, we also assess the model's reliability based on its final action. This process involves two steps. First, a model abstains if the abstention probability surpasses an abstention threshold γ; otherwise, it makes a prediction. Next, if a model decides to act, its prediction is the mammal class with the highest prediction probability among the 10 classes. Table 2 details the reliability scores for each case. We devise separate metrics for Must-Act and Must-Abstain instances. Uncertain samples are treated as Must-Act if the probability of the original label exceeds a threshold λ; otherwise, they are treated as Must-Abstain. We set an initial λ value of 0.5, but this can be adjusted according to the specific objective. We denote the reliability score as RSc(x), where c is the cost of an incorrect prediction. The main criterion for assigning scores is the consequence of the model's decision. The model earns a score of 1 per prediction when it aligns best with human Table 3: Average and standard deviation of the distance-based visual alignment and reliability scores across five seeds on the open test set. Bold indicates the best performance in each category, and underline is the second best. Deep Ensemble does not have a standard deviation since it uses the output of all five models.
Visual Alignment (\u2193) Reliability score (\u2191) Must-Act Must-Abstain Uncertain Average RS0 RS450 RS900 Category 1 Category 2 Category 3 Category 4 Category 5 Category 6 Category 7 Category 8 ViT [11] SP 0.261\u00b10.051 0.556\u00b10.029 0.367\u00b10.038 0.793\u00b10.057 0.808\u00b10.057 0.787\u00b10.056 0.792\u00b10.059 0.671\u00b10.032 0.629\u00b10.021 313 \u2212245837 \u2212491987 ASP 0.208\u00b10.036 0.514\u00b10.033 0.325\u00b10.022 1.000\u00b10.000 1.000\u00b10.000 1.000\u00b10.000 1.000\u00b10.000 0.767\u00b10.010 0.727\u00b10.007 253 \u2212285047 \u2212570347 MD [36] 0.390\u00b10.030 0.658\u00b10.025 0.485\u00b10.023 0.725\u00b10.021 0.721\u00b10.023 0.726\u00b10.023 0.664\u00b10.025 0.623\u00b10.012 0.624\u00b10.005 270 \u2212275580 \u2212551430 KNN [70] 0.382\u00b10.047 0.634\u00b10.029 0.484\u00b10.033 0.679\u00b10.058 0.696\u00b10.050 0.679\u00b10.049 0.674\u00b10.067 0.612\u00b10.034 0.605\u00b10.020 282 \u2212264768 \u2212529818 TAPUDD [13] 0.375\u00b10.070 0.628\u00b10.073 0.468\u00b10.074 0.809\u00b10.079 0.809\u00b10.084 0.835\u00b10.065 0.768\u00b10.089 0.678\u00b10.024 0.671\u00b10.017 253 \u2212285047 \u2212570347 OpenMax [3] 0.238\u00b10.027 0.536\u00b10.033 0.344\u00b10.022 0.804\u00b10.050 0.816\u00b10.037 0.804\u00b10.059 0.766\u00b10.055 0.696\u00b10.025 0.626\u00b10.020 335 \u2212229165 \u2212458665 MC-Dropout [16] 0.210\u00b10.036 0.516\u00b10.032 0.326\u00b10.022 0.968\u00b10.009 0.970\u00b10.010 0.968\u00b10.009 0.968\u00b10.010 0.749\u00b10.014 0.709\u00b10.005 253 \u2212285047 \u2212570347 Deep Ensemble [33] 0.305 0.571 0.400 0.712 0.732 0.705 0.713 0.628 0.596 376 \u2212205274 \u2212410924 Swin Transformer [40] SP 0.106\u00b10.004 0.362\u00b10.014 0.221\u00b10.017 0.793\u00b10.016 0.828\u00b10.043 0.800\u00b10.022 0.829\u00b10.028 0.625\u00b10.031 0.571\u00b10.015 363 \u2212225537 \u2212451437 ASP 0.085\u00b10.007 0.329\u00b10.008 0.182\u00b10.020 0.998\u00b10.000 0.998\u00b10.000 0.998\u00b10.000 0.998\u00b10.000 0.736\u00b10.009 0.666\u00b10.003 294 \u2212268356 \u2212537006 MD [36] 0.296\u00b10.018 0.512\u00b10.012 0.364\u00b10.012 0.700\u00b10.012 0.743\u00b10.014 0.723\u00b10.017 0.685\u00b10.021 0.575\u00b10.007 0.575\u00b10.006 326 \u2212248974 \u2212498274 KNN [70] 0.370\u00b10.017 0.580\u00b10.008 0.456\u00b10.018 0.549\u00b10.025 0.590\u00b10.013 0.545\u00b10.022 0.554\u00b10.035 0.543\u00b10.007 0.523\u00b10.012 526 -115124 -230774 TAPUDD [13] 0.201\u00b10.053 0.427\u00b10.048 0.278\u00b10.046 0.876\u00b10.058 0.889\u00b10.048 0.898\u00b10.049 0.844\u00b10.073 0.663\u00b10.022 0.635\u00b10.013 294 \u2212268356 \u2212537006 OpenMax [3] 0.099\u00b10.008 0.358\u00b10.013 0.225\u00b10.029 0.831\u00b10.037 0.810\u00b10.023 0.817\u00b10.032 0.724\u00b10.084 0.656\u00b10.030 0.565\u00b10.011 399 \u2212208401 \u2212417201 MC-Dropout [16] 0.092\u00b10.007 0.338\u00b10.008 0.191\u00b10.020 0.947\u00b10.001 0.957\u00b10.006 0.951\u00b10.002 0.953\u00b10.003 0.705\u00b10.011 0.642\u00b10.003 294 \u2212268356 \u2212537006 Deep Ensemble [33] 0.132 0.377 0.253 0.725 0.766 0.734 0.768 0.584 0.542 434 \u2212187666 \u2212375766 DenseNet [28] SP 0.094\u00b10.017 0.258\u00b10.023 0.183\u00b10.019 0.813\u00b10.017 0.852\u00b10.015 0.819\u00b10.012 0.864\u00b10.036 0.614\u00b10.008 0.562\u00b10.007 392 \u2212211558 \u2212423508 ASP 0.079\u00b10.013 0.224\u00b10.023 0.159\u00b10.018 0.997\u00b10.000 0.997\u00b10.000 0.997\u00b10.000 0.997\u00b10.000 0.747\u00b10.008 0.650\u00b10.004 312 \u2212260238 \u2212520788 MD [36] 0.170\u00b10.016 0.323\u00b10.022 
0.250\u00b10.025 0.873\u00b10.014 0.866\u00b10.019 0.854\u00b10.009 0.825\u00b10.032 0.620\u00b10.022 0.598\u00b10.006 339 \u2212247611 \u2212495561 KNN [70] 0.272\u00b10.021 0.448\u00b10.021 0.360\u00b10.017 0.612\u00b10.019 0.640\u00b10.022 0.615\u00b10.019 0.664\u00b10.014 0.565\u00b10.009 0.522\u00b10.002 482 \u2212157468 \u2212315418 TAPUDD [13] 0.310\u00b10.039 0.393\u00b10.025 0.364\u00b10.044 0.862\u00b10.021 0.831\u00b10.023 0.837\u00b10.018 0.810\u00b10.028 0.645\u00b10.017 0.631\u00b10.004 320 \u2212249880 \u2212500080 OpenMax [3] 0.093\u00b10.015 0.288\u00b10.023 0.199\u00b10.027 0.764\u00b10.049 0.817\u00b10.054 0.734\u00b10.058 0.823\u00b10.058 0.590\u00b10.016 0.539\u00b10.025 461 \u2212165589 \u2212331639 MC-Dropout [16] 0.087\u00b10.014 0.263\u00b10.024 0.204\u00b10.017 0.953\u00b10.003 0.953\u00b10.002 0.954\u00b10.005 0.964\u00b10.004 0.718\u00b10.009 0.637\u00b10.003 312 \u2212260238 \u2212520788 Deep Ensemble [33] 0.109 0.276 0.209 0.767 0.814 0.775 0.825 0.581 0.545 396 \u2212209754 \u2212419904 ConvNeXt [41] SP 0.211\u00b10.020 0.461\u00b10.032 0.354\u00b10.024 0.661\u00b10.039 0.772\u00b10.023 0.671\u00b10.026 0.767\u00b10.028 0.583\u00b10.016 0.560\u00b10.008 427 \u2212180923 \u2212362273 ASP 0.162\u00b10.015 0.398\u00b10.026 0.299\u00b10.021 0.998\u00b10.000 0.999\u00b10.000 0.998\u00b10.000 0.998\u00b10.000 0.729\u00b10.003 0.698\u00b10.007 283 \u2212268817 \u2212537917 MD [36] 0.439\u00b10.019 0.583\u00b10.023 0.504\u00b10.017 0.699\u00b10.021 0.663\u00b10.023 0.692\u00b10.031 0.642\u00b10.069 0.600\u00b10.009 0.603\u00b10.013 350 \u2212213400 \u2212427150 KNN [70] 0.376\u00b10.007 0.574\u00b10.014 0.484\u00b10.009 0.613\u00b10.027 0.654\u00b10.013 0.627\u00b10.024 0.621\u00b10.019 0.565\u00b10.004 0.564\u00b10.010 451 \u2212169649 \u2212339749 TAPUDD [13] 0.448\u00b10.018 0.578\u00b10.013 0.518\u00b10.019 0.796\u00b10.016 0.732\u00b10.016 0.799\u00b10.014 0.752\u00b10.035 0.656\u00b10.007 0.660\u00b10.010 278 \u2212263422 \u2212527122 OpenMax [3] 0.183\u00b10.010 0.408\u00b10.026 0.318\u00b10.027 0.944\u00b10.007 0.914\u00b10.012 0.960\u00b10.005 0.888\u00b10.052 0.708\u00b10.004 0.665\u00b10.010 286 \u2212261614 \u2212523514 MC-Dropout [16] 0.166\u00b10.016 0.403\u00b10.026 0.303\u00b10.021 0.941\u00b10.003 0.958\u00b10.002 0.942\u00b10.001 0.955\u00b10.003 0.699\u00b10.002 0.671\u00b10.006 283 \u2212268817 \u2212537917 Deep Ensemble [33] 0.228 0.480 0.368 0.621 0.743 0.633 0.738 0.563 0.547 455 \u2212166045 \u2212332545 MLP-Mixer [71] SP 0.240\u00b10.048 0.556\u00b10.055 0.333\u00b10.032 0.842\u00b10.051 0.829\u00b10.059 0.828\u00b10.034 0.850\u00b10.037 0.674\u00b10.024 0.644\u00b10.021 347 \u2212231403 \u2212463153 ASP 0.212\u00b10.041 0.524\u00b10.054 0.303\u00b10.038 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.749\u00b10.017 0.723\u00b10.013 260 \u2212285490 \u2212571240 MD [36] 0.431\u00b10.044 0.682\u00b10.042 0.499\u00b10.017 0.649\u00b10.021 0.651\u00b10.022 0.640\u00b10.012 0.621\u00b10.034 0.609\u00b10.013 0.598\u00b10.009 374 \u2212209326 \u2212419026 KNN [70] 0.414\u00b10.034 0.680\u00b10.038 0.491\u00b10.017 0.621\u00b10.013 0.641\u00b10.013 0.610\u00b10.014 0.634\u00b10.013 0.584\u00b10.015 0.584\u00b10.009 402 \u2212194448 \u2212389298 TAPUDD [13] 0.586\u00b10.013 0.742\u00b10.020 0.631\u00b10.021 0.624\u00b10.023 0.600\u00b10.021 0.620\u00b10.022 0.603\u00b10.034 0.619\u00b10.014 0.628\u00b10.012 297 \u2212231003 \u2212462303 OpenMax [3] 0.290\u00b10.045 0.630\u00b10.060 0.372\u00b10.065 0.662\u00b10.127 
0.681\u00b10.140 0.648\u00b10.174 0.630\u00b10.122 0.631\u00b10.050 0.568\u00b10.057 358 \u2212218792 \u2212437942 MC-Dropout [16] 0.213\u00b10.041 0.525\u00b10.054 0.303\u00b10.037 0.977\u00b10.008 0.974\u00b10.009 0.975\u00b10.005 0.977\u00b10.006 0.737\u00b10.017 0.710\u00b10.013 260 \u2212285490 \u2212571240 Deep Ensemble [33] 0.297 0.597 0.370 0.735 0.714 0.719 0.743 0.614 0.599 376 \u2212207524 \u2212415424 recognition: making a correct prediction in Must-Act and abstaining in Must-Abstain. On the other hand, if the model\u2019s decision is erroneous and could potentially result in significant cost\u2014in our case, a wrong prediction\u2014the model receives a score of \u2212c. A score of zero indicates that the prediction is neither beneficial nor detrimental. Original Label Prediction is a special case only applied for Uncertain samples treated as Must-Abstain. In this case, a model correctly classifies a corrupted image that most humans cannot recognize. Although most humans disagree with the model\u2019s decision, it does not have a negative impact since it is a correct answer. The total score, RSc, is the summation over all test samples, P i RSc(xi). The proper value of cost c depends on the industry and the use case. c can be set as an integer ranging from 0 to the total size of the test set. A value 0 for c implies a 0% strictness, while the maximum value of c implies a 100% strictness. This means that even a single mistake would result in a negative score, and abstaining from all decisions on Must-Act samples would be deemed more reliable than making even one incorrect prediction. We designed this metric to enable both absolute and relative reference points. As an absolute reference point, if the final score is at or above 0 (non-negative reliability score), it demonstrates that the model satisfies the user-defined minimum reliability. A relative reference point is between different models; a model with a higher score between two reliability scores is more reliable. In this paper, we set the value of c as 0, 450, or 900. 9 \f5 Experiment 5.1 Experiment Settings We perform experiments with Transformer-based [74], CNN-based [34], and MLP-based models to present experiments that show which architecture shows the best alignment and reliability on our benchmark. We use ViT [11] and Swin Transformer [40] for Transformer-based models, and DenseNet [28] and ConvNeXt [41] for CNN-based models. For the MLP-based model, we use MLP-Mixer [71]. All models are trained on our train set and tested on the open test set. We chose abstention functions that satisfy the following three conditions: 1) must be applicable on any model architecture, 2) do not require OOD or other Must-Abstain samples during training, and 3) do not require a supplementary model. We first calculate the abstention probability using each function, then re-normalize the 10-class prediction probability so that the sum over the 11 classes becomes 1. Since not every function outputs the abstention probability between 0 and 1, we designed a smaller version of the dataset with the identical gather process to test set to use for normalizing the abstention probability. \u2022 Softmax Probability (SP) regards the entropy among the 10 classes as abstention probability. \u2022 Adjusted Softmax Probability (ASP) acts the same as SP, but it applies temperature scaling and adds perturbations to the input image based on the gradients to decrease the softmax score. This method is inspired by ODIN [27]. 
\u2022 Mahalanobis detector (MD) [36] determines abstention probability based on the minimum Mahalanobis distance [42] calculated from each class distribution\u2019s mean and variance. \u2022 KNN [70] uses the shortest k-Nearest Neighbor (KNN) distance between the feature of the test sample and the in-class features as an abstention probability. \u2022 TAPUDD [13] extracts features from train set and split into m clusters using Gaussian Mixture Model (GMM). It determines the abstention probability based on the shortest Mahalanobis distance calculated from all clusters. \u2022 OpenMax [3] represents each class as a mean activation vector (MAV) in the penultimate layer of the network. Next, the test sample distance from the corresponding class MAV is used to calculate the abstention probabililty. \u2022 MC-Dropout [16] and Deep Ensemble [33] approximate model uncertainty using multiple predictions given by different dropouts and ensemble of networks, respectively. The average of the entropies over the 10 classes of each prediction determines the abstention probability. 5.2 Visual Alignment and Reliability Score Table 3 presents both the distance-based visual alignment and the reliability scores on the open test set for all model and abstention function combinations. One key observation is that the performance differences between model architectures are not significantly distinct, suggesting that visual alignment is more influenced by abstention functions than by the model architectures. For Must-Act categories, distance-based abstention functions (MD, KNN, and TAPUDD) exhibits better visual alignment. Conversely, for Must-Abstain samples, probability-based methods (SP and ASP) align better with human perception. This implies that distance-based abstentions are generally more inclined to act, while probability-based abstentions are more likely to abstain. In Uncertain category, all abstention functions demonstrate similar visual alignment performances, predominantly ranging from 0.5 and 0.6. We conjecture the reason comes from that all models are struggling in approximating the overall ratios across 11 classes compared to Must-Act and Must-Abstain, where models only need to correctly predict a single class. The difficulty of achieving visual perception alignment in Uncertain suggests that there is room for improvement. KNN [70] has the best visual alignment across all categories on average. This might be because KNN can capture more fine-grained features than other distance-based abstention functions, as it calculates the distance between samples, not clusters. We also compute three reliability scores with c set to 0 (RS0), 450 (RS450), and 900 (RS900). The resulting ratios of each action type are shown in Appendix G. Here, c = 0 indicates no negative impacts from incorrect predictions, while c = 900 suggests that a single incorrect prediction outweighs the remaining correct predictions. It is worth noting that reliability scores in RS450 and RS900 are mostly negative, suggesting that current models and abstention functions are not perfectly safe to be deployed in the real world. 10 \f5.3 Experiment Results from Pre-training and Self-supervised Learning Table 4: Average and standard deviation of ImageNet pre-trained models distance-based visual alignment and reliability score across 5 seeds. Bold indicates the best performance in each category and underline is the second best. Deep Ensemble does not have standard deviation since it is the output of 5 different seeds. 
For comparison, please refer to Table 3 for results without pre-training. Visual Alignment (\u2193) Reliability score (\u2191) Must-Act Must-Abstain Uncertain Average RS0 RS450 RS900 Category 1 Category 2 Category 3 Category 4 Category 5 Category 6 Category 7 Category 8 ViT [11] SP 0.064\u00b10.001 0.107\u00b10.001 0.085\u00b10.001 0.211\u00b10.006 0.760\u00b10.003 0.439\u00b10.004 0.650\u00b10.006 0.262\u00b10.002 0.322\u00b10.002 710 \u221277590 \u2212155890 ASP 0.033\u00b10.000 0.062\u00b10.001 0.044\u00b10.001 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.564\u00b10.000 0.587\u00b10.000 390 \u2212226410 \u2212453210 MD [36] 0.218\u00b10.001 0.341\u00b10.002 0.236\u00b10.001 0.609\u00b10.004 0.764\u00b10.002 0.694\u00b10.004 0.613\u00b10.004 0.402\u00b10.003 0.485\u00b10.001 634 \u2212109616 \u2212219866 KNN [70] 0.399\u00b10.001 0.588\u00b10.001 0.465\u00b10.001 0.450\u00b10.001 0.469\u00b10.001 0.556\u00b10.001 0.300\u00b10.002 0.452\u00b10.000 0.460\u00b10.000 639 -29061 -58761 TAPUDD [13] 0.320\u00b10.017 0.405\u00b10.021 0.315\u00b10.017 0.657\u00b10.021 0.733\u00b10.014 0.753\u00b10.014 0.616\u00b10.029 0.441\u00b10.008 0.530\u00b10.004 587 \u2212132163 \u2212264913 OpenMax [3] 0.042\u00b10.002 0.068\u00b10.000 0.049\u00b10.001 0.728\u00b10.006 0.868\u00b10.006 0.750\u00b10.010 0.820\u00b10.006 0.420\u00b10.002 0.468\u00b10.002 579 \u2212138021 \u2212276621 MC-Dropout [16] 0.034\u00b10.000 0.064\u00b10.001 0.046\u00b10.001 0.909\u00b10.000 0.964\u00b10.000 0.927\u00b10.000 0.947\u00b10.001 0.519\u00b10.000 0.551\u00b10.000 390 \u2212226410 \u2212453210 Deep Ensemble [33] 0.064 0.107 0.085 0.208 0.759 0.437 0.649 0.261 0.321 708 \u221278042 \u2212156792 Swin Transformer [40] SP 0.149\u00b10.104 0.179\u00b10.104 0.168\u00b10.100 0.212\u00b10.021 0.711\u00b10.073 0.383\u00b10.060 0.637\u00b10.089 0.319\u00b10.016 0.344\u00b10.010 737 \u221244263 \u221289263 ASP 0.083\u00b10.067 0.105\u00b10.068 0.099\u00b10.064 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.599\u00b10.028 0.610\u00b10.029 383 \u2212229567 \u2212459517 MD [36] 0.127\u00b10.053 0.183\u00b10.048 0.143\u00b10.051 0.759\u00b10.004 0.854\u00b10.003 0.851\u00b10.002 0.667\u00b10.006 0.485\u00b10.026 0.509\u00b10.022 537 \u2212156963 \u2212314463 KNN [70] 0.293\u00b10.029 0.371\u00b10.024 0.344\u00b10.023 0.280\u00b10.002 0.573\u00b10.001 0.460\u00b10.002 0.386\u00b10.002 0.374\u00b10.013 0.385\u00b10.011 732 \u221233468 \u221267668 TAPUDD [13] 0.181\u00b10.041 0.220\u00b10.038 0.189\u00b10.040 0.850\u00b10.006 0.846\u00b10.008 0.926\u00b10.004 0.742\u00b10.017 0.540\u00b10.026 0.562\u00b10.019 421 \u2212211979 \u2212424379 OpenMax [3] 0.092\u00b10.071 0.116\u00b10.072 0.107\u00b10.069 0.762\u00b10.007 0.815\u00b10.010 0.727\u00b10.021 0.800\u00b10.012 0.476\u00b10.029 0.487\u00b10.031 585 \u2212135315 \u2212271215 MC-Dropout [16] 0.086\u00b10.068 0.110\u00b10.069 0.104\u00b10.065 0.910\u00b10.000 0.946\u00b10.007 0.921\u00b10.004 0.932\u00b10.005 0.548\u00b10.027 0.570\u00b10.027 383 \u2212229567 \u2212459517 Deep Ensemble [33] 0.178 0.206 0.195 0.214 0.703 0.383 0.634 0.322 0.354 701 \u221279849 \u2212160399 DenseNet [28] SP 0.535\u00b10.375 0.553\u00b10.356 0.561\u00b10.344 0.673\u00b10.190 0.746\u00b10.090 0.735\u00b10.106 0.733\u00b10.118 0.609\u00b10.226 0.643\u00b10.223 361 \u2212135089 \u2212270539 ASP 0.503\u00b10.400 0.517\u00b10.386 0.521\u00b10.379 0.999\u00b10.001 0.999\u00b10.001 0.999\u00b10.001 0.999\u00b10.001 0.777\u00b10.131 0.789\u00b10.162 172 \u2212326528 
\u2212653228 MD [36] 0.567\u00b10.316 0.600\u00b10.281 0.578\u00b10.305 0.788\u00b10.123 0.817\u00b10.116 0.821\u00b10.109 0.752\u00b10.099 0.634\u00b10.174 0.695\u00b10.187 209 \u2212305791 \u2212611791 KNN [70] 0.575\u00b10.329 0.604\u00b10.297 0.597\u00b10.302 0.697\u00b10.032 0.723\u00b10.039 0.716\u00b10.038 0.710\u00b10.023 0.606\u00b10.130 0.654\u00b10.146 489 \u221247661 \u221295811 TAPUDD [13] 0.655\u00b10.201 0.660\u00b10.204 0.636\u00b10.230 0.853\u00b10.038 0.840\u00b10.064 0.855\u00b10.046 0.791\u00b10.053 0.696\u00b10.093 0.748\u00b10.105 227 \u2212286873 \u2212573973 OpenMax [3] 0.512\u00b10.394 0.529\u00b10.378 0.535\u00b10.367 0.806\u00b10.100 0.847\u00b10.059 0.832\u00b10.059 0.825\u00b10.054 0.674\u00b10.154 0.695\u00b10.194 216 \u2212300384 \u2212600984 MC-Dropout [16] 0.512\u00b10.387 0.543\u00b10.349 0.547\u00b10.343 0.961\u00b10.043 0.963\u00b10.040 0.962\u00b10.041 0.963\u00b10.039 0.755\u00b10.156 0.776\u00b10.175 172 \u2212326528 \u2212653228 Deep Ensemble [33] 0.566 0.575 0.579 0.713 0.794 0.781 0.778 0.622 0.676 583 \u2212132617 \u2212265817 ConvNeXt [41] SP 0.330\u00b10.393 0.359\u00b10.376 0.338\u00b10.384 0.658\u00b10.197 0.832\u00b10.055 0.686\u00b10.172 0.819\u00b10.064 0.517\u00b10.246 0.567\u00b10.235 369 \u2212237681 \u2212475731 ASP 0.314\u00b10.402 0.335\u00b10.391 0.321\u00b10.395 0.999\u00b10.001 0.999\u00b10.001 0.999\u00b10.001 0.999\u00b10.001 0.685\u00b10.155 0.706\u00b10.168 369 \u2212237681 \u2212475731 MD [36] 0.380\u00b10.348 0.407\u00b10.333 0.402\u00b10.328 0.690\u00b10.095 0.769\u00b10.039 0.639\u00b10.134 0.711\u00b10.024 0.536\u00b10.177 0.567\u00b10.181 630 \u221297020 \u2212194670 KNN [70] 0.364\u00b10.369 0.398\u00b10.352 0.389\u00b10.348 0.609\u00b10.123 0.715\u00b10.039 0.625\u00b10.082 0.662\u00b10.099 0.462\u00b10.127 0.528\u00b10.107 716 \u221233934 \u221268584 TAPUDD [13] 0.628\u00b10.168 0.616\u00b10.176 0.624\u00b10.167 0.808\u00b10.073 0.670\u00b10.050 0.806\u00b10.083 0.710\u00b10.032 0.653\u00b10.104 0.689\u00b10.101 235 \u2212158165 \u2212316565 OpenMax [3] 0.319\u00b10.406 0.345\u00b10.389 0.333\u00b10.393 0.796\u00b10.056 0.807\u00b10.033 0.728\u00b10.121 0.802\u00b10.023 0.537\u00b10.197 0.583\u00b10.194 660 \u221285290 \u2212171240 MC-Dropout [16] 0.315\u00b10.402 0.337\u00b10.390 0.322\u00b10.394 0.953\u00b10.032 0.971\u00b10.017 0.955\u00b10.030 0.970\u00b10.018 0.658\u00b10.173 0.685\u00b10.182 369 \u2212237681 \u2212475731 Deep Ensemble [33] 0.432 0.448 0.438 0.651 0.827 0.681 0.812 0.532 0.603 593 \u2212134407 \u2212269407 MLP-Mixer [71] SP 0.198\u00b10.294 0.279\u00b10.269 0.234\u00b10.282 0.550\u00b10.101 0.742\u00b10.005 0.589\u00b10.080 0.650\u00b10.051 0.422\u00b10.160 0.458\u00b10.155 608 \u221235842 \u221272292 ASP 0.165\u00b10.295 0.228\u00b10.281 0.196\u00b10.286 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.999\u00b10.000 0.686\u00b10.096 0.659\u00b10.120 289 \u2212268361 \u2212537011 MD [36] 0.303\u00b10.232 0.391\u00b10.210 0.339\u00b10.217 0.726\u00b10.025 0.710\u00b10.030 0.685\u00b10.038 0.706\u00b10.022 0.535\u00b10.118 0.549\u00b10.105 498 \u2212121452 \u2212243402 KNN [70] 0.270\u00b10.249 0.362\u00b10.227 0.320\u00b10.231 0.642\u00b10.020 0.698\u00b10.010 0.648\u00b10.037 0.616\u00b10.014 0.478\u00b10.112 0.504\u00b10.110 639 \u221229511 \u221259661 TAPUDD [13] 0.553\u00b10.115 0.572\u00b10.120 0.559\u00b10.114 0.815\u00b10.015 0.688\u00b10.010 0.802\u00b10.018 0.749\u00b10.036 0.649\u00b10.072 0.673\u00b10.046 331 \u2212133319 \u2212266969 OpenMax [3] 0.170\u00b10.298 0.245\u00b10.279 
0.203\u00b10.288 0.830\u00b10.007 0.857\u00b10.010 0.838\u00b10.007 0.811\u00b10.011 0.563\u00b10.123 0.565\u00b10.126 318 \u2212249432 \u2212499182 MC-Dropout [16] 0.166\u00b10.295 0.230\u00b10.280 0.197\u00b10.285 0.936\u00b10.017 0.960\u00b10.004 0.939\u00b10.015 0.949\u00b10.010 0.643\u00b10.108 0.628\u00b10.127 289 \u2212268361 \u2212537011 Deep Ensemble [33] 0.310 0.369 0.338 0.558 0.736 0.595 0.652 0.440 0.500 647 \u221292503 \u2212185653 Previous studies [1, 78, 24, 46] suggest that training on larger data and pre-training by self-supervised learning (SSL) methods help improve robustness and Out-of-Distribution (OOD) detection. To validate if the same findings can also be applied in our task, we additionally measure the visual alignment and reliability score on models that are pre-trained on ImageNet [63] and pre-trained by two popular SSL methods, which are SimCLR [6] and BYOL [22]. For models that are pre-trained on ImageNet, after pre-training, we initialize the top classification layer and train on our train set while freezing the pre-trained parameters during fine-tuning. For models that are pre-trained by SSL methods, we do not freeze any layers after pre-training. The results are shown in Table 4 and Table 5. The results in Table 4 can be compared to the results in Table 3. For ImageNet pre-trained models, Transformer-based models show improved performance, whereas MLP-based and CNN-based models show similar or decreased visual alignment scores, especially when evaluated with SP. This indicates that the effect of pre-training on larger datasets is dependent on model architecture. Interestingly, distance-based abstention functions display higher visual alignment scores. We suspect that the improved output embeddings from pre-training enable distance-based abstention functions to capture more precise features. Deep Ensemble has better visual alignment when met with Transformer-based and MLP-based. Notably, Transformer-based models combined with KNN have the best visual alignment score. We conjecture the reason comes from both the model architecture and the abstention function. Contrary to CNN-based models, Transformerbased models are able to capture global features of images instead of only local features. Also, KNN calculates abstention probability based on the distance between samples instead of clusters, as done in 11 \f0.50 0.55 0.60 0.65 0.70 0.75 Visual Alignment -60.0 -50.0 -40.0 -30.0 -20.0 Reliability Score (c=900) x(10K) Figure 2: Correlation between Visual Alignment Distance and Reliability Score (RS900). There exists a strong correlation between visual alignment distance and reliability score. This proves that visual alignment can be used as a proxy method for reliability. MD or TAPUDD, which uses more fine-grained features for deciding abstention. Therefore, deciding abstention using fine-grained details on global features gets boosted when trained on a larger set, which leads to the best visual alignment. The overall reliability score increases when pre-trained with ImageNet, and this represents that the models that are pre-trained on ImageNet are more likely to abstain. As shown in Table 5, the results from SSL are highly dependent on both the model architecture and whether the abstention method is distance-based or not. For example, distance-based methods perform better on Must-Abstain categories when paired with Swin Transformer. Unlike other abstention methods, Deep Ensemble generally performs better in all groups regardless of the model architecture. 
Note that even if the same abstention method is used, the effects on the performance are reversed depending on the model architecture used. As an example, when TAPUDD is combined with Swin Transformer, the performance increases on all Must-Abstain categories and decreases on all Must-Act categories, but the performance difference is reversed when TAPUDD is combined with DenseNet instead. Overall, Deep Ensemble helps increase visual alignment performance in both ImageNet pre-training and SSL. However, other abstention functions did not show noticeable performance increases in both cases. In short, the same findings in previous studies on robustness and OOD detection can not be directly applied to visual alignment. This implies visual alignment has its unique challenges that differentiate from robustness and OOD detection tasks, and there is much room for developing new methods for better visual alignment. In general, KNN shows the best visual alignment score in all three tables (Table 3, Table 4, Table 5). This may be due to using detailed features when calculating abstention probability. However, it is hard to find a consistency for optimal model architecture. For example, in Table 3, Swin Transformer and DenseNet, which have different architectures, have the best performance on average across all seven abstention functions. Therefore, more research on finding the optimal model architecture in visual alignment is needed. 5.4 Analysis Figure 2 shows the correlation between visual alignment distance and reliability score measured in Table 3. There exists a strong correlation between visual alignment distance and reliability score \u2013 the shorter the distance the higher the reliability score. This indicates that visual alignment score can be used as a proxy method for reliability, underscoring the importance of visual alignment. 12 \fMethods based on the minimum distance from each class (MD, KNN, and TAPUDD) generally show a worse visual alignment on Must-Abstain categories. We conjecture that the reason comes from using the shortest distance to in-class clusters. If an embedding contains one clear in-class feature, the distance to the corresponding class would be short, leading the model to make a prediction. On the other hand, methods based on entropy or uncertainty show weak alignment on Must-Act categories. With these methods, the model has to be not only confident that its predicted class is correct but also that the remaining classes are incorrect. Considering the confidence in all classes makes it more challenging for visual alignment in Must-Act categories. An abstention function which takes advantage of both distance-based and probability-based methods is needed to perform well on visual alignment. The distance should be sample-wise to capture the nuanced characteristics of the samples. Overall, our experiments show that no methods perform well across all categories. There is much room for improvement in visual alignment, a field in which our dataset will become an essential tool for benchmarking new methods. 6" + }, + { + "url": "http://arxiv.org/abs/2302.13700v1", + "title": "Imaginary Voice: Face-styled Diffusion Model for Text-to-Speech", + "abstract": "The goal of this work is zero-shot text-to-speech synthesis, with speaking\nstyles and voices learnt from facial characteristics. 
Inspired by the natural\nfact that people can imagine the voice of someone when they look at his or her\nface, we introduce a face-styled diffusion text-to-speech (TTS) model within a\nunified framework learnt from visible attributes, called Face-TTS. This is the\nfirst time that face images are used as a condition to train a TTS model.\n We jointly train cross-model biometrics and TTS models to preserve speaker\nidentity between face images and generated speech segments. We also propose a\nspeaker feature binding loss to enforce the similarity of the generated and the\nground truth speech segments in speaker embedding space. Since the biometric\ninformation is extracted directly from the face image, our method does not\nrequire extra fine-tuning steps to generate speech from unseen and unheard\nspeakers. We train and evaluate the model on the LRS3 dataset, an in-the-wild\naudio-visual corpus containing background noise and diverse speaking styles.\nThe project page is https://facetts.github.io.", + "authors": "Jiyoung Lee, Joon Son Chung, Soo-Whan Chung", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "cs.SD", + "eess.AS" + ], + "main_content": "INTRODUCTION Text-to-speech (TTS) is one of the core tasks in speech processing that generates speech waveform from a given text transcription. Deep generative models have been introduced to produce highquality spectral features from text sequences [1, 2, 3]. They have brought remarkable improvements in the quality of synthetic speech signals, compared to traditional parametric synthesis methods. Recent works on diffusion models [4, 5, 6] have provided excellent generation results with outputs of high quality in various research \ufb01elds such as image generation, video generation, and natural language processing. For example, diffusion methods have achieved noteworthy results in image generation models; e.g. DALLE-2 [7], Stable Diffusion [8]. Likewise, diffusion methods have shown impressive results in TTS compared to the previous generative methods, both in acoustic modeling [9, 10, 11] and in the vocoder [12, 13]. However, there are several unresolved challenges in the \ufb01eld of TTS. One problem we address in this paper is expanding single speaker TTS model to multi-speaker TTS. Since every person has different speaking styles, tones or accents, it is very challenging for the TTS model to learn various speaker styles. The second and related problem is that a signi\ufb01cant amount of target speakers\u2019 speech samples are required to generate voices of unseen speakers, even for multi-speaker TTS. The variability of speaking styles means that the model must have access to signi\ufb01cant amount of enrollment data to Transcription: \u201cHello everyone\u201d Virtual face Read in your voice! Fig. 1: FACE-TTS generates speech from a given text, conditioned on a face image. The face image is sampled from [8]. learn about each speaker. Since it is dif\ufb01cult to obtain clean enrollment utterances for each speaker, this raises the question \u201cwhat if face images can be used for enrollment instead of clean speech?\u201d In [14, 15], the authors propose to leverage face images to control speaker characteristics of synthesised speech. They train the face identity encoder to share a joint embedding space with the voice encoder, independently from the TTS model. This approach enables generation of speech for unseen speakers without extra speaker adaptation. 
However, these works do not use the face images as inputs when training the TTS models. Instead, the models are trained with speaker embeddings as the input, and the embeddings are swapped for face images only during inference. In this paper, we propose a novel speech synthesis model, FACE-TTS, which leverages face images to provide a robust characterisation of speakers. In [16, 17], the authors explored cross-modal biometrics and demonstrated that there is a strong correlation between voices and face appearances. Inspired by this, we design a multi-speaker TTS model in which speaking styles are conditioned on face attributes. While it is difficult to collect speech segments for the enrollment of every speaker, it is much easier to obtain face images. We enforce the matching of the identity of the face and the identity of the synthesised speech to train a robust cross-modal representation of speaking style. Our approach is capable of generating speech signals without speaker enrollment, which is advantageous for zero-shot or few-shot TTS modeling. Our backbone structure for the TTS model is derived from Grad-TTS [11], which learns acoustic features using the diffusion method. Unlike other face-to-speech synthesis methods [14, 15], FACE-TTS is trained end-to-end from the face encoder to the acoustic model, using in-the-wild datasets. To the best of our knowledge, this is the first time that face images are used as a condition to train a TTS model. We perform qualitative and quantitative tests to assess the speaker representations as well as the perceptual quality of the synthesised speech. In addition, we verify through subjective measures whether the synthesised speech fits well with the appearances of virtual humans who do not have their own voices, as illustrated in Fig. 1. Fig. 2: The overall configuration of FACE-TTS. Given a text transcription and a face image, our method generates a speech sample using a diffusion model conditioned on face images to model speaker characteristics. The whole network except for the audio network is trained end-to-end using the LRS3 dataset. Notice that the audio network is used only during training. 2. RELATED WORK Text-to-speech. With the success of deep neural networks, the perceptual quality of synthesised speech has improved dramatically compared to previous statistical parametric speech synthesis [18]. In general, TTS models are composed of two modules: an acoustic model and a vocoder. The acoustic model generates speech features (commonly a mel-spectrogram) from text sequences, and the vocoder takes the features to generate the speech waveform. There have been many approaches using generative modelling methods [1, 2, 19], and Tacotron-based models [3, 20] incorporate a sequence-to-sequence model to transform the text sequences into acoustic representations. GAN-based models [2, 19] have brought innovative contributions to TTS in the last decade using an adversarial training strategy. Recently, another successful generative approach, diffusion-based methods [11, 12, 13, 9], has been proposed for speech synthesis, as diffusion methods have proved their effectiveness in various generation tasks [21, 8, 7].
Compared to GAN-based models, diffusion methods offer impressive results along with better distribution coverage, a fixed training objective, and scalability. Audio-visual biometrics. People instinctively correlate others' facial appearances and their voices by learning through experience, because face and voice provide related identity information [22]. In order to learn this correlation between faces and voices, several prior works [16, 17] use self-supervised methods that mimic the way people learn from experience. They leverage the fact that a face image and a speech segment from a single-speaker video should share a common identity. In [23, 24], the authors show that visual identity has a strong correlation with speaker identity by separating input signals using face images. Various self-supervised losses have been considered to learn robust cross-modal embeddings for biometric matching, such as cross-entropy loss [25], contrastive loss [16], and disentanglement-based loss [26]. Motivated by these previous works, we leverage cross-modal biometric matching to provide conditions that reflect speaker-dependent characteristics for the multi-speaker TTS model. 3. FACE-TTS 3.1. Score-based Diffusion Model FACE-TTS is based on a score-based diffusion model, specifically Grad-TTS [11], which consists of three main parts: (1) a text encoder, (2) a duration predictor, and (3) a diffusion model. Formally, given a text transcription C and a corresponding mel-spectrogram X_0 for training, the forward process progressively adds standard Gaussian noise so as to satisfy the following continuous stochastic differential equation (SDE) [27]: dX_t = −(1/2) X_t β_t dt + √β_t dW_t, (1) where W_t is the standard Brownian motion and β_t is a noise schedule. In the reverse diffusion process, X_0 can be obtained from X_t corresponding to the text as follows: dX_t = −((1/2) X_t + S(X_t, t)) β_t dt + √β_t dW̃_t, (2) where W̃_t is the reverse-time Brownian motion and S(X_t, t) is a diffusion model that estimates the gradient of the log-density of the noisy data, ∇_{X_t} log p_t(X_t). Namely, we infer the speech X_0 from the noisy data X_t in N steps by solving the SDE: X_{t−1/N} = X_t + (β_t/N) ((1/2) X_t + S(X_t, t)) + √β_t W̃_t, (3) where t ∈ {1/N, 2/N, ..., 1}. We note that N is the number of steps of the discretised reverse process, and t indexes a subsequence of time steps in the reverse process. We follow most parts of the original methodology [11] and explain the differing points in the sections below. The overall architecture is illustrated in Fig. 2. 3.2. Speaker Conditioning with Cross-modal Biometrics In [11, 28], the authors do not utilise a speaker model for learning speaking styles in their TTS models, but instead prepare a pre-defined speaker codebook for each identity. Thus, it is difficult to represent a new speaker in their models, and a challenging adaptation procedure is required to resolve this problem. In [29, 30], the authors show that a speaker embedding can precisely adjust speaking styles in synthesised speech. However, a problem still remains: speaker embeddings usually encode excessive details of speakers, which yields unstable training in the acoustic modeling of TTS. Therefore, speaker embeddings should be generalised to represent speakers' voices in synthesised speech.
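For reference, the discretised reverse process of Eq. 3 above can be sketched as follows; this is an illustrative PyTorch-style sketch, not the authors' implementation, and in FACE-TTS the score network is additionally conditioned on the text-derived statistics and the face-based speaker embedding described next:

import torch

@torch.no_grad()
def reverse_diffusion(x, score_model, beta, n_steps):
    # x: terminal noisy mel-spectrogram; score_model(x, t) approximates
    # S(X_t, t) = grad_x log p_t(x); beta(t) returns the noise-schedule value.
    for i in range(n_steps, 0, -1):
        t = i / n_steps
        b = beta(t)
        drift = (b / n_steps) * (0.5 * x + score_model(x, t))
        # Brownian increment over a step of size 1/N, realised as sqrt(beta_t/N) * z
        x = x + drift + (b / n_steps) ** 0.5 * torch.randn_like(x)
    return x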
In this paper, we provide an identity embedding from a face image as a conditioning feature of the TTS model for multi-speaker modelling. Since the face embedding from the cross-modal biometric model represents the identity related to the voice, it is suitable for generating speech that matches face attributes. Such a face embedding does not capture a complex distribution of speakers, but only associative representations of voice and face, so it naturally generalises the speaker embedding and allows efficient multi-speaker modelling. Given a mel-spectrogram X = X_0 and a face image I, the network is pre-trained to associate the same speaker identity across the two modalities, where the overall network consists of an audio network F(X) and a visual network G(I). The visual network ingests a face image of the target speaker to produce a speaker representation. The text encoder and the duration predictor then estimate the statistics of the acoustic features from a given text transcription and face image. In detail, the text encoder generates acoustic features that fit the text sequence, and the duration predictor colourises the features with the predicted speaking duration of the target speaker for natural pronunciation. During training, the diffusion process adds Gaussian noise to the colourised features to produce noisy data, and the diffusion model estimates the gradient of the data distribution from the noisy data to obtain the target audio. Specifically, the speaker representation guides the diffusion model to estimate gradients suited to generating synthesised speech in the speaker's voice. We note that the network configuration follows [11]. However, to learn various speakers' characteristics for multi-speaker TTS, the TTS model requires a sufficient length of recorded speech for each person. Previous works [9, 14, 15] trained their models on audiobook datasets read by several speakers with ample speech per speaker, which makes it difficult to generalise to unseen speakers. To solve this problem, we suggest an effective strategy, a speaker feature binding loss, that maintains the speaker characteristics of target voices in synthesised speech. It allows FACE-TTS to learn face-voice associations even from short audio segments. Formally, latent embeddings from the convolution layers of the audio network trained for cross-modal biometrics are extracted from the synthesised speech and the target voice, respectively. The speaker feature binding loss L_spk trains our FACE-TTS model by minimising the distance between the two sets of latent embeddings as follows: L_spk = Σ_b |F_b(X_0) − F_b(X′_t)|, (4) where X_0 is a mel-spectrogram of the target speaker's utterance, X′_t is a denoised output from the network, and the sum runs over the convolution blocks b of the audio network, excluding the first two blocks. We freeze the audio network so that it is not updated by this loss. This training strategy encourages the speaker-related latent distribution of the synthesised speech to match that of the target speech. 3.3. Training & Inference During training, FACE-TTS learns multi-speaker speech synthesis through multiple training criteria. To train the text and duration encoders, we exploit the prior loss to estimate the mean of a normal distribution and the duration loss [28] to control the duration of pronunciation using a monotonic alignment between speech and text sequences. The diffusion loss trains the diffusion model to estimate the gradient of the data distribution as in [11].
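A minimal PyTorch-style sketch of the speaker feature binding term of Eq. 4, before it is combined with the other losses below (the block list, tensor shapes, and the per-block mean reduction are illustrative assumptions, not the authors' code):

import torch

def speaker_binding_loss(audio_blocks, mel_target, mel_denoised):
    # audio_blocks: ordered (frozen) convolution blocks of the cross-modal audio
    # network F; mel_target = X_0, mel_denoised = X'_t as in Eq. 4.
    loss = mel_target.new_zeros(())
    h_t, h_d = mel_target, mel_denoised
    for b, block in enumerate(audio_blocks):
        h_t, h_d = block(h_t), block(h_d)
        if b >= 2:  # Eq. 4 excludes the first two convolution blocks
            loss = loss + (h_t.detach() - h_d).abs().mean()
    return loss

Gradients flow only through the denoised branch, so the frozen audio network shapes the TTS output without itself being updated, matching the freezing described above.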
Our \ufb01nal training objective is described as: L = Lprior + Lduration + Ldiff + \u03b3Lspk, (5) where \u03b3 is empirically set to 1e-2. We emphasise that the whole framework is trained end-to-end on LRS3 dataset obtained from inthe-wild environments. Thanks to video in LRS3 with various angles and facial expressions, our FACE-TTS is more robust to real-world face images than previous works [14, 15] that only used the front view of a face image. For inference, the trained FACE-TTS samples a mel-spectrogram of utterance X0 from the noisy data Xt that is estimated by transcription with speaker condition by target speaker\u2019s face. The reverse diffusion process is repeatedly processed to estimate stepby-step noise gradually. Finally, we used a pretrained vocoder to transform the estimated mel-spectrogram to a raw waveform. 4. EXPERIMENTS 4.1. Experimental Settings Datasets. LRS3 [31] is an audio-visual dataset collated from TED videos, which has audio-visual pairs with corresponding text transcriptions. We use the trainval split for the training and the test split for the evaluation, excluding speech samples shorter than 1.3 seconds. Also, we pick out speech samples of speakers who have at least 10 seconds of audio in total. A total of 14,114 utterances and 2,007 speakers is used for training, 50 utterances for validation, and test set includes 412 speakers. The widely used multi-speaker TTS dataset, such as LibriTTS [32], has 550 seconds per speaker in average from well-recorded audio books, whereas LRS3 [31] has a length of about 34 seconds extracted from real-world environments. Therefore, it is extremely challenging to use LRS3 data to train TTS models. We use the test split (448 samples) of LJSpeech [33] to obtain text descriptions in the out-of-distribution for a fair comparison with previous works [11, 28]. The cross-modal biometric model [34] is re-implemented following to the same con\ufb01guration of mel-spectrogram with vocoder [19]. It is trained on VoxCeleb2 [35] dataset which contains 5,994 speakers in audio-visual pairs. Audio and image representation. The inputs to the network, including cross-modal biometric model, TTS model and vocoder, are the 128-dimensional mel-spectrogram extracted at every 10ms with 62.5ms frame length in 16kHz sampling rate. For the image input, the face image is randomly sampled from each video and resized into 224\u00d7224 pixels, same as in [17]. The cross-modal biometric model (i.e. audio and visual networks) embeds audio and face images onto 512-dimensional vectors. Evaluation protocols. In our experiments, the generated melspectrogram is synthesised into an audio waveform using HiFiGAN as the vocoder. We \ufb01rst report \u2018Mel.+HiFi-GAN\u2019 to inform the degradation amount caused by the vocoder. In this case, melspectrogram of target speech is transformed into the waveform without synthesis process. It is natural that it shows a little lower scores with the \u2018Ground-Truth\u2019 result, and it can be the upper-bound score of synthesis results. We perform mean opinion score (MOS) test, which is a common metric to measure subjective perceptual quality of synthesised speech. A total of 17 participants are asked to judge the quality about the synthesis results in 5-scale: 1=Bad; 2=Poor; 3=Fair; 4=Good; 5=Excellent. In the test, 10 utterances are randomly selected from the test set and synthesised using each model. 
Additionally, we conduct two preference tests: 1) an AB forced matching test, with one synthesised speech sample and two face images; and 2) an ABX preference test, with two synthesised speech signals and one face image. To validate our model for virtual human speech generation, we perform an MOS test on whether the synthesis outputs are harmonised with face images generated by a recent image generation model [8]. Here, we provide choice options from 1 to 4, where a higher score means the synthesised speech is better harmonised with the face image. For the objective evaluation, we establish a 5-way cross-modal forced matching test using the cross-modal biometric model, which has to select the matching identity given the synthesised speech and 5 face images. In this matching test, we verify that the synthesised speech represents an identity similar to the one appearing in the face image. Implementation details. For a fair comparison, we train Grad-TTS with the LibriTTS and LRS3 datasets, respectively. Also, our FACE-TTS is trained using identity embeddings from audio inputs and face inputs, where both embeddings are obtained from the cross-modal biometric model. We follow most of the training configuration of Grad-TTS [11]. Since the visual network is initialised with pre-trained weights for the biometric matching task on VoxCeleb2, we use a smaller initial learning rate (1e-6) for that network. We note that, except for the audio and visual networks, the other networks are trained from scratch. The computation time and FLOPs increase linearly with the number of denoising steps, while more steps increase the audio quality. Thus, we equally use 10 denoising (sampling) steps to generate speech signals at inference. Table 1: Subjective evaluation for comparison of audio quality with the mean opinion score (MOS) metric; Grad-TTS† is trained on LibriTTS, and FACE-TTS is trained on LRS3. Ground Truth: 4.865±.001; Mel.+HiFi-GAN [19] (upper bound): 4.653±.035; Grad-TTS [11]† (seen; Spk. ID: Embed): 3.718±.318; FACE-TTS (seen; Spk. ID: Audio): 3.547±.331; FACE-TTS (seen; Spk. ID: Face): 3.706±.154; FACE-TTS (unseen; Spk. ID: Audio): 3.218±.249; FACE-TTS (unseen; Spk. ID: Face): 3.282±.219. Fig. 3: Results of preference tests. (a) AB test, preference for a face matching two synthesised utterances: Correct 61.5%, Incorrect 38.5%. (b) ABX test, preference for a synthesised utterance matching two face appearances: Correct 59.6%, Incorrect 34.9%, Equal 5.5%. 4.2. Results Audio quality. We obtained the pre-trained parameters of Grad-TTS from the authors for comparison; this model had been trained on the LibriTTS dataset for multi-speaker TTS. In our preliminary experiment, we empirically found that Grad-TTS trained on the LRS3 dataset showed competitive perceptual quality. Therefore, we evaluated the Grad-TTS trained on LibriTTS as a comparison, following the authors' official implementation, and we re-sampled the generated audio from Grad-TTS from 22.05 kHz to 16 kHz. FACE-TTS with an audio speaker ID was fully trained with the audio network of the cross-modal biometric model instead of the visual network. In Table 1, the results indicate that FACE-TTS using face images shows audio quality competitive with Grad-TTS trained on a clean speech dataset under the seen-speaker condition. We observed that our FACE-TTS can generate audio of fine quality (i.e., an MOS above 3) even for unseen speakers. Furthermore, there is little difference in performance between the models conditioned on face and on audio. 
Compared audio-conditioned models, face conditioning has brought more \ufb01negrained audio quality, because the face represents robust identity compared to the speech in\ufb02uenced by recording environments. Speaker veri\ufb01cation. We further evaluate the speaker veri\ufb01cation task with generated utterances and face images. First, AB and ABX preference tests are performed on human evaluators. To evaluate Method Spk. ID Acc. (%) Mel.+HiFi-GAN [19] (Upper bound) 48.6 Grad-TTS [11] Embed 19.4 FACE-TTS (w/o. Lspk) Face 35.4 FACE-TTS Face 38.0 Table 2: Speaker identi\ufb01cation matching accuracy. Since Grad-TTS uses speaker id embedding, its model is evaluated with seen speakers and our model is evaluated with unseen speakers. Random accuracy is 20%. Test sample 4-scale MOS LRS3 (Real) 3.471\u00b1.291 Stable Diffusion [8]+FACE-TTS (Fake) 2.941\u00b1.462 Table 3: Matching preference between virtual face images from Stable Diffusion [8] and generated utterances with MOS. under more challenging conditions, we conducted the experiment with gender uni\ufb01ed. That is, the face or audio in the two cases to be selected were selected from samples of the same gender. The evaluators selected a correct answer rate of about 60% as reported in Fig. 3. Furthermore, Table 2 shows 5-way cross-modal speaker matching accuracy for objective evaluations on the LRS3 dataset. Following their of\ufb01cial implementation, we train Grad-TTS [11] on the LRS3 dataset for this experiment. Although the Grad-TTS trained on the LRS3 shows competitive audio quality with ours, capturing the speakers\u2019 characteristics in the sound with the settled speaker embedding seems challenging in the Grad-TTS. Moreover, our speaker loss improves the matching performance 2.6% than FACE-TTS without the loss, training the diffusion model to sample the utterance, which is more proper to the target face. However, it still has room to improve the performance up to the result in the \ufb01rst row (Mel.+HiFi-GAN). We remain it as future work. Virtual speech generation. To demonstrate the utility of our FACETTS, we synthesised speech with virtual face images generated from [8]. Table 3 reports the subjective evaluation of 4 points Likert-scale measurement: 1=Bad; 2=Neutral; 3=Good; 4=Excellent. We had assessors evaluate virtual faces without knowing they were mixed. As the baseline, we also evaluate the preference of ground-truth facevoice pairs, which are randomly selected on the LRS3 dataset. Surprisingly, people gave \u2018Good\u2019 score on average, in that utterance from our FACE-TTS is well matched with virtual face images. 5." + }, + { + "url": "http://arxiv.org/abs/2210.12910v1", + "title": "Specializing Multi-domain NMT via Penalizing Low Mutual Information", + "abstract": "Multi-domain Neural Machine Translation (NMT) trains a single model with\nmultiple domains. It is appealing because of its efficacy in handling multiple\ndomains within one model. An ideal multi-domain NMT should learn distinctive\ndomain characteristics simultaneously, however, grasping the domain peculiarity\nis a non-trivial task. In this paper, we investigate domain-specific\ninformation through the lens of mutual information (MI) and propose a new\nobjective that penalizes low MI to become higher. Our method achieved the\nstate-of-the-art performance among the current competitive multi-domain NMT\nmodels. 
Also, we empirically show our objective promotes low MI to be higher\nresulting in domain-specialized multi-domain NMT.", + "authors": "Jiyoung Lee, Hantae Kim, Hyunchang Cho, Edward Choi, Cheonbok Park", + "published": "2022-10-24", + "updated": "2022-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Multi-domain Neural Machine Translation (NMT) (Sajjad et al., 2017; Farajian et al., 2017) has been an attractive topic due to its ef\ufb01cacy in handling multiple domains with a single model. Ideally, a multi-domain NMT should capture both general knowledge (e.g., sentence structure, common words) and domain-speci\ufb01c knowledge (e.g., domain terminology) unique in each domain. While the shared knowledge can be easily acquired via sharing parameters across domains (Kobus et al., 2017), obtaining domain specialized knowledge is a challenging task. Haddow and Koehn (2012) demonstrate that a model trained on multiple domains sometimes underperforms the one trained on a single domain. Pham et al. (2021) shows that separate domain-speci\ufb01c adaptation modules are not suf\ufb01cient to fully-gain specialized knowledge. In this paper, we reinterpret domain specialized knowledge from mutual information (MI) perspective and propose a method to strengthen it. Given \u2217Work done during an internship at NAVER Corp. MI(\ud835\udc6b; \ud835\udc80|\ud835\udc7f) \ud835\udfce calculation computing totals \ud835\udc68 \ud835\udc69 Source Beschreib \u2026 Summenberechnung f\u00fcr ein gegebenes Feld oder einen gegebenen Ausdruck. Reference Describes a way of computing totals for a given field or expression. A (Baseline) Describes the kind of calculation for a given field or expression. B (Ours) Describes the way of computing totals for a given field or expression . Figure 1: Overview of two models with different MI distributions. The example sentence is from IT domain. Model A mostly has low MI and Model B has large MI. For an identical sample, model A outputs a generic term \u2018calculation\u2019 while model B properly maintains \u2018computing totals\u2019. a source sentence X, target sentence Y , and corresponding domain D, the MI between D and the translation Y |X (i.e., MI(D; Y |X)) measures the dependency between the domain and the translated sentence. Here, we assume that the larger MI(D; Y |X), the more the translation incorporates domain knowledge. Low MI is undesirable because it indicates the model is not suf\ufb01ciently utilizing domain characteristics in translation. In other words, low MI can be interpreted as a domain-speci\ufb01c information the model has yet to learn. For example, as shown in Fig. 1, we found that a model with low MI translates an IT term \u2018computing totals\u2019 to the vague and plain term \u2018calculation\u2019. However, once we force the model to have high MI, \u2018computing totals\u2019 is correctly retained in its translation. Thus, maximizing MI promotes multi-domain NMT to be domain-specialized. Motivated by this idea, we introduce a new method that specializes multi-domain NMT by arXiv:2210.12910v1 [cs.CL] 24 Oct 2022 \fpenalizing low MI. We \ufb01rst theoretically derive MI(D; Y |X), and formulate a new objective that weights more penalty on subword-tokens with low MI. Our results show that the proposed method improves the translation quality in all domains. Also, the MI visualization ensures that our method is effective in maximizing MI. 
We also observed that our model performs particularly better on samples with strong domain characteristics. The main contributions of our paper are as follows: \u2022 We investigate MI in multi-domain NMT and present a new objective that penalizes low MI to have higher value. \u2022 Extensive experiment results prove that our method truly yields high MI, resulting in domain-specialized model. 2 Related Works Multi-Domain Neural Machine Translation Multi-Domain NMT focuses on developing a proper usage of domain information to improve translation. Early studies had two main approaches: injecting source domain information and adding a domain classi\ufb01er. For adding source domain information, Kobus et al. (2017) inserts a source domain label as an additional tag with input or as a complementary feature. For the second approach, Britz et al. (2017) trains the sentence embedding to be domain-speci\ufb01c by updating using the gradient from the domain-classi\ufb01er. While previous work leverages domain information by injection or implementing an auxiliary classi\ufb01er, we view domain information from MI perspective and propose a loss that promotes model to explore domain speci\ufb01c knowledge. Information-Theoretic Approaches in NMT Mutual information in NMT is primarily used either as metrics or a loss function. For metrics, Bugliarello et al. (2020) proposes cross-mutual information (XMI) to quantify the dif\ufb01culty of translating between languages. Fernandes et al. (2021) modi\ufb01es XMI to measure the usage of the given context during translation. For the loss function, Xu et al. (2021) proposes bilingual mutual information (BMI) which calculates the word mapping diversity, further applied in NMT training. Zhang et al. (2022) improves the model translation by maximizing the MI between a target token and its source sentence based on its context. Above work only considers general machine translation scenarios. Our work differs in that we integrate mutual information in multi-domain NMT to learn domain-speci\ufb01c information. Unlike other methods that require training of an additional model, our method can calculate MI within a single model which is more computation-ef\ufb01cient. 3 Proposed Method In this section, we \ufb01rst derive MI in multi-domain NMT. Then, we introduce a new method that penalizes low MI to have high value resulting in a domain-specialized model. 3.1 Mutual Information in Multi-Domain NMT Mutual Information (MI) measures a mutual dependency between two random variables. In multidomain NMT, the MI between the domain (D) and translation (Y |X), expressed as MI(D; Y |X), represents how much domain-speci\ufb01c information is contained in the translation. MI(D; Y |X) can be written as follows: MI(D; Y |X) = ED,X,Y \u0014 log p(Y |X, D) p(Y |X) \u0015 . (1) The full derivation can be found in Appendix B. Note that the \ufb01nal form of MI(D; Y |X) is a log quotient of the translation considering domain and translation without domain. Since the true distributions are unknown, we approximate them with a parameterized model (Bugliarello et al., 2020; Fernandes et al., 2021), namely the cross-MI (XMI). Naturally, a generic domain-agnostic model (further referred to as general and abbreviated as G) output would be the appropriate approximation of p(Y |X). A domain-adapted (further shortened as DA) model output would be suitable for p(Y |X, D). Hence, XMI(D; Y |X) can be expressed as Eq. (2) with each model output. 
XMI(D; Y |X) = ED,X,Y \u0014 log pDA(Y |X, D) pG(Y |X) \u0015 (2) 3.2 MI-based Token Weighted Loss To calculate XMI, we need outputs from both general and domain-adapted models. Motivated by the success of adapters (Houlsby et al., 2019) in multi-domain NMT (Pham et al., 2021), we assign adapters \u03c61, \u00b7 \u00b7 \u00b7 \u03c6N for each domain (N is the total \fnumber of domains) and have an extra adapter \u03c6G for general. We will denote the shared parameter (e.g., self-attention and feed-forward layer) as \u03b8. For a source sentence x from domain d, x passes the model twice, once through the corresponding domain adapter, \u03c6d, and the other through the general adapter, \u03c6G. Then, we treat the output probability from domain adapter as pDA and from general adapter as pG. For the ith target token, yi ,we calculate XMI as in Eq. (3), p(yi|y 0 is a hyperparameter. After training, to generate XS conditioned on label c, we sample z from the prior distribution p(z) and concatenate with c. Then we can generate XS by decoding the concatenated vector. Currently, our model assumes all input signals have the period of 1. Extension of this model to dynamically deal with signals of varying periods is our primary future research interest. 4 Experiments 4.1 Experimental Setup We conduct experiments with two periodic datasets: toy sinusoid dataset and Physionet2021 [26]. Toy dataset is a simple mixture of three sine and cosine functions: P3 i=1 m2i\u22121cos(2\u03c0d2i\u22121t) + 3 \fm2isin(2\u03c0d2i). We put four class conditions for the toy dataset based on amplitudes M = {m1, . . . , m6} and frequencies D = {d1, . . . , d6}, resulting in four amplitude-frequency class labels: \u2018Low Amp. & Low Freq.\u2019, \u2018Low Amp. & High Freq.\u2019, \u2018High Amp. & Low Freq.\u2019, and \u2018High Amp. & High Freq.\u2019. \u2018Low Amp.\u2019 classes sample m from a uniform distribution U(1, 4) whereas \u2018High Amp.\u2019 classes sample m from U(6, 9). \u2018Low Freq.\u2019 classes sample d from U(1, 4), whereas \u2018High Freq.\u2019 classes sample d from U(8, 11). Each signal has a total of 500 timesteps. Physionet2021 [26] contains 12-lead ECG recordings collected from six separate datasets. We cropped each record into one second consisting of 500 timesteps and extracted samples with three diagnoses, namely \u2018Right Bundle Branch Block\u2019 (RBBB), \u2018Left Bundle Branch Block\u2019 (LBBB), and \u2018Atrial Fibrillation\u2019 (AF) from the V6 lead. These diagnoses are selected because they can be examined within one second record [9]. After preprocessing, there were total 36,110 cropped ECG samples. We split the data into train, validation and test sets with the ratio of 8:1:1. More details on preprocessing are in appendix A. We sampled 20% of the timesteps during training to insure the irregularity of time. 1 For all experiments, we employ 5-layer 1D CNN as an encoder E\u03b8 and 6-layer MLP as a decoder D\u03c6. We evaluate our model on three tasks: (1) reconstruction (for the sampled 20%), (2) imputation for missing timesteps (for the non-sampled 80%) and (3) conditional generation. For the tasks (1) and (2), we use sampled time points and make the model perform both reconstruction and imputation in parallel by generating b XT for all timesteps. For the task (3), we sample z from the prior distribution N(0, I) and pass through the decoder D\u03c6. 
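Since the toy dataset above is fully specified by its amplitude and frequency ranges, a minimal NumPy sketch of how such signals could be constructed is given below; the number of samples per class and the random seed are illustrative assumptions, not values from the text.

```python
import numpy as np

def sample_toy_signal(rng, high_amp=False, high_freq=False, n_steps=500):
    """One toy signal: sum of three cos/sin pairs with class-dependent ranges
    (amplitudes U(1,4) or U(6,9), frequencies U(1,4) or U(8,11)),
    evaluated on 500 timesteps over one unit period."""
    amp_lo, amp_hi = (6.0, 9.0) if high_amp else (1.0, 4.0)
    frq_lo, frq_hi = (8.0, 11.0) if high_freq else (1.0, 4.0)
    m = rng.uniform(amp_lo, amp_hi, size=6)   # m_1 ... m_6
    d = rng.uniform(frq_lo, frq_hi, size=6)   # d_1 ... d_6
    t = np.linspace(0.0, 1.0, n_steps, endpoint=False)
    x = np.zeros(n_steps)
    for i in range(3):
        x += m[2 * i] * np.cos(2 * np.pi * d[2 * i] * t)          # m_{2i-1} cos term
        x += m[2 * i + 1] * np.sin(2 * np.pi * d[2 * i + 1] * t)  # m_{2i} sin term
    return x

rng = np.random.default_rng(0)   # arbitrary seed for illustration
classes = [(a, f) for a in (False, True) for f in (False, True)]
data = {c: np.stack([sample_toy_signal(rng, *c) for _ in range(100)]) for c in classes}
```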
With the same 5-layer 1D CNN encoder, we compare our model with four different baseline decoders: Gated Recurrent Units (GRU), Transformer decoder, Neural ODE (NODE) and Neural Processes (NP). Further model implementation details are explained in appendix B. 4.2 Experiment Results of Toy Dataset Table 1: Reconstruction and imputation MSE on Toy Dataset Reconstruction Imputation GRU 72.454 74.711 Transformer 183.203 172.338 NODE 15.905 19.896 NP 3.813 4.615 Fourier (Ours) 0.649 0.686 We report reconstruction and imputation results in table 1. Our model shows the lowest MSE on both reconstruction and imputation compared to other baseline models. We illustrate the reconstruction results in \ufb01g. 5 in the appendix C. Based on \ufb01g. 5, we found both GRU and Transformer to perform very poorly, though GRU was able to capture partial periodicity. NODE and NP showed better performance on low frequency samples, but they were not able to reproduce subtle details for high frequency samples. In contrast, our model showed superior performance across all amplitude-frequency classes. We conditionally generate 2,000 samples from the sampled z for each amplitude-frequency class, and for each decoder. As visualized in \ufb01g. 2, NODE produced \ufb02attened samples and NP generated non-periodic signals with a large amplitude regardless of the class conditions. GRU and Transformer were able to produce periodic signals, but all 2,000 samples were nearly identical with minimal sample diversity. In contrast, our model was able to generate diverse periodic signals. In order to verify whether the generated samples correctly re\ufb02ect the class conditions, we conduct Fourier series analysis, which decomposes a given signal into multiple sines and cosines as expressed in eq. (1). From the analysis, we can calculate which frequency is used to compose the signal and its corresponding coef\ufb01cients (i.e. amplitude). We plot histograms on both amplitude and frequency for the toy dataset, and compare the baselines with our model. The results are shown in \ufb01g. 3 for \u2018Low Amplitude & High Frequency\u2019 and \u2018High Amplitude & High Frequency\u2019, the rest two class conditions are described in \ufb01g. 6 in the appendix C. In all baseline models, their amplitude and frequency histograms are similarly shaped across the two classes, which implies that those models fail to re\ufb02ect the class conditions when generating samples. Our model overlaps with the dataset 1We also conduct experiments without sampling to compare model performance in two scenarios (sparse irregular timeseries VS dense regular timeseries). The results are reported in appendix D, where our model outperforms the baselines. However, our model is particularly more powerful when input sequences are irregular, indicating its usefulness in handling real signals where irregularities exist due to missing timesteps [32]. 4 \fLow Amplitude Low Frequency Low Amplitude High Frequency High Amplitude Low Frequency High Amplitude High Frequency Figure 2: Conditionally generated samples in toy dataset. We draw \ufb01ve generated samples for each model plotted in different colors. Our model is able to generate diverse samples while assimilating class conditions whereas baseline models either collapse, diverge, or have a low sample diversity. distribution most precisely compared to the other baselines, suggesting that the periodic signals generated by our model are properly conditioned on the class. 
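The Fourier-series check described above can be implemented directly with the real FFT, since each toy signal spans one period of unit length; the sketch below recovers per-component frequencies and amplitudes and aggregates them into histograms. The amplitude threshold and histogram bins are illustrative assumptions.

```python
import numpy as np

def fourier_components(signal, threshold=0.5):
    """Decompose a length-T signal (one unit-length period) with the real FFT
    and return the (frequency, amplitude) pairs of its significant sinusoidal
    components. The amplitude threshold is an illustrative choice."""
    T = len(signal)
    coeffs = np.fft.rfft(signal)
    amps = 2.0 * np.abs(coeffs) / T        # amplitude of each sinusoidal term
    amps[0] = np.abs(coeffs[0]) / T        # DC term is not doubled
    freqs = np.arange(len(coeffs))         # integer frequencies for a unit period
    keep = amps > threshold
    return freqs[keep], amps[keep]

def amp_freq_histograms(batch):
    """Aggregate component frequencies/amplitudes over a batch of generated
    signals, for comparison against the histograms of the original dataset."""
    all_f, all_a = [], []
    for x in batch:
        f, a = fourier_components(x)
        all_f.extend(f.tolist())
        all_a.extend(a.tolist())
    freq_hist, _ = np.histogram(all_f, bins=np.arange(0, 16))
    amp_hist, _ = np.histogram(all_a, bins=np.linspace(0, 10, 21))
    return freq_hist, amp_hist
```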
4.3 Experiment Results of Electrocardiogram Table 2: Reconstruction and imputation MSE on Physionet2021 Reconstruction Imputation GRU 77.393 75.327 Transformer 68.823 35.692 NODEs 4.937 3.262 NP 2.630 1.840 Fourier (Ours) 2.164 1.519 We report reconstruction and imputation results in table 2. Our model shows the lowest MSE in both reconstruction and imputation. The results are visualized by \ufb01g. 7 in appendix C. Based on \ufb01g. 7, GRU and Transformer cannot capture the peak of given ECG records. Although NODE and NP could grasp the peak point, they disregard details such as subtle waves in the isoelectric line, the straight line on the ECG. Our model can both capture the peak and the subtle \ufb02uctuations clearly. We conditionally generate 3,000 samples for each diagnosis and decoder. As visualized in \ufb01g. 4, our model generates samples that are highly similar to the real dataset. According to Rawshani [24], RBBB diagnostic characteristics include a deep and broad depression after the peak while LBBB has a wide peak and a shallow depression after the peak. AF has a f-wave, a \ufb01brillatory wave in the isoelectric line, and does not have a P-wave, a little uprising before the peak. As shown in \ufb01g. 4, our model captures the necessary characteristics of each diagnosis, and in the \ufb01gure, its signi\ufb01cance is highlighted in red. GRU generates similar samples regardless of the given 5 \fLow Amplitude High Frequency Amplitude Frequency High Amplitude High Frequency Amplitude Frequency GRU Transformer NODE NP Fourier (Ours) GRU Transformer NODE NP Fourier (Ours) GRU Transformer NODE NP Fourier (Ours) GRU Transformer NODE NP Fourier (Ours) Figure 3: Fourier series analysis on conditionally generated samples. We plot histograms on amplitude and frequency for each model from Fourier analysis. Blue color represents the original dataset histogram and orange color represents each model. Note that the Transformer and NP cover more space than original dataset meaning that those models use more sinusoid to generate a single sample than the original dataset. diagnosis, whereas Transformer draws \ufb02at lines after a certain time point. NODE and NP tend to generate smooth ECG signals ignoring all \ufb02uctuations in ECG. Also, they could not synthesize necessary characteristics of the given diagnosis. We quantitatively evaluate the generated samples by using a pre-trained ECG classi\ufb01er. The classi\ufb01er is trained beforehand on the real dataset to classify the three diagnosis. We use total of \ufb01ve classi\ufb01ers trained with a different parameter initialization seed. We run the classi\ufb01er on our generated samples in order to con\ufb01rm whether our samples are classi\ufb01ed to their given diagnosis. We report our results in table 3. Our model outperforms all other baselines for diagnosis-averaged overall scores, showing notable performance in AF. We speculate the reason for this improved performance is that our model is the only model that can synthesize f-wave, a main feature of AF, while other models fail to generate such \ufb01ne oscillations. 5" + }, + { + "url": "http://arxiv.org/abs/2104.02775v1", + "title": "Looking into Your Speech: Learning Cross-modal Affinity for Audio-visual Speech Separation", + "abstract": "In this paper, we address the problem of separating individual speech signals\nfrom videos using audio-visual neural processing. 
Most conventional approaches\nutilize frame-wise matching criteria to extract shared information between\nco-occurring audio and video. Thus, their performance heavily depends on the\naccuracy of audio-visual synchronization and the effectiveness of their\nrepresentations. To overcome the frame discontinuity problem between two\nmodalities due to transmission delay mismatch or jitter, we propose a\ncross-modal affinity network (CaffNet) that learns global correspondence as\nwell as locally-varying affinities between audio and visual streams. Given that\nthe global term provides stability over a temporal sequence at the\nutterance-level, this resolves the label permutation problem characterized by\ninconsistent assignments. By extending the proposed cross-modal affinity on the\ncomplex network, we further improve the separation performance in the complex\nspectral domain. Experimental results verify that the proposed methods\noutperform conventional ones on various datasets, demonstrating their\nadvantages in real-world scenarios.", + "authors": "Jiyoung Lee, Soo-Whan Chung, Sunok Kim, Hong-Goo Kang, Kwanghoon Sohn", + "published": "2021-03-25", + "updated": "2021-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.SD", + "eess.AS", + "eess.IV" + ], + "main_content": "Introduction Humans have a remarkable auditory system that can perceive sound sources separately in their conversations even in the presence of many surrounding sounds, including background noise, crowded babbling, thumping music, and sometimes other loud voices [1, 2]. However, reliably separating a target speech signal for humancomputer interaction (HCI) systems such as speech recognition [3, 4, 5], speaker recognition [6, 7, 8], and emotion recognition [9, 10] is still a challenging task because it is an ill-posed problem. 1 arXiv:2104.02775v1 [cs.CV] 25 Mar 2021 \fWith the impressive advent of deep learning technologies that utilize high-dimensional embeddings [11, 12, 13], it is possible nowadays to simultaneously analyze the unique acoustic characteristics of different speakers even from mixed signals. Although these deep learning-based methods are effective compared to conventional statistical signal processing-based ones, they are prone to a label permutation (or ambiguity) error due to their frame-by-frame or short segment-based processing paradigm [11, 14]. In order to address this problem, permutation invariant training [15, 16] that utilizes a permutation loss criterion was presented, but the label ambiguity problem still occurs at the inference stage, especially for unseen speakers. Leveraging the visual streams of target speech signals can be one of the best alternatives. In psychology, several experiments have proved that looking at speakers\u2019 faces is helpful for auditory perception under background noise environments [17, 18]. For example, lip reading, which matches lip movements onto utterances, is widely used to recognize others\u2019 words better [19]. In audio-visual speech separation (AVSS) systems, audio and visual features are used together or complement each other to derive unique characteristics [20, 21, 22, 23, 24, 25, 26]. Mostly, AVSS \ufb01rst extracts the common correspondence features between speaker/linguistic information of speech signals and face/articulatory lip movements of video signals, after which the extracted features are exploited for the following source separation task. Consequently, the AVSS problem can be viewed as a local matching (i.e. 
frame-wise matching) task, where segmented visual features are matched with frames of speci\ufb01c sounds. Thus, the separation performance highly depends on the alignment accuracy between audio and video streams. In real-world scenarios, however, audio and video are recorded from different devices with their own speci\ufb01cations, and they are transmitted through independent communication channels and saved with different codec protocols. These practical issues frequently cause mutually unaligned states in talking videos. Fig. 1 shows an example of a video with a speech that has physical errors in its video contents, where sometimes audio plays ahead of video and vice versa. When there are even subtle data transformations caused by jitters, omissions, and out-ofsynchronization in video streams, conventional local matching strategies [20, 23, 25] are vulnerable. This issue can be detrimental to the performance of AVSS systems in videotelephony, broadcasting, video conferencing, or \ufb01lming. In this paper, we highlight those limitations and tackle the alignment problems in AVSS processing. We propose a novel cross-modal af\ufb01nity network for robust speech separation, referred to as CaffNet, by utilizing visual cues in consideration of relative timing information. Af\ufb01nity, i.e. mutual correlation, learned in CaffNet compensates for abrupt discontinuities in audio-visual data without external information or additional supervision. Furthermore, we propose an af\ufb01nity regularization module that tiles the diagonal term of the af\ufb01nity matrix to match audio-visual sequences at the utterance level. Since the af\ufb01nity regularization provides a global positional constraint, it avoids the label permutation problem that occurred by inconsistent assignment over time of the speech signals to the visual target. In addition, considering the estimation of the magnitude mask in tandem with the phase mask is one of the keys to reasonable speech reconstruction because such factors are correlated with each other [27, 28]. To accomplish this, we extend CaffNet to have a complex-valued convolution network architecture [29, 30, 31] such that speech quality is indeed increased by restoring the mask of the magnitude and phase spectrum together. We demonstrate the effectiveness of the proposed networks with extensive experiments, achieving large improvements in unconditioned scenarios on several benchmark datasets [32, 33, 34]. 2. Related Work Audio-visual Speech Separation. In terms of multisensory integration, it has been proved that looking at talking faces during conversation is helpful for speech perception [35]. Inspired by this psychological mechanism, numerous works have tried to effectively utilize visual context on speech separation tasks [36, 37, 25, 20, 38, 24, 21, 22, 39, 23]. With the emergence of deep neural networks and the availability of new large-scale datasets [19, 33, 34, 24], a series of works [38, 40] have been published in the past few years on audio-visual speech separation (AVSS). Although they have shown promising results in various speech-oriented applications, these methods have concentrated on isolating the magnitude of speech only, which restricts their applicability. To alleviate this limitation, several works have been proposed to generate both magnitude and phase masks [20, 24, 23, 22]. 
However, these methods generate phase masks with estimated magnitude masks and noisy phase without learning complex-valued representations, which requires them to consider the correlation between each complex component. This limitation has manifested itself in signi\ufb01cantly degraded performance under extremely noisy conditions [31]. Furthermore, the aforementioned methods have all assumed one-to-one correspondence between the audio-visual segments. Most recently, this problem has been tackled in [21, 39] for situations in which visual face sequences were not fully reserved. In [21], although they considered the case that visual cues abruptly disappear due to occlusion, it still required video sequences aligned to audio. Even though the speaker identity extracted from the still face image seemed to provide a promising visual cue for the separation [39], it showed far less separation performance than ones using 2 \fVideo Cross-modal Transformer \ud835\udc46\ud835\udc47\ud835\udc39\ud835\udc47 Noisy speech Audio Encoder Mask Generator Enhanced speech Em Audio Transformer Visual Transformer \ud835\udc56\ud835\udc46\ud835\udc47\ud835\udc39\ud835\udc47 \ud835\udc66 \ud835\udc4c \ud835\udc40 Phase Modulation Magnitude Modulation Magnitude Modulation Phase Video Face Embedding STFT Noisy speech . \u2220. Cross-modal Affinity Estimation \ud835\udc80 < Enhanced speech iSTFT \ud835\udc74 Magnitude Modulation Visual Non-local Layers Audio Non-local Layers Magnitude Domain Mask Decoder \ud835\udc3c?:A \ud835\udc4b Audio Encoder Visual Encoder \ud835\udc7f \ud835\udc4c D \ud835\udc7d \ud835\udc7a \ud835\udc7a G \ud835\udc7d H \ud835\udebf Figure 2: Overall network con\ufb01guration: (1) encoding audio and visual features; (2) learning cross-modal af\ufb01nity; (3) predicting spectral soft mask M to reconstruct target speech \u02c6 Y. A red-dotted box means the magnitude operation processing. visual sequences since they only regulated global information rather than local contexts. In this paper, we leverage sequential audio-visual frames as the inputs to our networks under the assumption that locally misaligned visual frames with audio frames can still provide local context and speaker identity for robust speech separation. Cross-modal Alignment. As audio and video sequences are recorded using different devices, synchronization problem often appears in recordings. Most recent audio-visual synchronization methods rely on cross-modal representation techniques that measure the linguistic similarity between audio-visual embedding pairs [41, 42, 43]. However, there has been little work on exploring the problem of synchronization along with mismatched audio and video pairs because prior works generally assumed that a paired audio and video set has only one speech. More related to our work are af\ufb01nity-based multi-modal approaches in various other challenging tasks, such as music sound separation [44], emotion recognition [45], language understanding [46], and self-supervised learning [47]. We further extend the crossmodal af\ufb01nity learning to generate time-independent audiovisual representations using an af\ufb01nity regularization with an utterance-wise matching criterion. 3. Approach 3.1. Motivation and Overview In general, humans experience severe confusion when there is a linguistic discrepancy between what we see and what we hear, i.e. a difference between the perceived words from the mouth and actual speech. This is called cognitive dissonance, which is known as the McGurk effect [48]. 
This effect could be observed in previous frame-wise matching based methods [32, 20, 23, 38, 25] inducing poor performance when the cross-modal data is con\ufb02icted. To deal with such inconsistency problem, we introduce a CaffNet to estimate time-frequency soft masks to isolate a single speech signal from a mixture of sounds (such as other speakers and background noise), taking into account time-agnostic mutual correlation. Concretely, our model is split into three parts, including an audio-visual encoder, learning cross-modal af\ufb01nity, and soft-mask estimation, as shown in Fig. 2. The key idea of CaffNet is to learn cross-modal af\ufb01nity between the audio and video streams even if they have different sampling rates in the wild environments. By this, we mean that information from the video stream stretches or compresses to match the audio signal for the reconstruction of the target speaker\u2019s speech regardless of the frame discontinuity problem. Due to matching ambiguity generated in parts that are muted or that have similar pronunciations from simultaneous speakers, the initially computed af\ufb01nity includes erroneous values and causes the label permutation problem while degrading the separation performance. We resolve this problem by suggesting an af\ufb01nity regularization to induce global consistency of cross-modal af\ufb01nity. Furthermore, we extend this approach to complex-valued neural networks, estimating the magnitude and phase components jointly. Given a noisy time-domain speech X, CaffNet is trained to isolate a clean speech Y from X with corresponding a user-chosen speaker\u2019s face video I1:T , where T is a length of the video stream. The noisy sound X = Y + H is assumed to be a sum of clean speech Y and natural environmental factors H such as background noise, distortions in speech, and sound from other speakers. As it has been a common practice to transform a time-domain speech to a time-frequency representation (i.e. spectrogram) via shorttime Fourier transform (STFT), each of the corresponding time-frequency representations for X, H, and Y is computed by 512-point STFT and denoted by X \u2208C, H \u2208C and Y \u2208C, respectively. 3.2. Cross-modal Af\ufb01nity Network Audio-visual Encoder. As in [20, 21, 44], the audiovisual encoder has a two-stream architecture consisting of an audio encoder stream and a video encoder stream, which take noisy audio and video frames containing the target face, respectively. At \ufb01rst, the audio and video encoders generate their own embedding features independently. In speci\ufb01c, the audio encoder Fs takes the magnitude spectrum of consecutive input frames, |X|. The audio embedding features are extracted by stacked 1D convolutional lay3 \f\u2019) \ud835\udc98* \ud835\udc98+ \ud835\udc98, \ud835\udc68 Affinity Reg. \ud835\udf1e \ud835\udc7d 0 \ud835\udc7a 0 \ud835\udc7d 2 \ud835\udc40\u00d7\ud835\udc36 \ud835\udc41\u00d7\ud835\udc36 \ud835\udc40\u00d7\ud835\udc41 \ud835\udc40\u00d7\ud835\udc41 \ud835\udc41\u00d7\ud835\udc36 Figure 3: Illustration of cross-modal af\ufb01nity estimation module. It takes speech feature \u00af S and video feature \u00af V to calculate the af\ufb01nity matrix A. The cross-modal identity matrix \u0393 regularizes the af\ufb01nity matrix A to maintain global correspondence. ers S = Fs(|X|), where S \u2208RN\u00d7C is a speech embedding feature, C and N indicate the dimension of a channel and the temporal length of the spectrogram, respectively. 
Besides, visual features are extracted from the temporal stack of \ufb01ve consecutive video frames via the stateof-the-art audio-visual synchronization model E(\u00b7) [41] by a feed-forward process. Finally, a visual embedding feature V \u2208RM\u00d7C is obtained through the visual encoder Fv: V = Fv(\u03a0(Ef(I1:5), Ef(I2:6), \u00b7 \u00b7 \u00b7 , Ef(IT -4:T ))), (1) where I denotes video frames, T is the number of video frames, \u03a0(\u00b7) indicates a concatenation operator, and M = T \u22124 is an entire length of clips tied 5 frames. As the audio and video have different sampling rates, it requires either up-sampling or down-sampling process to equalize the temporal resolution of audio and video embedding matrices [20, 23]. However, our network is fully-convolutional network that effectively learns the audio-visual af\ufb01nity regardless of the size of each embedding matrix. Learning Cross-modal Af\ufb01nity. We assume that audio and video embeddings are naturally out of joint in the unconstrained environments due to temporal mismatch between two media. Considering the fact that learning af\ufb01nity can draw linguistic dependencies between audio and visual features, it is possible to model relative timing dependencies without considering their distances. More speci\ufb01cally, we \ufb01rst extract audio feature \u00af S and visual feature \u00af V by feeding outputs of the audio-visual encoder to modality-separated two non-local layers [49] to measure the af\ufb01nity on the nearest embedding space as possible. Then, an af\ufb01nity matrix Ai,j between i-th audio feature and j-th visual feature is computed using cosine similarity with L2 normalization on an embedding space: Ai,j = softmax( < \u00af Siws, \u00af Vjwv > \u2225\u00af Siws\u22252\u2225\u00af Vjwv\u22252 ), (2) Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (a) Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (b) Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (c) Figure 4: Visualization of af\ufb01nity matrices. (a) When audio contains only one voice, it shows monotonous pattern with audio-visual correspondence. (b) When there is mixture input, the pattern is tangled in absence of sequential consistency. (c) Final af\ufb01nity is arranged with the help of af\ufb01nity regularization. where ws and wv are embedding weights of audio and visual features, respectively, as illustrated in Fig. 3. In (2), softmax activation function is applied row-wise for normalization to obtain an af\ufb01nity matrix A \u2208RN\u00d7M. Ideally, the linear pattern can be found along the diagonal of the af\ufb01nity matrix as shown in Fig. 4(a). However, we observe that the af\ufb01nity weights in different frames are often similar in the silent speech or regions having similar pronunciation between different speakers as depicted in Fig. 4(b), which can cause the label permutation problem. 
To resolve this problem, we propose an af\ufb01nity regularization to penalize the probabilistic af\ufb01nity matrix corresponding to global alignment context to reliably infer the spectrogram mask of an interest such that \u0393i,j = exp(P i,j Dk i,jAi,j/\u03c4) P k exp(P i,j Dk i,jAi,j/\u03c4), (3) where \u0393 is a cross-modal identity matrix, k \u2208Nk is the search window of offset range across the diagonal term, \u03c4 is a softening parameter [50] set as 0.1, and D is a diagonal mask which satis\ufb01es: Dk i,j = ( 1, if fa fv (j \u2212k) + 1 \u2264i \u2264fa fv (j \u2212k + 1) 0, otherwise, (4) where f\u2217is a sampling rate of each segment (e.g., fa/fv = 4 in our experimental setting). In our experiments, we search for the offsets over [\u22129, +9] frame range, where negative offset means that audio is ahead of video and vice versa. We set Nk \u2208[0, \u00b7 \u00b7 \u00b7 , 19] and if the input pair is matched in a timely manner, diagonal term appears from the 9-th video frame index as shown in Fig. 4(c). Fig. 4 clearly shows that the regularization encourages the model to maintain temporal consistency in the matching process. Then the \ufb01nal visual features \u02c6 V \u2208RN\u00d7C\u2032 are then represented as follows: \u02c6 Vi = P j(Ai,j + \u03b3\u0393i,j) \u00b7 (wo \u00af Vj)\u22a4, (5) 4 \fwhere the identity matrix \u0393 is added to the initial af\ufb01nity matrix A without breaking its global behavior [49, 51]. wo is a projection parameter which has C\u2032 output channel. Balance parameter \u03b3 is set to 1.0 in our experiments. Soft Mask Estimation. The mask decoder Fm takes both the transformed visual features \u02c6 V and corresponding audio features \u00af S to generate a soft mask [52], which \ufb01lters the mixture spectrogram to produce the enhanced spectrogram. Audio-visual features are concatenated over the channel dimension to compute an integrated feature \u03a8 = \u03a0(\u00af S, \u02c6 V). In this way, each audio feature is associated with corresponding visual features, which will be used to recover clean speech. We employ the similar mask decoder architecture used in [20] as the residual building block of our decoder. The sequentially up-scaled output to the original size of the input spectrogram is then passed through sigmoid activation to regularize output values between 0 and 1. Finally, the estimated speech spectrogram \u02c6 Y = M \u2299|X| is computed by element-wise multiplying the estimated mask M = Fm(\u03a8) on the input spectrogram |X|. Then, the estimated speech \u02c6 Y is computed by inverse STFT. We note that the architectural detail of CaffNet is explained in supplementary materials. Training. The terminal objective of our model is to estimate a target speech Y of an interest person associated visual inputs. During training, while previous local matching methods assume that audio is correctly aligned to video [20, 23, 25], we consider that sometimes the audio stream leads the video or sometimes the video stream is going ahead of the audio. To accomplish these cases in the training scheme, CaffNet leverages a video clip that is recorded a little longer than the randomly sampled audio in the datasets. However, there is no additional label on what time the audio will be matched to in the video. To train CaffNet, we minimize the magnitude loss LMAG that makes the magnitude of enhanced spectrogram be similar to that of clean spectrogram on a logarithmic scale [39]: LMAG(Y, \u02c6 Y) = \u2225log(|Y|/| \u02c6 Y|)\u22252. (6) 3.3. 
Complex Cross-modal Af\ufb01nity Network In this section, we explain how to further improve the separation ability of CaffNet generating complex ratio mask that considers magnitude and phase simultaneously with simple modi\ufb01cations. The complex model, CaffNet-C, has a similar architecture con\ufb01guration as that of the CaffNet. Although using only CaffNet provides satisfactory performance, we upbuild in\ufb02ated complex CaffNet (CaffNet-C) based on complex-valued building blocks [30] to handle complex matrices represented in the spectrograms. In tasks related to audio signal reconstruction, such as speech enhancement [31] and separation [29], it is ideal to perform correct estimation of both components. Details on batch normalization and weight initialization for complex networks can be found in [30, 31]. Different from CaffNet, which solely takes the magnitude of spectrogram as input, the audio encoder stream of CaffNet-C leverages the whole amount of complex-valued spectrogram to extract the audio embedding feature with stacked complex-valued convolutional layers such that S = Fc s(X), where S \u2208RN\u00d7D\u00d72 contains the real and imaginary parts of a complex number, and Fc s denotes complex audio encoder. Inspired by [31], we decode the corresponding phase along with both the noisy phase and magnitude from the feature representation step. This solution makes the complex-valued mask M estimated using the magnitude feature and noise phase feature at the same time. The noise phase is re\ufb01ned to clean phase with the complex mask decoder Fc m: M = Fc m(\u03a0(|\u00af S|, \u02c6 V) \u00b7 ei\u03b8\u00af S). (7) The estimated speech spectrogram \u02c6 Y is computed by multiplying the estimated mask M on the input spectrogram X: \u02c6 Y = M \u2299X = |M| \u00b7 |X| \u00b7 ei(\u03b8M+\u03b8X). (8) Finally, we compute the estimated speech \u02c6 Y with inverse STFT. By inducing complex convolutions in CaffNet-C, we use the scale-invariant source-to-distortion ratio (SI-SDR) to the objective function such as LSI-SDR(Y, \u02c6 Y ) = \u2212< Y, \u02c6 Y > /\u2225Y \u2225\u2225\u02c6 Y \u2225, (9) where it makes more phase sensitive, as inverted phase gets penalized as well. Combining (6) and (9), the \ufb01nal overall objective function in the CaffNet-C is given by LALL = LMAG + \u03b1LSI-SDR, where \u03b1 is a hyper parameter to balance two objective functions and we set it to 1.0. 4. Experiments 4.1. Setup Datasets. Our networks are evaluated on three commonly used AVSS benchmarks: Lip Reading Sentences 2 (LRS2) [53, 19, 54], Lip Reading Sentences 3 (LRS3) [33], and VoxCeleb2 [34] datasets. LRS2 and LRS3 include 224 and 475 hours of videos respectively, along with cropped face tracks of the speakers. While LRS2 is sourced from British television broadcasts, LRS3 contains TED and TEDx videos. Following [21], we remove the few speakers from the LRS3 training set that also appear in the test set, so that there is no overlap of identities between the two subsets. Hence, the test set includes only unseen and unheard speakers during training and is suitable for a speakeragnostic evaluation of our methods. VoxCeleb2 contains over 1 million utterances spoken by 5,994 speakers. 
They 5 \f-9 -8 -7 -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 +7 +8 +9 0 2 4 6 8 10 12 14 Frame offset V-Conv V-Conv+PM LWTNet CaffNet CaffNet-C (a) SDRi (dB) -9 -7 -5 -3 -1 0 +1 +3 +5 +7 +9 0 2 4 6 8 10 12 Frame offset LWTNet CaffNet-C (b) SDRi (dB) -9 -7 -5 -3 -1 0 +1 +3 +5 +7 +9 0 0.2 0.4 0.6 0.8 1 Frame offset LWTNet CaffNet-C (c) PESQi Figure 5: Evaluation of AVSS performance with respect to each delay offset between audio and visual streams on LRS2 dataset. The frame offset unit is 40ms which is the duration length between consecutive video frames. (a) reports the SDRi evaluation using ground-truth phase with estimated magnitude spectrum. (b) and (c) report the SDRi and PESQi evaluation results on the predicted phase as well as estimated magnitude spectrum, respectively. provide the pre-train set and test set, and we follow this setting in our experiments. It is assumed that all datasets are well-synchronized [20, 21], so we adapt them to our purposes by augmenting data. Furthermore, VoxCeleb2 is divided into training and test sets according to the speaker\u2019s identity to explicitly assess whether our model can generalize to unseen speakers during training. Thus training and testing sets are disjoint in terms of speaker\u2019s identities. Data Sampling Protocol. As mentioned in Sec. 3, we premise that audio-visual data is obtained in asynchronous circumstances. We assume that any given frame possesses the same time shift, so the visual stream is randomly shifted by \u22129 to 9 frames. Although we randomly shift video frames to assume the discontinuities of training data, the corresponding video clips contain all the audio-response information. For example, if the audio is sampled in the time duration [T, T + \u03b4] and the video is shifted by \u22129 frames, i.e. 0.36s, we extract the video frames during the time duration [T \u22120.36, T + \u03b4]. If the selected time offset is 9, the video frames are extracted within the time duration [T, T + \u03b4 + 0.36]. For consistency and fair evaluation, we follow the similar evaluation settings in the previous work [20]. To generate synthetic training examples, we \ufb01rst select a source pair consisting of visual and audio features by sampling 2 seconds randomly. Source speech is mixed with randomly selected other speaker\u2019s speech signal in the time domain, to simulate multi-talker backgrounds signals. Features. We use a recent audio-visual synchronization model [41] for extracting visual features to serve as its visual input. The input to the visual stream is a video of cropped facial frames, with a frame rate of 25 fps. For every video frame, it outputs a compact 512-dimensional feature vector. For audio features, we use a time-frequency representation via STFT with a 25ms window length and a 10ms hop length as a sampling rate of 16kHz. Note that the extraction of face embeddings follows prior work [22]. Evaluation Metrics. We use three metrics to compare the results of our method to previous methods [20, 22]. First, the signal-to-distortion ratio (SDR) [55] is commonly used metric in recent works [20, 21, 38] to investigate the quality of enhanced speech. Following the previous work [20], we also report results on the perceptual evaluation of speech quality (PESQ) [56] varying from -0.5 to 4.5 and the shorttime objective intelligibility (STOI) [57], which is correlated with the intelligibility of degraded speech signals. 
In the following experiments, we report SDR improvement (SDRi) and PESQ improvement (PESQi) for a fair comparison with other methods since the testing samples are randomly generated by combinations of the test set. Baseline Models. For the fair comparison with CaffNet, we reproduce the magnitude network of \u2018V-Conv\u2019 [20], which is trained with the magnitude loss only. Also, since V-Conv model only assumes the synchronization circumstances between audio and visual streams, we combine V6 \fDataset Method SDRi \u2191 PESQi \u2191 STOI \u2191 GT GL PR MX GT GL PR MX GT GL PR MX LRS2 V-Conv [20] 11.28 -4.36 6.73 1.35 0.63 0.75 0.89 0.85 0.86 LWTNet [22] 6.88 -4.61 4.06 3.77 0.65 0.16 0.32 0.29 0.77 0.72 0.73 0.74 CaffNet (ours) 11.16 -3.49 6.79 1.29 0.63 0.73 0.89 0.85 0.86 CaffNet-C (ours) 12.46 -2.54 10.01 7.94 1.15 0.65 0.94 0.73 0.89 0.84 0.88 0.86 LRS3 V-Conv [20] 11.23 -1.37 7.00 1.08 0.55 0.61 0.86 0.82 0.83 LWTNet [22] 7.71 -3.93 4.83 4.44 0.82 0.35 0.49 0.45 0.84 0.80 0.82 0.82 CaffNet (ours) 10.22 -2.64 6.46 1.06 0.49 0.60 0.86 0.82 0.84 CaffNet-C (ours) 12.31 -1.38 9.78 7.92 0.91 0.49 0.71 0.55 0.86 0.82 0.85 0.83 Table 1: Evaluation of AVSS performance on the LRS2 and LRS3 datasets when audio and visual inputs are in synchronous condition. Contrary to other methods [22, 20], CaffNet and CaffNet-C are trained in unconditioned circumstance, i.e. training with randomly given frame offsets. GT: ground-truth phase; GL: Grif\ufb01n-Lim; PR: predicted phase; MX: mixture phase. Conv with a cutting-edge synchronization method proposed in [41, 58] to deal with asynchronous samples, referred to as \u2019V-Conv+PM\u2019. Furthermore, we examine the performance of LWTNet [22] using delayed test samples, where this method includes an independent synchronization module. 4.2. Results LRS2 and LRS3. In Fig. 5, the proposed models, CaffNet and CaffNet-C, show robust performance in asynchronized environment while the previous methods have signi\ufb01cant degradation in performance when the two frames were delayed. Although V-Conv+PM system has the synchronization step before the separation, it is clear that our methods are more robust to these temporal shifts. Because VConv+PM is a cascaded-step system, errors in the \ufb01rst step have a negative impact on the second. For that reason, VConv+PM even shows less effective results, even compared to V-Conv when audio and visual streams are synchronized. Furthermore, despite the alignment step in LWTNet [22], its accuracy is poorer than our methods on delayed samples. When the two streams are well-aligned, our method provides competitive performance compared to baseline methods. In Tab. 1, we summarize the comparison results on the LRS2 and LRS3 datasets without a random shift on video streams (i.e., frame offset is 0). Even though our network is trained in an unconditioned environment where audio and visual streams are not synchronized, our method outperforms existing methods [20, 22] which are trained with the synchronized audio-visual streams. To validate the generality of our models, we investigate the performance on the LRS3 dataset, summarized in Tab. 1. Each model is only trained on the LRS2 dataset and evaluated on the LRS3 dataset without an additional adaptation process. Overall performances are similar to those on the LRS2 dataset. CaffNet-C achieves the best SDRi of 12.31 and still outperforms all the others, where the reconstructions are obtained using the magnitudes predicted by our network and either the ground truth phase. 
This demonstrates that our methods can be generalized to other datasets. Furthermore, the results show that phase estimation helps Speaker Mixture Phase estimation (Source-Reference) GT GL PR MX Seen Male-Female 7.35 -5.46 3.54 3.33 Female-Male 7.51 -3.63 3.99 3.71 Male-Male 8.21 -4.61 4.24 3.91 Female-Female 7.24 -3.41 3.91 3.58 Unseen Male-Female 8.22 -5.12 4.80 4.42 Female-Male 7.34 -3.42 3.99 3.71 Male-Male 7.35 -5.46 3.55 3.33 Female-Female 6.37 -4.23 3.15 2.92 Table 2: Evaluation of SDRi on CaffNet-C with regarding to gender combinations on the VoxCeleb2 dataset reconstruct the magnitude. Compared to CaffNet, CaffNetC improves the SDRi by 2.09dB using ground-truth phase on the LRS3 dataset, respectively. VoxCeleb2. In order to explicitly investigate whether our model can be generalized to speakers unseen during training, we \ufb01ne-tune and test on the VoxCeleb2 in Tab. 2. The training and test sets are disjoint in terms of speaker identities. We evaluate performance based on gender combinations, as many previous speech separation methods have shown signi\ufb01cant performance degradation when mixtures involve same-gender speech [11, 39]. Although our method shows slight drops in performance on the female-female set, the result on the male-male set is similar to those of subsets including different genders. 4.3. Analysis We conduct extensive experiments to describe the contribution of our method. Note that all experiments in this section are performed on the LRS2 dataset with CaffNet-C. Channel Latency. Our premise is that temporal alignment of cross-modal can be achieved in our networks without additional supervision for ground-truth mapping between each pair of input streams. As shown in Fig. 6, CaffNet-C surprisingly \ufb01nds appropriate frame offsets without any extra supervision. It demonstrates that our method reliably infers clean speech with well-aligned visual streams. Although we obtain the highest SDR in the 7 \fExperiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (a) -9 offset (12.2dB) Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (b) 0 offset (12.8dB) Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (c) +9 offset (12.6dB) Figure 6: Qualitative results of af\ufb01nity matrices according to the frame delay offsets. If the frame offset is negative, a linear pattern along the diagonal of the af\ufb01nity matrix is displaced higher than zero offset case, and if it is positive, the pattern is shown lower than (b). (\u00b7) is SDR. AR Magnitude Phase Delay offset -5 0 5 \u0017 Prediction GT 8.01 7.77 7.71 \u0017 Prediction GL -3.93 -4.24 -3.96 \u0017 Prediction MX 4.37 3.99 3.86 \u0017 Prediction PR 5.13 4.75 4.62 \u0013 Prediction GT 12.42 12.48 12.41 \u0013 Prediction GL -3.05 -2.67 -2.54 \u0013 Prediction MX 7.96 7.97 7.85 \u0013 Prediction PR 9.95 9.92 9.88 Table 3: Evaluation of SDRi to demonstrate the effectiveness of af\ufb01nity regularization. \u2018AR\u2019 refers to the presence or absence of regularization. synchronous setting, there are only slight differences between the performance despite delays in audio and video. Af\ufb01nity Regularization. One might also ask whether af\ufb01nity regularization of our method is helpful. To verify its ef\ufb01cacy, we conducted an ablation study with CaffNet-C in Tab. 3. 
Since the af\ufb01nity regularization has induced the global correspondence term for reliable speech separation, SDRi increases about 4 dB. It provides reasonable evidence for the effects of the af\ufb01nity regularization. Jitter. We then assessed the impact of our method from jitter, another well-known discontinuity problem frequently observed due to independent packetization of the audiovideo streams. As a proof of concept, we assumed that no video frames are transmitted from t to t+\u03c4 frames, where \u03c4 is chosen as a random integer number such that \u03c4 \u22648 (\u22480.3 s). Considering a simple frame repetition method that can be used in the jitter situation, we replaced the missing video frames with the (t\u22121)-th frame. CaffNet-C shows compliant performance with 5.64 dB even in these challenging situations, in terms of SDRi on the testing set of LRS2 dataset. As exempli\ufb01ed in Fig. 7, it shows that our network clearly distinguishes two global terms in the af\ufb01nity matrix when the frame jitter arises in the visual stream. Speech Recognition. To verify the intelligibility of the outputs, we further exploit the estimated speech signals on Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index Jitter (a) With jitter Experiments SDR: Signal to Distortion Ratio PESQ: Perceptual Evaluation of Speech Quality Visual index Audio index (b) Without jitter Figure 7: Example of af\ufb01nity matrices with jitter problem. In this case, the jitter is happen for 0.4s on the 25th frame, thus diagonal components are disconnected and reappeared from the 35th frame. Magnitude Phase Delay offset Avg. \u2193 -5 0 5 Mixture MX 84.91 84.91 Ground-truth GT 17.74 17.74 Prediction GT 31.70 32.96 31.90 32.18 Prediction GL 40.54 39.38 39.74 39.88 Prediction MX 38.35 37.01 37.73 37.69 Prediction PR 35.88 35.08 34.83 35.26 Table 4: Automatic speech recognition on the LRS2 dataset. another task. Speci\ufb01cally, we conduct an additional experiment for automatic speech recognition with enhanced speech signals. To do this, we utilize the speech recognition API of Google cloud system 1 and compute the word error rate (WER) as an automatic metric to evaluate the accuracy of recognition. Firstly, we obtain the WER of 17.74% on the clean ground-truth set, which is the best result that we can achieve in this setup, while the WER on the mixture set is 84.95%. In Tab. 4, CaffNet-C achieves 35.26% error rate when we use separated speech signals with the network setup of phase prediction (PR). It clearly shows that there is no meaningful difference by varying delay offset. 5." + }, + { + "url": "http://arxiv.org/abs/1908.05913v1", + "title": "Context-Aware Emotion Recognition Networks", + "abstract": "Traditional techniques for emotion recognition have focused on the facial\nexpression analysis only, thus providing limited ability to encode context that\ncomprehensively represents the emotional responses. We present deep networks\nfor context-aware emotion recognition, called CAER-Net, that exploit not only\nhuman facial expression but also context information in a joint and boosting\nmanner. The key idea is to hide human faces in a visual scene and seek other\ncontexts based on an attention mechanism. Our networks consist of two\nsub-networks, including two-stream encoding networks to seperately extract the\nfeatures of face and context regions, and adaptive fusion networks to fuse such\nfeatures in an adaptive fashion. 
We also introduce a novel benchmark for\ncontext-aware emotion recognition, called CAER, that is more appropriate than\nexisting benchmarks both qualitatively and quantitatively. On several\nbenchmarks, CAER-Net proves the effect of context for emotion recognition. Our\ndataset is available at http://caer-dataset.github.io.", + "authors": "Jiyoung Lee, Seungryong Kim, Sunok Kim, Jungin Park, Kwanghoon Sohn", + "published": "2019-08-16", + "updated": "2019-08-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.HC", + "cs.MM" + ], + "main_content": "Introduction Recognizing human emotions from visual contents has attracted signi\ufb01cant attention in numerous computer vision applications such as health care and human-computer interaction systems [1, 2, 3]. Previous researches for emotion recognition based on handcrafted features [4, 5] or deep networks [6, 7, 8] have mainly focused on the perception of the facial expression, based on the assumption that facial images are one of the most discriminative features of emotional responses. In this regard, the most widely used datasets, such as the AFEW [9] and FER2013 [10], only provide the cropped and aligned facial images. However, those conventional This research was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science and ICT (NRF2017M3C4A7069370). \u2217Corresponding author (a) (b) (c) Figure 1. Intuition of CAER-Net: for untrimmed videos as in (a), conventional methods that leverage the facial regions only as in (b) often fail to recognize emotion. Unlike these methods, CAER-Net focuses on both face and attentive context regions as in (c). methods with the facial image dataset frequently fail to provide satisfactory performance when the emotional signals in the faces are indistinguishable and ambiguous. Meanwhile, people recognize the emotion of others from not only their faces but also surrounding contexts, such as action, interaction with others, and place [11, 12]. Given untrimmed videos as in Fig. 1(a), could we catch how a woman feels solely from her facial expression as in Fig. 1(b)? It is ambiguous to estimate the emotion only with cropped facial videos. However, we could easily guess the emotion as \u201csurprise\u201d with her facial expression and contexts that an another woman comes close to her as shown in Fig. 1(c). Nevertheless, such contexts have been rarely considered in most existing emotion recognition methods and benchmarks. Some methods [13, 14] have shown that emotion recognition performance can be signi\ufb01cantly boosted by considering context information such as gesture and place [13, 14]. In addition, in visual sentimental analysis [15, 16] that recognizes the sentiment of an image, similar to emotion recognition but not tailored to humans, the holistic visual appearance was used to encode such contexts. However, these approaches are not practical for extracting the salient context information from visual contents. Moreover, largearXiv:1908.05913v1 [cs.CV] 16 Aug 2019 \fscale emotion recognition datasets, including various context information close in real environments, are absence. 
To overcome these limitations, we present a novel framework, called Context-Aware Emotion Recogntion Networks (CAER-Net), to recognize human emotion from images and videos by exploiting not only human facial expression but also scene contexts in a joint and boosting manner, instead of only focusing on the facial regions as in most existing methods [4, 5, 6, 7, 8]. The networks are designed in a twostream architecture, including two feature encoding stream; face encoding and context encoding streams. Our key ingradient is to seek other relevant contexts by hiding human faces based on an attention mechanism, which enables the networks to reduce an ambiguity and improve an accuracy in emotion recognition. The face and context features are then fused to predict the emotion class in an adaptive fusion network by inferring an optimal fusion weight among the two-stream features. In addition, we build a novel database, called ContextAware Emotion Recognition (CAER), by collecting a large amount of video clips from TV shows and annotating the ground-truth emotion category. Experimental results show that CAER-Net outperforms baseline networks for contextaware emotion recognition on several benchmarks, including AFEW [9] and our CAER dataset. 2. Related Work Emotion recognition approaches. Most approaches to recognize human emotion have focused on facial expression analysis [4, 5, 6, 7, 8]. Some methods are based on the facial action coding system [17, 18], where a set of localized movements of the face is used to encode facial expression. Compared to conventional methods that have relied on handcrafted features and shallow classi\ufb01ers [4, 5], recent deep convolutional neural networks (CNNs) based approaches have made signi\ufb01cant progress [6]. Various techniques to capture temporal dynamics in videos have also been proposed making connections across the time using recurrent neural networks (RNNs) or deep 3D-CNNs [19, 20]. However, most works have been relied on human face analysis, and thus they have limited ability to exploit context information for emotion recognition in the wild. To solve these limitations, some approaches using other visual clues have been proposed [21, 22, 13, 14]. Nicolaou et al. [21] used the location of shoulders and Schindler et al. [22] used the body pose to recognize six emotion categories under controlled conditions. Chen et al. [13] detected events, objects, and scenes using pre-learned CNNs and fused each score with context fusion. In [14], manually annotated body bounding boxes and holistic images were leveraged. However, [14] have a limited ability to encode dynamic signals (i.e., video) to estimate the emotion. Moreover, the aforementioned methods are a lack of practical solutions to extract the sailent context information and exploit it to context-aware emotion recognition. Emotion recognition datasets. Most of the datasets that focus on detecting occurrence of expressions, such as CK+ [23] and MMI [24], have been taken in lab-controlled environments. Recently, datasets recorded in the wild condition for including naturalistic emotion states [9, 25, 26] have attracted much attention. AFEW benchmark [9] of the EMOTIW challenge [27] provides video frames extracted from movies and TV shows, while SFEW database [25] has been built as a static subset of the AFEW. FER-Wild [26] database contains 24,000 images that are obtained by querying emotion-related terms from search engines. 
MS-COCO database [28] has been recently annotated with object attributes, including some emotion categories for human, but the attributes are not intended to be exhaustive for emotion recognition, and not all people are annotated with emotion attributes. Some studies [29, 30] built the database consisting of a spontaneous subset acquired under a restrictive setting to establish the relationship between emotion and body posture. EMOTIC database [14] has been introduced providing the manually annotated body regions which contains emotional state. Although these datasets investigate a different aspect of emotion recognition with contexts, a large-scale dataset for context-aware emotion recognition is absence that contains various context information. Attention inference. Since deep CNNs have achieved a great success in many computer vision areas [31, 32, 33], numerous attention inference models [34, 35] have been investigated to identify discriminative regions where the networks attend, by mining discriminative regions [36], implicitly analyzing the higher-layer activation maps [34, 35], and designing different architecture of attention modules [37, 38]. Although the attention produced by these conventional methods could be used as a prior for various tasks, it only covers most discriminative regions of the object, and thus frequently fails to capture other discriminative parts that can help performance improvement. Most related methods to our work discover attentive areas for visual sentiment recognition [16, 39]. Although those produce the emotion sentiment map using deep CNNs, it only focuses on image-level sentiment analysis, not human-centric emotion like us. 3. Proposed Method 3.1. Motivation and Overview In this section, we describe a simple yet effective framework for context-aware emotion recognition in images and videos that exploits the facial expression and context information in a boosting and synergistic manner. A simple solution is to use the holistic visual appearance similar \fFace Encoding Stream ... ... Convolution Max-pooling \u2131(#$; & ') Context Encoding Stream Two-stream Encoding Networks Adaptive Fusion Networks \u2131() *; & *) \u2131() $; & $) \u201cSad\u201d Softmax Average-pooling Face Context ) $ ) * # + \u2131(# ,*; & -) \u2131(# +; & .) # ,* #* #$ /$ /* Input Figure 2. Network con\ufb01guration of CAER-Net, consisting of two-stream encoding networks and adaptive fusion networks. to [14, 13], but such a model cannot encode salient contextual regions well. Based on the intuition that emotions can be recognized by understanding the context components of scene, as well as facial expression together, we present an attention inference module that estimates the context information in images and videos. By hiding the facial regions in inputs and seeking the attention regions, our networks localize more discriminative context regions that are used to improve emotion recognition accuracy in a context-aware manner. Concretely, let us denote an image and a video that consists of a sequence of T images as I and V = {I1, . . . , IT }, respectively. Our objective is to infer the discrete emotion label y among K emotion labels {y1, . . . , yK} of the image I or video clip V with deep CNNs. To solve this problem, we present a network architecture consisting of two subnetworks, including a two-stream encoding network and an adaptive fusion network, as illustrated in Fig. 2. 
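Before the individual modules are detailed, the overall composition of the two sub-networks can be summarized in a short sketch; the class and argument names below are ours, not the authors' released code, and the concrete layer configurations follow in Sec. 3.2.

```python
# High-level sketch of the two sub-networks (module names are our own placeholders).
import torch
import torch.nn as nn

class CAERNetSketch(nn.Module):
    def __init__(self, face_stream: nn.Module, context_stream: nn.Module, fusion: nn.Module):
        super().__init__()
        self.face_stream = face_stream        # encodes the face-cropped clip V_F
        self.context_stream = context_stream  # encodes the face-hidden clip V_C with attention
        self.fusion = fusion                  # adaptive fusion -> K-way emotion logits

    def forward(self, face_clip: torch.Tensor, context_clip: torch.Tensor) -> torch.Tensor:
        x_f = self.face_stream(face_clip)         # X_F
        x_c = self.context_stream(context_clip)   # attention-boosted context feature
        return self.fusion(x_f, x_c)              # logits over the K emotion labels
```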
The two-stream encoding networks consist of a face stream and a context stream, in which facial expression and context information are encoded in separate networks. By combining the two features in the adaptive fusion network, our method attains optimal performance for context-aware emotion recognition.

3.2. Network Architectures
3.2.1 Two-stream Encoding Networks
In this section, we first present a dynamic model of our networks for analyzing videos, and then present a static model for analyzing images.

Face encoding stream. As in existing facial expression analysis approaches [6, 20, 40], our networks also have a facial expression encoding module. We first detect and crop the facial regions using off-the-shelf face detectors [41] to build the input of the face stream, VF. The facial expression encoding module is designed to extract the facial expression features, denoted as XF, from the temporally stacked face-cropped inputs VF by a feed-forward process such that

XF = F(VF; WF),    (1)

with face stream parameters WF. The facial expression encoding module is designed based on the basic operations of 3D-CNNs, which are well suited for spatiotemporal feature representation. Compared to 2D-CNNs, 3D-CNNs have a better ability to model temporal information for videos using 3D convolution and 3D pooling operations. Specifically, the face encoding module consists of 5 convolutional layers with 3 × 3 × 3 kernels, followed by batch normalization (BN) and rectified linear unit (ReLU) layers, and 4 max-pooling layers with stride 2 × 2 × 2 except for the first layer. The first pooling layer has a kernel size of 1 × 2 × 2 so as not to merge the temporal signal too early. The numbers of kernels for the five convolution layers are 32, 64, 128, 256, and 256, respectively. The final feature XF is spatially averaged in the average-pooling layer.

Context encoding stream. In comparison to the face encoding stream, the context encoding stream includes a context encoding module and an attention inference module. To extract the context information excluding the facial expression, we present a novel strategy that hides the faces and seeks contexts based on an attention mechanism. Specifically, the context encoding module is designed to extract the context features, denoted as XC, from the temporally stacked face-hidden inputs VC by a feed-forward process:

XC = F(VC; WC),    (2)

with context stream parameters WC. In addition, an attention inference module is learned to extract attention regions of the input, enabling the context encoding stream to focus on the salient contexts. Concretely, the attention inference module takes an intermediate feature XC as input to infer the attention A ∈ R^{H×W}, where H × W is the spatial resolution of XC. To make the attention sum to 1 over all pixels, we spatially normalize the attention A using the spatial softmax [42] as follows:

Â_i = exp(A_i) / Σ_j exp(A_j),    (3)

where Â_i is the attention for context at each pixel i and j ∈ {1, · · · , H × W}. Since we temporally aggregate the features using 3D-CNNs, we only normalize the attention weights across the spatial axes, not the temporal axis. Note that the attention is implicitly learned in an unsupervised manner.
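A minimal sketch of the attention inference module and the spatial softmax of Eq. (3) is given below. It follows the layer description in this subsection (two 3 × 3 × 3 convolutions producing 128 and 1 channels), but the exact tensor shapes and the per-frame normalization for the dynamic model are our reading of the text, not the authors' released code.

```python
# Sketch of the attention inference module with a spatial-only softmax (Eq. 3).
import torch
import torch.nn as nn

class SpatialAttention3D(nn.Module):
    """Two 3x3x3 conv layers (128 -> 1 channels), then a softmax over the H*W positions."""
    def __init__(self, in_channels: int = 256):  # channel count of the intermediate feature (assumption)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, 128, kernel_size=3, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.Conv3d(128, 1, kernel_size=3, padding=1),
            nn.BatchNorm3d(1), nn.ReLU(inplace=True),
        )

    def forward(self, x_c: torch.Tensor) -> torch.Tensor:
        # x_c: (B, C, T, H, W) intermediate context feature
        a = self.conv(x_c)                              # (B, 1, T, H, W) raw attention A
        b, _, t, h, w = a.shape
        a = a.view(b, 1, t, h * w).softmax(dim=-1)      # normalize over spatial positions only
        return a.view(b, 1, t, h, w)                    # \hat{A}: sums to 1 per frame
```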
Attention \u02c6 A is then applied to the feature XC to make the attention-boosted feature \u02c6 XC as follows: \u00af XC = \u02c6 A \u2299XC, (4) where \u2299is an element-wise multiplication operator. Speci\ufb01cally, we use \ufb01ve convolution layers to extract intermediate feature volumes XC followed by BN and ReLU, and 4 max-pooling layers. All max-pooling layers except for the \ufb01rst layer have 2 \u00d7 2 \u00d7 2 kernel with stride 2. The \ufb01rst pooling layer has kernel size 1 \u00d7 2 \u00d7 2 similar to facial expression encoding stream. The number of \ufb01lters for \ufb01ve convolution layers are 32, 64, 128, and 256, respectively. In the attention inference module, we use two convolution layers with 3 \u00d7 3 \u00d7 3 kernels producing 128 and 1 feature channels, followed by BN and ReLU layers. The \ufb01nal feature \u00af XC is spatially averaged in the average-pooling layer. Static model. Dynamic model described above can be simpli\ufb01ed for emotion recognition in images. A static model, called CAER-Net-S, takes both a single frame facecropped image IF and face-hidden image IC as input. In networks, all 3D convolution layers and 3D max-pooling layers are replaced with 2D convolution layers and 2D maxpooling layers, respectively. Thus, our two types of models can be applied in various environments regardless of the data type. Fig. 3 visualizes the attention maps of static and dynamic models. As expected, our networks both with static and dynamic models localize the context information well, except for the face expression. By exploiting the temporal connectivity, the dynamic model can localize more sailent regions compared to the static model. \u201cSad\u201d !\": 0.31 / !#: 0.69 \u201cHappy\u201d !\": 0.91 / !#: 0.09 \u201cFear\u201d !\": 0.26 / !#: 0.74 \u201cSurprise\u201d !\": 0.62 / !#: 0.38 \u201cAnger\u201d !\": 0.75 / !#: 0.25 \u201cSad\u201d !\": 0.13 / !#: 0.87 Figure 4. Some examples of the attention weights, i.e., \u03bbF and \u03bbC, in our networks. 3.2.2 Adaptive Fusion Networks To recognize the emotion by using the face and context information in a joint manner, the features extracted from two modules should be combined. However, a direct concatenation of different features [14] often fails to provide optimal performance. To alleviate this limitation, we build the adaptive fusion networks with an attention model for inferring an optimal fusion weight for each feature XF and \u00af XC. The attentions are learned such that \u03bbF = F(XF ; WD) and \u03bbC = F( \u00af XC; WE) with network parameters WD and WE, respectively. Softmax function make the sum of these attentions to be 1, i.e., \u03bbF + \u03bbC = 1. Fig. 4 shows some examples of the attention weights, i.e., \u03bbF and \u03bbC, in CAER-Net. According to contents, the attention weights are adaptively determined to yield an optimal solution. Unlike methods using the simple concatenation [14], the learned attentions are applied to inputs as XA = \u03a0(XF \u2299\u03bbF , \u00af XC \u2299\u03bbC), (5) where \u03a0 is a concatenation operator. We then estimate the \ufb01nal output y for emotion category by classi\ufb01er: y = F(XA; WG), (6) where WG represents the remainder parameters of the adaptive fusion networks. Speci\ufb01cally, the fusion networks consist of 6 convolution layers with 1 \u00d7 1 kernels. The four layers use to produce fusion attention \u03bbF and \u03bbC. 
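Putting Eqs. (5) and (6) together with the 1 × 1-convolution design of the fusion networks, a possible implementation is sketched below; channel sizes that are not stated in the text and the placement of nonlinearities are our assumptions, so this is an illustration rather than the authors' code.

```python
# Sketch of the adaptive fusion networks (Eqs. 5-6) built from 1x1 convolutions.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, face_dim: int = 256, ctx_dim: int = 256, num_classes: int = 7):
        super().__init__()
        # two 1x1 conv layers per stream predict a scalar attention logit (lambda_F, lambda_C)
        self.att_face = nn.Sequential(nn.Conv1d(face_dim, 128, 1), nn.ReLU(inplace=True),
                                      nn.Conv1d(128, 1, 1))
        self.att_ctx = nn.Sequential(nn.Conv1d(ctx_dim, 128, 1), nn.ReLU(inplace=True),
                                     nn.Conv1d(128, 1, 1))
        # two 1x1 conv layers act as the final classifier, with dropout against overfitting
        self.classifier = nn.Sequential(nn.Conv1d(face_dim + ctx_dim, 128, 1),
                                        nn.ReLU(inplace=True), nn.Dropout(0.5),
                                        nn.Conv1d(128, num_classes, 1))

    def forward(self, x_f: torch.Tensor, x_c: torch.Tensor) -> torch.Tensor:
        # x_f, x_c: (B, C) pooled stream features -> treated as (B, C, 1) for 1x1 convs
        x_f, x_c = x_f.unsqueeze(-1), x_c.unsqueeze(-1)
        logits = torch.cat([self.att_face(x_f), self.att_ctx(x_c)], dim=1)  # (B, 2, 1)
        lam = logits.softmax(dim=1)                                         # lambda_F + lambda_C = 1
        x_a = torch.cat([x_f * lam[:, 0:1], x_c * lam[:, 1:2]], dim=1)      # Eq. (5)
        return self.classifier(x_a).squeeze(-1)                             # Eq. (6): K-way logits
```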
While the intermediate two layers that receive each stream feature as input produce 128 channel feature, the remaining two layers produce 1 channel attention for facial and contextual features. For the two layers that act as \ufb01nal classi\ufb01ers, the \ufb01rst convolution layer produces 128 channel feature followed by ReLU and dropout layers to prevent the problem of the network over\ufb01tting, and the second convolution layer produces K channel feature to estimated the emotional category. \fTime Unlabeled clip (Group & ambiguous) Neutral Happy Surprise Sadness Disgust Fear Anger Labeled clip Time Time Step 1) Step 2) Step 3) Uncollected clip Collected clip Figure 5. Procedure for building CAER benchmark: we divide the video clips to the shot with shot boundary detection method, and remove face-undetected shots, group-level and ambiguous shots to estimate the emotion. Finally, we annotate the emotion category. 4. The CAER Benchmark Most existing datasets [10, 43] have focused on the human facial analysis, and thus they are inappropriate for context-aware emotion recogntion. In this section, we introduce a benchmark by collecting large-scale video clips from TV shows and annotating them for context-aware emotion recogntion. 4.1. Annotation We \ufb01rst collected the video clips from 79 TV shows and then re\ufb01ned them using the shot boundary detector, face detector/tracking and feature clustering 1. Each video clip was manually annotated with six emotion categories, including \u201canger\u201d, \u201cdisgust\u201d, \u201cfear\u201d, \u201chappy\u201d, \u201csad\u201d, and \u201csurprise\u201c, as well as \u201cneutral\u201d. Six annotators were recruited to assign the emotion category on the 20,484 clips of the initial collection. Since all the video clips have audio and visual tracks, the annotators labeled them while listening to the audio tracks for more accurate annotations. Each clip was evaluated by three different annotators. The annotation was performed blindly and independently, i.e. the annotators were not aware of the other annotator\u2019s response. Importantly, in comparison of existing datasets [9, 14], con\ufb01dence scores were annotated as well as emotion category, which can be thought as the probability of the annotation reliability. If two more annotators assigned the same emotion 1https://github.com/pyannote/pyannote-video Category # of clips # of frames % Anger 1,628 139,681 12.33 Disgust 719 59,630 5.44 Fear 514 46,441 3.89 Happy 2,726 219,377 20.64 Neutral 4,579 377,276 34.69 Sad 1,473 138,599 11.16 Surprise 1,562 126,873 11.83 Total 13,201 1,107,877 100 Table 1. Amount of video clips in each category on CAER dataset. categories, the clip was remained in the database. We also removed the clips which have lower con\ufb01dence average under the 0.5. Finally, 13,201 clips and about 1.1M frames were available. The videos range from short (around 30 frames) to longer clips (more than 120 frames). The average of sequence length is 90 frames. In addition, we extracted about 70K static images from CAER to create a static image subset, called CAER-S. The dataset is randomly split into training (70%), validation (10%), and testing (20%) sets. Overall stage of data acquisition and annotation is illustrated in Fig. 5. Table 1 summarizes the number of clips per each cateogry in the CAER benchmark. 4.2. 
Analysis We compare CAER and CAER-S datasets with other widely used datasets, such as EMOTIC [14], AffectNet [43], AFEW [44], and Video Emotion datasets [45], as shown in Table 2. According to the data type, the datasets are grouped into the static and dynamic. Even if static databases for facial expression analysis such as AffectNet [43] and FER-Wild [26] collect a large amount of facial expression images from the web, they have only facecropped images not including surrounding context. In addition, EMOTIC [14] do not contain human facial images, as exampled in Fig. 6, thus causing subjective and ambiguous labelling from observers. On the other hand, commonly used video emotion recognition datasets had insuf\ufb01cient amount of data than image-based datasets [45, 46]. Compared to these datasets, the CAER dataset provides the large-scale video clips which are suf\ufb01cient amount to learn the machine learning algorithms for context-aware emotion recognition. 5. Experiments 5.1. Implementation Details CAER-Net was implemented with PyTorch library [47]. We trained CAER-Net from scratch with learning rate initialized as 5 \u00d7 10\u22123 and dropped by a factor of 10 every 4 epochs. CAER-Net was learned with the cross-entropy loss function [48] with ground-truth emotion labels with batch size to 32. As CAER dataset has various length of \f(a) EMOTIC [14] (b) AffectNet [43] (c) CAER Figure 6. Examples in the EMOTIC [14], AffectNet [43] and CAER. While EMOTIC includes face-unvisible images to yeild ambiguous emotion recognition, AffectNet includes face-cropped images which have limited to use of context. Data type Dataset Amount of data Setting Annotation type Context Static (Images) EMOTIC [14] 18,316 images Web 26 Categories \u0013 AffectNet [43] 450,000 images Web 8 Categories \u0017 CAER-S 70,000 images TV show 7 Categories \u0013 Dynamic (Videos) AFEW [44] 1,809 clips Movie 7 Categories \u0017 CAER 13,201 clips TV show 7 Categories \u0013 Table 2. Comparison of the CAER with existing emotion recognition datasets such as EMOTIC [14], AffectNet [43], AFEW [44], and Video Emotion [45] datasets. Compared to existing datasets, CAER contains large amount of video clips for context-aware emotion recognition. videos, we randomly extracted single non-overlapped consecutive 16 frame clips from every training video which sampled at 10 frames per second. While the clips of facial VF are resized to have the frame size of 96 \u00d7 96, the clips of contextual parts VC are resized to have the frame size of 128 \u00d7 171 and randomly cropped into 112 \u00d7 112 at training stage. We also trained static model of CAER-Net-S with CAER-S dataset with the input size of 224 \u00d7 224. To reduce the effects of over\ufb01tting, we employed the dropout scheme with the ratio of 0.5 between 1 \u00d7 1 convolution layers, and data augmentation schemes such as \ufb02ips, contrast, and color changes. At testing phase, we used a single center crop per contextual parts clips. For video predictions, we split a video into 16 frame clips with a 8 frame overlap between two consecutive clips then average clip predictions of all clips. 5.2. Experimental Settings We evaluated CAER-Net on the CAER and AFEW dataset [9], respectively. For evaluation of the proposed networks quantitatively, we measured the emotion recognition performance by classi\ufb01cation accuracy as used in [27]. 
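The optimization schedule and the overlapping-clip inference described above can be sketched as follows. The choice of SGD is our assumption (the text specifies only the learning-rate schedule, loss, and batch size), and predict_video mirrors the 16-frame clips with an 8-frame overlap used at test time; this is an illustration, not the authors' training script.

```python
# Sketch of the training schedule and sliding-window video inference (assumptions noted above).
import torch
import torch.nn as nn

def make_optimizer(model: nn.Module):
    opt = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)       # lr initialized to 5e-3
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=4, gamma=0.1)   # drop by 10 every 4 epochs
    return opt, sched, nn.CrossEntropyLoss()

@torch.no_grad()
def predict_video(model, face_clip, ctx_clip, clip_len=16, stride=8):
    """Split a video into 16-frame clips with an 8-frame overlap and average the predictions."""
    T = face_clip.shape[2]                        # tensors shaped (1, C, T, H, W)
    probs = []
    for s in range(0, max(T - clip_len, 0) + 1, stride):
        f = face_clip[:, :, s:s + clip_len]
        c = ctx_clip[:, :, s:s + clip_len]
        probs.append(model(f, c).softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)         # averaged class probabilities for the video
```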
We reproduced four classical deep network architectures before the fully-connected layers, including AlexNet [31], VGGNet [32], ResNet [33], and C3D [49], as the baseline methods. We adopt two fully-connected layers as classi\ufb01ers for the baseline methods. We initialized the feature extraction modules of all the baselines using pretrained modMethods w/F w/C w/cA w/fA Acc. (%) CAER-Net-S \u0013 70.09 \u0013 \u0013 65.65 \u0013 \u0013 \u0013 \u0013 73.51 CAER-Net \u0013 74.13 \u0013 \u0013 71.94 \u0013 \u0013 74.36 \u0013 \u0013 \u0013 74.94 \u0013 \u0013 \u0013 75.57 \u0013 \u0013 \u0013 \u0013 77.04 Table 3. Ablation study of CAER-Net-S and CAER-Net on the CAER-S and CAER datasets, respectively. \u2018F\u2019, \u2018C\u2019, \u2018cA\u2019, and \u2018fA\u2019 denote face encoding stream, context encoding stream, context attention module and fusion attention module, respectively. els from two large-scale classi\ufb01cation datasets such as ImageNet [50] and Sports-1M [51], and \ufb01ne-tuned whole networks on CAER benchmark. We trained all parameters of learning rate 10\u22124 for \ufb01ne-tuned models. 5.3. Results on the CAER dataset Ablation study. We analyzed CAER-Net-S and CAERNet with ablation studies as varying the combination of different inputs such as cropped face and context, and attention modules such as context and fusion attention modules. For all those experiments, CAER-Net-S and CAER-Net were trained and tested on the CAER-S and CAER datasets, respectively. For quantitative analysis of ablation study, we \f(a) CAER-Net w/F (b) CAER-Net Figure 7. Confusion matrix of CAER-Net with face stream only and with face and context streams on the CAER benchmark. (a) (b) (c) (d) Figure 8. Visualization of the attention: (from top to bottom) inputs, attention maps of CAER-Net-S and CAER-Net. (a) and (b) are results of ablation study without hiding the face during training, (c) and (d) with hiding the face. examined the classi\ufb01cation accuracy on the CAER benchmark as shown in Table 3. The results show that the best result can be obtained when both the face and context are used as inputs. As our baseline, CAER-Net w/F that considers facial expression only for emotion recognition provides the accuracy 74.13 %. Compared to this, our CAERNet that fully makes use of both face and context shows the best performance. When we compared the static and dynamic models, CAER-Net shows 3.53 % improvement than CAER-Net-S, which shows the importance to consider the temporal dynamic inputs for context-aware emotion recognition. Fig. 7 demonstrates the confusion matrix of CAER-Net w/F and CAER-Net, which also verify that compared to the model that only focuses on facial stream only, a joint model that considers facial stream and context stream simultaneously can highly boost the emotion recognition performance. Happy and neutral accuracies were increased by 7.48% and 5.65%, respectively, which clearly shows that context information helps distinguishing these two categories rather than only using facial expression. Finally, we conducted an ablation study for the context attention module. First of all, when we trained CAER-Net-S and CAERNet without hiding the face, they tended to focus on the most discriminative parts only (i.e., faces) as depicted in the preceding two columns Fig. 8. Secondly, we conducted Anger Disgust Fear Happy Neutral Sad Surprise Category 30 40 50 60 70 80 90 Accuracy Figure 9. 
Quantitative evaluation of CAER-Net-S in comparison to baseline methods on each category in the CAER-S benchmark. Methods Acc. (%) ImageNet-AlexNet [31] 47.36 ImageNet-VGGNet [32] 49.89 ImageNet-ResNet [33] 57.33 Fine-tuned AlexNet [31] 61.73 Fine-tuned VGGNet [32] 64.85 Fine-tuned ResNet [33] 68.46 CAER-Net-S 73.51 Table 4. Quantitative evaluation of CAER-Net-S in comparison to baseline methods on the CAER-S benchmark . another experiment on actionless frames as depicted in the second and last columns. As shown in the last two columns Fig. 8, both CAER-Net-S and CAER-Net attend to not only \u201cthings that move\u201d but also the salient scene that can be an emotion signals. To summarize, our context encoding stream enables the networks to attend salient context that boost performance for both images and videos. Comparison to baseline methods. In Fig. 9 and Table 4, we evaluated CAER-Net-S with baseline 2D CNNs based approaches. The standard networks including AlexNet [31], VGGNet [32], and ResNet [33] pretrained with ImageNet were reproduced for comparison with CAER-Net-S. In addition, we also \ufb01ne-tuned these networks on the CAER-S dataset. Compared to these baseline methods, our CAERNet-S improves the classi\ufb01cation performance than \ufb01netuned ResNet by 5.05%. Moreover, CAER-Net-S consistently performs favorably against baseline deep networks on each category in the CAER-S benchmark, which illustrates that CAER-Net can learn more discriminative representation for this task. In addition, we evaluated CAER-Net with a baseline 3D CNNs based approach in Table 5. Compared to C3D [49], our CAER-Net has shown the state-of-the-art performance on the CAER benchmark. Finally, Fig. 10 shows the qualitative results with learned attention maps obtained by CAM [34] with \ufb01ne-tuned VGGNet and in context encoding stream of CAER-Net-S. Note that images in Fig. 10 were correctly classi\ufb01ed to groundtruth emotion categories both with \ufb01ne-tuned VGGNet and \f(a) \u201cDisgust\u201d (b) \u201cFear\u201d (c) \u201cSurprise\u201d (d) \u201cSad\u201d (e) \u201cHappy\u201d (f) \u201cFear\u201d Figure 10. Visualization of learned attention maps in CAER-Net-S: (from top to bottom) inputs, attention maps of CAM [34], inputs of context encoding stream, attention maps in context encoding stream. Note that red color indicates attentive regions and blue color indicates suppressed regions. Best viewed in color. Methods Acc. (%) Sports-1M-C3D [49] 66.38 Fine-tuned C3D [49] 71.02 CAER-Net 77.04 Table 5. Quantitative evaluation of CAER-Net in comparison to C3D [49] on the CAER benchmark . CAER-Net-S. Unlike CAM [34] that only considers facial expressions, the attention mechanism in CAER-Net-S localizes context information well that can boost the emotion recognition performance in a context-aware manner. 5.4. Results on the AFEW dataset We conducted an additional experiment to verify the effectiveness of the CAER dataset compared to the AFEW dataset [9]. When we trained CAER-Net on the combination of CAER and AFEW datasets, the highly improvement was attained. It demonstrates that CAER dataset could be complement data distribution of the AFEW dataset. It should be noted that Fan et al. [40] has shown the better performance, they are formulated the networks with the ensemble of various networks to maximize the performance in EmotiW challenge. Unlike this, we focused on investigating how context information helps to improve the emotion recognition performance. 
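For completeness, attention maps such as those in Fig. 10 can be inspected by upsampling them to the input resolution and overlaying them on the frame; the helper below is our own visualization utility, not part of CAER-Net.

```python
# Simple overlay of a spatial attention map on an input frame (our utility, not the authors' code).
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

def show_attention(frame: torch.Tensor, attention: torch.Tensor):
    """frame: (3, H, W) in [0, 1]; attention: (h, w) map from the context encoding stream."""
    h, w = frame.shape[1:]
    att = F.interpolate(attention[None, None], size=(h, w), mode='bilinear',
                        align_corners=False)[0, 0]
    att = (att - att.min()) / (att.max() - att.min() + 1e-8)   # normalize for display
    plt.imshow(frame.permute(1, 2, 0).cpu().numpy())
    plt.imshow(att.cpu().numpy(), cmap='jet', alpha=0.5)       # red = attended regions
    plt.axis('off')
    plt.show()
```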
For this purpose, we choice shallow architecture rather than Fan et al. [40]. If the face encoding stream adopt more complicated networks such Methods Training data Acc. (%) VielZeuf et al. [52] w/F FER+AFEW 48.60 Fan et al. [19] w/F FER+AFEW 48.30 Hu et al. [53] w/F AFEW 42.55 Fan et al. [40] w/F FER+AFEW 57.43 CAER-Net w/F AFEW 41.86 CAER-Net CAER 38.65 CAER-Net AFEW 43.12 CAER-Net CAER+AFEW 51.68 Table 6. Quantitative evaluation of CAER-Net on the AFEW [9] benchmark, as varying training datasets. Fan et al. [40], the performance of CAER-Net also will be highly boosted. We reserve this as further works. 6." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file