| text (large_string, lengths 252–2.37k) | length (uint32, 252–2.37k) | arxiv_id (large_string, lengths 9–16) | text_id (int64, 36.7k–21.8M) | year (int64, 1.99k–2.02k) | month (int64, 1–12) | day (int64, 1–31) | astro (bool, 2 classes) | hep (bool, 2 classes) | num_planck_labels (int64, 1–11) | planck_labels (large_string, 66 values) |
|---|---|---|---|---|---|---|---|---|---|---|
In principle, all known and unknown fundamental matter fields would contribute to $\langle\rho\rangle$. The dominant contribution to $\langle\rho\rangle$ comes from the quantum zero-point energies of these fundamental fields. Without knowledge of all fundamental fields, it is impossible to determine the exact value of $\langle\rho\rangle$. However, standard effective field theory arguments predict that, in general, $\langle\rho\rangle$ takes the form FORMULA if we trust our theory up to a certain high-energy cutoff $\Lambda$. This result could have been guessed by dimensional analysis, and the numerical constants that have been neglected depend on the precise knowledge of the fundamental fields under consideration [CIT]. The exact value of the cutoff $\Lambda$ is also not known. If it is taken to be the Planck energy, i.e. $\Lambda=1$, we would have FORMULA where Planck units have been used for convenience.
| | 932 | 1904.08599 | 16705965 | 2019 | 4 | 18 | false | true | 2 | UNITS, UNITS |
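As a worked illustration of the dimensional-analysis claim in the record above (the record's FORMULA placeholders are elided, so this is the textbook zero-point-energy integral for a single massless field with a hard cutoff $\Lambda$, not necessarily the paper's exact expression):

$$\langle\rho\rangle \;\simeq\; \int^{\Lambda}\!\frac{d^{3}k}{(2\pi)^{3}}\,\frac{k}{2} \;=\; \frac{\Lambda^{4}}{16\pi^{2}}\,,$$

so taking $\Lambda=1$ in Planck units gives $\langle\rho\rangle=\mathcal{O}(10^{-2})$, roughly $10^{120}$ times the observed vacuum energy density of order $10^{-120}$ in reduced Planck units.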
iii) In the old scenario, $\Lambda$ had to be taken to the super-Planck scale, and the oscillation scale of the gravity field would then lie at a super-super-Planck scale. However, general relativity is generally expected to break down at or above the Planck scale, and QFT may break down even earlier.
| | 284 | 1904.08599 | 16706053 | 2019 | 4 | 18 | false | true | 3 | UNITS, UNITS, UNITS |
We thank Alireza Molaeinezhad for the help with stacking spectra and Jesus Falcón-Barroso for providing us with his pixel-by-pixel smoothing code. We also thank the anonymous referee for a constructive report that helped us to improve our manuscript. EE and AV acknowledge support from grant AYA2016-77237-C3-1-P from the Spanish Ministry of Economy and Competitiveness (MINECO). This paper is based on data retrieved from the Sloan Digital Sky Survey archives (<http://classic.sdss.org/collaboration/credits.html>). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England.
| | 856 | 1904.11493 | 16717692 | 2019 | 4 | 22 | true | false | 1 | MPS |
As loop quantum gravity is based on polymer quantization, we will argue that the polymer length (like the string length) can be several orders of magnitude larger than the Planck length, and that this can have low-energy consequences. We will demonstrate that a short-distance modification of a quantum system by polymer quantization and by string-theoretical considerations can produce similar behavior. Moreover, it will be demonstrated that a family of different deformed Heisenberg algebras can produce similar low-energy effects. We will analyze such polymer corrections to a degenerate Fermi gas in a harmonic trap, and its polymer-corrected thermodynamics.
| | 646 | 1904.10455 | 16717761 | 2019 | 4 | 23 | false | true | 1 | UNITS |
One can then reconsider our estimates from §[4.1] for this high-$z$ only $\hat \kappa'$ map, which should provide the relevant signal-to-noise for $\sigma_8(z)$ at $z>2$. As we consider $L>50$, we ignore the internal Planck $\kappa$ estimator that is noise dominated beyond this $L$. The dashed lines of Fig. REF show these revised signal-to-noise estimates when including this spectroscopic low-$z$ delensing. We can see that the greatest returns are for CMB-S4 given the much reduced reconstruction noise, achieving a $\simeq 30$% increase in the significance. With an increased map RMS, and therefore reconstruction noise, this boost is suppressed for both SO and Advanced ACT; a fact compounded by the inability of AdvACT and SO to reach a map RMS where a maximum likelihood lensing estimate outperforms the simple quadratic estimator.
| | 839 | 1904.13378 | 16743587 | 2019 | 4 | 30 | true | false | 1 | MISSION |
Multipole vectors and pseudoentropies provide powerful tools for a numerically fast and vivid investigation of possible signs of statistical anisotropy or non-Gaussianity in CMB temperature fluctuations. After reviewing and linking these two concepts, we compare their application to data analysis using the Planck 2015 NILC full-sky map.
| | 348 | 1905.01176 | 16755134 | 2019 | 5 | 3 | true | true | 1 | MISSION |
The Planck Collaboration made its final data release in 2018. In this paper we describe beam-deconvolution map products made from Planck LFI data using the artDeco deconvolution code to symmetrize the effective beam. The deconvolution results are auxiliary data products, available through the Planck Legacy Archive. Analysis of these deconvolved survey difference maps reveals signs of residual signal in the 30-GHz and 44-GHz frequency channels. We produce low-resolution maps and corresponding noise covariance matrices (NCVMs). The NCVMs agree reasonably well with the half-ring noise estimates except for 44 GHz, where we observe an asymmetry between $EE$ and $BB$ noise spectra, possibly a sign of further unresolved systematics.
| | 735 | 1905.05440 | 16788236 | 2019 | 5 | 14 | true | false | 3 | MISSION, MISSION, MISSION |
REF(#fig:planck_figure){reference-type="autoref" reference="fig:planck_figure"} shows constraints on the first seven parameters of the Planck chain in blue. The entire distribution has 27 free parameters; however, all parameters past the first six are nuisance parameters. We checked that our reconstruction successfully recovered those 20 nuisance parameters, but present only the cosmologically interesting parameters here.
| | 424 | 1905.09299 | 16819684 | 2019 | 5 | 22 | true | false | 1 | MISSION |
[^8]: For the power spectrum, what matters most is the mass of the most massive neutrino. This is not so different for $M_\nu=0.06$ and $0.12\,\mathrm{eV}$ (assuming the normal hierarchy), so the neutrino power spectra will also be similar, though the normalization of the total power spectrum changes (and for the Planck 2015 cosmology there are additional parameter changes).
| | 376 | 1906.00968 | 16853983 | 2019 | 6 | 3 | true | false | 1 | MISSION |
The tree-level neutrino mass matrix originating from a type I seesaw mechanism can receive several corrections due to Planck-suppressed operators. Such corrections can arise either in the light neutrino mass matrix directly via the Weinberg operator, in the Dirac neutrino mass matrix, or in the heavy right-handed neutrino mass matrix, of which the first is negligible compared to the latter two. To illustrate the role of such corrections in a simple manner, we take only the corrections to the Dirac neutrino mass matrix and show that the corrections from Planck-suppressed operators can generate the necessary deviations from TBM mixing, leading to a non-zero value of $\theta_{13}$ in agreement with observations. Owing to the specific flavor structure of the model, we have specific correlations among the mixing angles appearing in the lepton mixing matrix. Such correlations can be tested in future neutrino oscillation experiments like DUNE, T2HK etc. [CIT]. However, a detailed study in this direction is beyond the scope of the present study. We also outline the super-WIMP dark matter phenomenology by considering three distinct scenarios: (i) $\eta$ decays to SM particles as well as $\psi$, and $\psi$ is kinematically long lived; (ii) $\eta$ decays to $\psi$ and a SM neutrino while $\psi$ is perfectly stable; (iii) $\eta$ decays into a pair of $\psi$ while $\psi$ is perfectly stable. Of these, the first scenario corresponds to the model which we have discussed in our work, while the latter two scenarios can be realised if the discrete $Z_4$ symmetry in the dark sector is uplifted to a gauge symmetry which does not get broken by gravity effects. While we do not discuss such a UV-complete gauge-symmetric realisation of the $Z_4$ symmetry, we outline the interesting differences for super-WIMP phenomenology. The analysis for the neutrino sector remains the same in all three DM scenarios, however.
| | 1898 | 1906.02756 | 16869653 | 2019 | 6 | 6 | true | true | 2 | UNITS, UNITS |
Incomplete relaxation in the past is one means by which nonequilibrium could exist in the inflationary era. Another possibility is that nonequilibrium is *generated* during the inflationary phase by exotic gravitational effects at the Planck scale (ref. [CIT], section IVB). Trans-Planckian modes -- that is, modes that originally had sub-Planckian physical wavelengths -- may well contribute to the observable part of the inflationary spectrum [CIT], in which case inflation provides an empirical window onto physics at the Planck scale [CIT]. It has been suggested that quantum equilibrium might be gravitationally unstable [CIT]. In quantum field theory the existence of an equilibrium state arguably requires a background spacetime that is globally hyperbolic, in which case nonequilibrium could be generated by the formation and evaporation of a black hole (a proposal that is also motivated by the controversial question of information loss) [CIT]. A heuristic picture of the formation and evaporation of microscopic black holes then suggests that quantum nonequilibrium will be generated at the Planck length $l_{\mathrm{P}}$. Such a process could be modelled in terms of nonequilibrium field modes. Thus, a mode that begins with a physical wavelength $\lambda_{\mathrm{phys}}<l_{\mathrm{P}}$ in the early inflationary era may be assumed to be out of equilibrium upon exiting the Planckian regime (that is, when $\lambda_{\mathrm{phys}}>l_{\mathrm{P}}$) [CIT]. If such processes exist, the inflaton field will carry quantum nonequilibrium at *short* wavelengths (below some comoving cutoff).
| | 1598 | 1906.03670 | 16876768 | 2019 | 6 | 9 | true | true | 3 | UNITS, UNITS, UNITS |
Following [CIT] :twodust [see also [CIT] :pressure], we modelled the thermal dust as a double grey body, by assuming two populations of dust grains instead of the idealised case of a single grey-body spectrum. Indeed, the latter provides an accurate representation of the thermal emission from Galactic dust only at frequencies higher than 353 GHz. The spectral energy density we set for this component is therefore FORMULA where the dimensionless constant factors $f_1$ and $q_1/q_2$ refer to the relative contribution from the coldest component at temperature $T_1$ and the hottest component at temperature $T_2$. The $\beta_1$ and $\beta_2$ parameters give the slopes of the two different power laws, while $B(\nu; T_1)$ and $B(\nu; T_2)$ are the corresponding Planck functions describing the black-body spectra. In order to get the best-fit parameters of the model described by Eqn. REF(#eqn:meisnergb){reference-type="eqref" reference="eqn:meisnergb"}, we calculated an independent fit to the dust component through Markov chain Monte Carlo sampling. To treat only the signal from the dust, we limited this fit to the pixels in the frequency maps that are located sufficiently far from the cluster, at radial distances from the centre larger than $5 R_{500}$. The only spatially variable parameter is the temperature $T_2$, which is fixed a priori to the value determined by a joint fit to *IRAS* and Planck data, as detailed in [CIT] :twodust. From this fit we obtain maps of the dust component at all frequencies, which we plug into the model maps of Eqn. REF(#eqn:model){reference-type="eqref" reference="eqn:model"}.
| | 1626 | 1906.10013 | 16923060 | 2019 | 6 | 24 | true | false | 2 | LAW, MISSION |
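A minimal numerical sketch of a double grey-body dust model of the kind described in the record above. Since that record's FORMULA is elided, the functional form below, $S(\nu)\propto f_1\,(q_1/q_2)\,\nu^{\beta_1}B(\nu,T_1) + (1-f_1)\,\nu^{\beta_2}B(\nu,T_2)$, and every parameter value are assumptions made for illustration, not the paper's fit:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def planck_bnu(nu, T):
    """Planck function B(nu, T) in SI units [W m^-2 Hz^-1 sr^-1]."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def two_greybody(nu, f1, q1_over_q2, beta1, T1, beta2, T2):
    """Double grey-body dust SED (arbitrary normalization):
    a cold component (beta1, T1) plus a hot component (beta2, T2)."""
    cold = f1 * q1_over_q2 * nu**beta1 * planck_bnu(nu, T1)
    hot = (1.0 - f1) * nu**beta2 * planck_bnu(nu, T2)
    return cold + hot

# Illustrative evaluation at Planck HFI-like bands (GHz -> Hz)
nu = np.array([100e9, 143e9, 217e9, 353e9, 545e9, 857e9])
sed = two_greybody(nu, f1=0.25, q1_over_q2=8.0, beta1=1.6, T1=9.8, beta2=2.8, T2=15.7)
print(sed / sed[3])  # normalized to the 353 GHz band
```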
We have some understanding of dust physics and its relationship to polarization modes. The amplitude and orientation of the dust signal is set by the integrated column density and magnetic field orientation. For $E$ to have more power than $B$ qualitatively means that density fluctuations (structures in the ISM density field) must prefer orientations parallel or perpendicular to the local magnetic field [CIT]. This picture is borne out by measurements of the magnetic field orientation in individual, bright, filamentary structures in the Planck 353 GHz data [CIT]. This is further validated by the observations that linear structures in neutral hydrogen emission, highlighted by a Rolling Hough Transformation, also correlate with the magnetic field direction indicated by Planck dust polarization [CIT].
| | 809 | 1906.10052 | 16923354 | 2019 | 6 | 24 | true | false | 2 | MISSION, MISSION |
We show in Fig. REF the predicted cosmological observables for these $\alpha$-Starobinsky potentials in the $(n_s, r)$ plane, together with the results of the Planck collaboration combined with other CMB data, indicated by blue shadings corresponding to the 68% and 95% confidence-level regions [CIT] [^1]. As the curvature parameter $\alpha$ increases, the value of the scalar tilt $n_s$ changes only slightly and stays within the range $\sim 0.96-0.97$, while the tensor-to-scalar ratio $r$ increases with the value of $\alpha$. The CMB data set a 68% upper bound on the tensor-to-scalar ratio of $r \sim 0.055$, which is attained for $\alpha \sim 51$ when $n_s \sim 0.967$ for a nominal choice of $N_* \approx 55$, as indicated by the blue star. The green dots and line at small $r$ show the prediction of the original Starobinsky model, corresponding to the case $\alpha = 1$. It is apparent that future measurements of $r$ will be able to constrain $\alpha$ more significantly, and that more precise measurements of $n_s$ could in principle constrain $N_*$, and hence the post-inflationary history of the Universe, which is sensitive to the decay of the inflaton into low-mass particles [CIT].
| | 1211 | 1906.10176 | 16924985 | 2019 | 6 | 24 | true | true | 1 | MISSION |
In order to investigate in detail the internal properties of halos and their correlations with the large-scale tidal field, we use 1000 N-body simulations, a subset of the Quijote[^1] suite [CIT]. The subset we use consists of the $z=0$ snapshots of the dark-matter-only simulations run using the TreePM+SPH code Gadget-III [CIT] in a periodic box of size $1\,h^{-1}{\rm Gpc}$ with $512^3$ particles. The mass of a single particle is $M_{\rm p}=6.57\times10^{11}\,h^{-1} M_\odot$. All simulations were run using the following values of the cosmological parameters: $\Omega_{\rm m} = 0.3175$, $\Omega_{\rm b} = 0.049$, $\Omega_\Lambda = 0.6825$, $n_{\rm s} = 0.9624$ and $h = 0.6711$, which are in good agreement with the constraints from Planck [CIT]. We use the Friends-of-Friends (FoF) algorithm [CIT] to identify halos in both real and redshift space, using a linking length $b_{ll}=0.2$ and different minimum numbers of particles per halo $n_{\rm min}$.
| | 950 | 1906.11823 | 16937851 | 2019 | 6 | 27 | true | false | 1 | MISSION |
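A quick cross-check of the particle mass quoted in the record above (a sketch assuming the standard relation $M_{\rm p}=\Omega_{\rm m}\,\rho_{\rm crit,0}\,L^3/N_{\rm part}$, with $\rho_{\rm crit,0}\approx 2.775\times10^{11}\,h^2 M_\odot\,{\rm Mpc}^{-3}$):

```python
# Particle mass of a dark-matter-only box: M_p = Omega_m * rho_crit * L^3 / N
RHO_CRIT = 2.775e11      # critical density today [h^2 Msun / Mpc^3]
OMEGA_M = 0.3175         # matter density parameter (Quijote fiducial)
L_BOX = 1000.0           # box side [Mpc/h]
N_PART = 512**3          # number of particles

m_p = OMEGA_M * RHO_CRIT * L_BOX**3 / N_PART   # [Msun/h]
print(f"M_p = {m_p:.3e} Msun/h")               # ~6.56e11, matching the quoted 6.57e11
```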
A significant tension has become manifest between the current expansion rate of our Universe measured from the cosmic microwave background by the Planck satellite and from local distance probes, which has prompted interpretations of the discrepancy as evidence of new physics. Within conventional cosmology, a likely source of this discrepancy is identified here as a matter density fluctuation around the cosmic average in the 40 Mpc environment in which the calibration of Type Ia supernova separations with Cepheids and nearby absolute distance anchors is performed. Inhomogeneities on this scale easily reach 40% and more. In that context, the discrepant expansion rates serve as evidence of residing in an underdense region of $\delta_{\rm env}\approx-0.5\pm0.1$. The probability of finding this local expansion rate given the Planck data lies at the 95% confidence level. Likewise, a hypothetical equivalent local data set with mean expansion rate equal to that of Planck, while statistically favoured, would not gain strong preference over the actual data in the respective Bayes factor. These results therefore suggest borderline consistency between the local and Planck measurements of the Hubble constant. Generally accounting for the environmental uncertainty, the local measurement may be reinterpreted as a constraint on the cosmological Hubble constant of $H_0=74.7^{+5.8}_{-4.2}$ km/s/Mpc. The current simplified analysis may be augmented with the employment of the full available data sets; an impact study of the immediate $\lesssim10$ Mpc environment of the distance anchors, which is more prone to inhomogeneities; as well as expansion rates measured by quasar lensing, gravitational waves (currently limited to the same 40 Mpc region), and local galaxy distributions.
| | 1772 | 1906.12347 | 16940220 | 2019 | 6 | 28 | true | false | 4 | MISSION, MISSION, MISSION, MISSION |
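A back-of-the-envelope sketch of the mechanism invoked in the record above, assuming the linear-theory relation $\delta H/H \simeq -f(\Omega_{\rm m})\,\delta_{\rm env}/3$ (this particular relation is an illustrative assumption; the paper's own modelling is more involved):

```python
# How a local underdensity raises the locally measured expansion rate
OMEGA_M = 0.3
H0_CMB = 67.4                      # Planck-like background value [km/s/Mpc]
delta_env = -0.5                   # local density contrast quoted above

f = OMEGA_M**0.55                  # linear growth rate approximation
dH_over_H = -f * delta_env / 3.0   # linear-theory perturbation to H
print(f"local H0 ~ {H0_CMB * (1 + dH_over_H):.1f} km/s/Mpc")  # ~73, toward SH0ES-like values
```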
From the $\kappa$-dependence of the metric perturbation in (REF), it is not difficult to convince oneself that $d_L^\textsc{gw}$ is proportional to a power of the effective Planck mass appearing in front of the graviton kinetic term when expanding the action to $O(h^2)$ around a FLRW background, FORMULA In GR, $M^{2-D}_\textsc{gw}=\kappa^2=8\pi G=M_{\rm Pl}^{2-D}$ is the reduced Planck mass, but in other theories (starting with purely classical scalar-tensor and $f(R)$ models) it happens that $M_\textsc{gw}\neq M_{\rm Pl}$ and that $M_\textsc{gw}(t)$ is a function of the cosmological background and acquires a non-trivial time dependence. By the same token, we can recast the parameter $\Xi$ expressing the ratio of luminosities as a function of the effective mass $M_{\rm eff}$. Thus, in some classical models FORMULA This expression is valid only under the assumption that all the corrections to the dynamics can be encoded at sufficiently large scales in an effective Newton constant. In general, this happens when the comoving number density of gravitons is conserved [CIT]. However, there are cases where (REF) may not hold, as in higher-dimensional braneworld models or in models where the graviton is unstable [CIT]. We will see later that also in quantum gravity the left- and right-hand sides of (REF) may be different quantities, the reason being that volumes (including also comoving volumes) do not scale as expected in certain regimes. Therefore, if one is interested in placing constraints on effective Planck masses, it is useful to consider also other parameters aside from ratios of luminosity distances.
| | 1628 | 1907.02489 | 16959619 | 2019 | 7 | 4 | true | true | 3 | UNITS, UNITS, UNITS |
Even though the radiative losses are extremely high compared to the non-LTE case (see below, the last paragraph of this section), there is no precursor region. We can invoke two reasons for this: first, the emitting region is quite small, so the radiation energy emitted per unit time is not enough to heat up the unshocked plasma; second, according to Fig. REF (mid-right panel), the Planck opacity, $k_P$, in the LTE regime is smaller by several orders of magnitude compared to that in the non-LTE case. Therefore, matter absorbs far less radiation in LTE than in non-LTE (a reminder: the gain of radiation energy by matter is $G = k_P\,\rho\,c\,E$).
| | 648 | 1907.04591 | 16977622 | 2019 | 7 | 10 | true | false | 1 | OPACITY |
We also provide a graphical representation of the comparison results in Fig. REF, which directly shows the depletion factor parameters obtained in this analysis and in previous works (see Table I for details). The red circle and square denote the best-fitting gas depletion factor within the cluster radius $R_{500}$ for the whole ACTPol sample and the reduced ACTPol sample, respectively. The grey dashed and white netted regions show the hydrodynamical simulation results ($1\sigma$ uncertainties) at different cluster radii ($R_{500}$ and $R_{2500}$), while the triangles with different orientations represent the best-fitting gas depletion factor parameters within $R_{2500}$ for the [CIT] $f_{gas}$ sample and different distance indicators (SN Ia observations, SZ effect/X-ray measurements of galaxy clusters, and Planck's best-fitting $\Lambda$CDM cosmology). On the one hand, we find that the $\gamma_0$ value is in full agreement with the simulated results derived within $R_{500}$. On the other hand, although the $\gamma_1$ value in our analysis is compatible with $\gamma_1=0$ within 2$\sigma$, a non-negligible time evolution of the depletion factor is still supported by the current observations. Such a tendency is clearly in tension with the results of cosmological hydrodynamical simulations [CIT], but well consistent with the self-consistent observational constraints obtained exclusively from galaxy cluster data [CIT].
| | 1436 | 1907.06509 | 16992371 | 2019 | 7 | 15 | true | false | 1 | MISSION |
Lastly, it is worth noting that the effects on the statistics examined in this study should not be solely attributed to a running of the scalar spectral index. This is because as we vary the running, the other cosmological parameters change in order to retain consistency with the primary CMB constraints from $Planck$. Of particular relevance for this study is the amplitude of the initial matter power spectrum, $A_s$, which we discuss further in Section [3.1]. As such, when discussing the results from the simulations, we will refer to the changes seen as being a result of a $Planck$-constrained running cosmology, abbreviated as a $\Lambda\alpha_s$CDM cosmology, rather than due to the running parameter alone.
| | 716 | 1907.09497 | 17017203 | 2019 | 7 | 22 | true | false | 2 | MISSION, MISSION |
The Planck galaxy cluster catalog [CIT] contains 4 clusters in this field, all of which are included in both this catalog and the SPT 2500 deg$^2$ catalog. The cluster redshifts agree, and the scatter in mass between these clusters is consistent with the scatter between the clusters in B15 and the Planck cluster catalog. A more detailed comparison of Planck and SPTpol masses is included in [CIT].
| | 393 | 1907.09621 | 17018235 | 2019 | 7 | 22 | true | false | 3 | MISSION, MISSION, MISSION |
Taking $N_e=60$ (for a better fit of $n_s$ to Planck data), we calculate the parameters as follows: $n_s\approx 0.9635$, $r\approx 0.0002$, and $\Lambda\sim 10^{16.8}\,{\rm GeV}$. The form of the scalar potential is given in Fig. REF, where the local maximum $\varphi_+$ and the starting point of inflation $\varphi_i$ are shown.
| | 329 | 1907.10373 | 17024679 | 2019 | 7 | 24 | false | true | 1 | MISSION |
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
| | 937 | 1907.11876 | 17037212 | 2019 | 7 | 27 | true | false | 2 | MPS, MPS |
Apart from the results with the Base dataset, in Table REF we also provide results with the Base+SNe dataset. In the absence of a varying dark energy equation of state, the SNe data are able to constrain $\Omega_m$ effectively [CIT], while the Planck CMB data constrain $\Omega_m h^2$ well. Together they can therefore effectively constrain $H_0$, and as found in [CIT], the Planck+SNe combination actually prefers $H_0$ values which are higher than Planck alone; thus the SNe data can help in partially breaking the degeneracy between $H_0$ and $\sum m_{\nu}$. BAO data are, however, much more efficient in breaking the degeneracy. With Base+SNe (note that Base already contains BAO), in the DH case we find the following 95% bound on the neutrino mass sum in this $\Lambda\textrm{CDM}+\sum m_{\nu}$ model: $\sum m_{\nu}<0.11$ eV, which is only slightly stronger than the bound without the SNe data.
| | 909 | 1907.12598 | 17041879 | 2019 | 7 | 29 | true | true | 3 | MISSION, MISSION, MISSION |
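A toy numerical illustration of the degeneracy-breaking argument in the record above: if CMB-like data pin down $\omega_m\equiv\Omega_m h^2$ and SNe-like data pin down $\Omega_m$, then $h=\sqrt{\omega_m/\Omega_m}$ follows. The Gaussian widths below are invented for illustration and are not the paper's numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
omega_m = rng.normal(0.143, 0.001, 100_000)   # CMB-like constraint on Omega_m h^2
Omega_m = rng.normal(0.30, 0.01, 100_000)     # SNe-like constraint on Omega_m

h = np.sqrt(omega_m / Omega_m)                # implied Hubble parameter
print(f"H0 = {100*h.mean():.1f} +/- {100*h.std():.1f} km/s/Mpc")  # ~69.0 +/- 1.2
```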
Many astrophysical measurements of fundamental importance rely on accurate effective-area calibration of X-ray telescopes such as *Chandra* and *XMM-Newton*. A particularly important case is cosmological constraints based on galaxy clusters. Discovering how the Universe formed and what natural forces control its evolution is one of the most basic astronomical endeavors. The cosmological model, which quantifies the various types of matter (baryonic and dark matter, massive neutrinos, dark energy) that control the geometry and expansion rate of the Universe, can be constrained in several ways. As an example, Fig. REF shows relatively recent constraints on two interesting cosmological parameters, the dark energy density, $\Omega_\Lambda$, and its equation-of-state parameter, $w_0$, derived from the Cosmic Microwave Background (CMB, labeled WMAP in the figure), Type Ia supernovae (SN Ia), Baryonic Acoustic Oscillations (BAO), and clusters of galaxies. The distribution of masses of clusters --- the most massive gravitationally bound objects in the Universe, with masses $\sim 10^{15}\,M_{\odot}$ --- is very sensitive to the cosmological model. Clusters probe cosmology in the low-$z$ universe, while the CMB traces the state of the Universe at its dawn ($z=1000$). The 'clusters' constraints in Fig. REF are derived from *Chandra* X-ray mass measurements (Vikhlinin et al. 2009) and are complementary to other methods. Combinations of all the different methods could provide the most stringent constraints (labeled 'all') if all measures agree --- or indicate the need for new physics if disagreements persist. Indeed, the most recent studies hint at tension between cluster and CMB constraints, and one proposed explanation is unexpectedly massive neutrinos (Planck Collaboration 2016). It is thus vitally important to exclude measurement errors.
| | 1860 | 1907.12677 | 17042455 | 2019 | 7 | 29 | true | false | 1 | MISSION |
Since the time for which the wormhole is open is so small, one might worry that quantum gravity corrections are important and cannot be neglected. This is not the case. The diamond is just a - small - piece of the BTZ geometry; the invariant curvature is given by $\ell^{-2}$ and is well separated from the Planck scale. While passing through the wormhole, a signal would just feel like it is traveling through empty flat spacetime. Nonetheless, we still need to make sure that the signal is localized to a Planck-sized box to be certain that it will make it through the opening. This sounds like a difficult, even dangerous, task. In this case we don't need to worry about this issue because the mouth of the wormhole is located close to the horizon of a black hole. The gravitational blueshift makes sure that an ordinary message at infinity is boosted enough by the time it reaches the mouth of the wormhole to fit in such a Planck-sized box. We just need to send the message from the boundary early enough. The same gravitational effect guarantees that we don't need to fine-tune the moment we send the message from the boundary to Planck-time precision, because an asymptotic observer sees the window open for an exponentially longer time. We conclude that, despite the smallness of the opening, it is kinematically possible to send a message through the wormhole.[^4]
| | 1371 | 1907.13140 | 17046591 | 2019 | 7 | 30 | false | true | 4 | UNITS, UNITS, UNITS, UNITS |
We present three non-parametric Bayesian primordial reconstructions using Planck 2018 polarization data: linear spline primordial power spectrum reconstructions, cubic spline inflationary potential reconstructions and sharp-featured primordial power spectrum reconstructions. All three methods conditionally show hints of an oscillatory feature in the primordial power spectrum in the multipole range $\ell\sim20$ to $\ell\sim50$, which is to some extent preserved upon marginalization. We find no evidence for deviations from a pure power law across a broad observable window ($50\lesssim\ell\lesssim2000$), but find that parameterizations are preferred which are able to account for lack of resolution at large angular scales due to cosmic variance, and at small angular scales due to Planck instrument noise. Furthermore, the late-time cosmological parameters are unperturbed by these extensions to the primordial power spectrum. This work is intended to provide a background and give more details of the Bayesian primordial reconstruction work found in the Planck 2018 papers.
| | 1080 | 1908.00906 | 17055941 | 2019 | 8 | 2 | true | false | 3 | MISSION, MISSION, MISSION |
The evolution of the system due to soft scattering is a competition between the emission and absorption rates,
$$\partial_t f_{\mathbf{p}} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f_{\mathbf{p}} = \int d^3q\;\Gamma(\mathbf{q})\left[f_{\mathbf{p}-\mathbf{q}}\,(1+f_{\mathbf{p}}) - f_{\mathbf{p}}\,(1+f_{\mathbf{p}-\mathbf{q}})\right].$$
We will now generally assume that the distribution is isotropic, which simplifies the analysis of momentum diffusion. Expanding in powers of the momentum transfer $\mathbf{q}$ (which is small compared to the momentum $\mathbf{p}$ of the hard particle), we see that the contribution of small-angle elastic processes to the Boltzmann equation (REF) takes the form of a Fokker-Planck equation,
$$\mathcal{C}_{\rm diff}[f(\mathbf{p})] = \frac{\partial}{\partial p^i}\left[\eta^i(\mathbf{p})\, f_{\mathbf{p}}(1+f_{\mathbf{p}})\right] + \frac{1}{2}\frac{\partial^2}{\partial p^i\,\partial p^j}\left[q^{ij}(\mathbf{p})\, f_{\mathbf{p}}\right],$$
where the drag and diffusion coefficients are given by FORMULA Specifically, for isotropic systems these coefficients can be decomposed as
$$\eta^i(\mathbf{p}) = \eta\,\hat{p}^i\,, \qquad q^{ij}(\mathbf{p}) = \hat{q}_L\,\hat{p}^i \hat{p}^j + \hat{q}\left(\delta^{ij} - \hat{p}^i \hat{p}^j\right),$$
and the scalar coefficients $\eta,\hat{q}_{L},\hat{q}$ can be evaluated as (see [CIT] for a review) FORMULA Similarly, the elastic scattering rate for kicks transverse to the direction of the particle can also be evaluated in closed form, yielding
$$(2\pi)^2\,\frac{d\Gamma_{\rm el}}{d^2\mathbf{q}_\perp} = g^2 C_A T^{*}\left(\frac{1}{\mathbf{q}_\perp^2} - \frac{1}{\mathbf{q}_\perp^2 + m_D^2}\right).$$
Although the Fokker-Planck coefficients in (REF) depend on the cutoff scale $\mu_\perp$, the time evolution of the system is independent of $\mu_\perp$ when both the hard collisions and the Fokker-Planck evolution are taken into account [CIT]. We finally note that from (REF) and (REF), the elastic scattering rate is of order
| | 1471 | 1908.02113 | 17066347 | 2019 | 8 | 6 | false | true | 3 | FOKKER, FOKKER, FOKKER |
We consider the quintom action with two scalar fields and an interaction between matter fields and one of the scalar fields, as follows: FORMULA where $\kappa^2 = 8 \pi G$ is the inverse of the reduced Planck mass squared, $R$ is the Ricci scalar, $\phi$ is a quintessence scalar field, $\sigma$ is a "phantom" scalar field, and $\psi_M$ is a matter field. We assume that there is only one self-interacting potential of the quintessence scalar field, while the "phantom" scalar field rolls on an effective potential arising from the phantom-matter interaction, as we explain below. Strictly speaking, this $\sigma$ is not exactly the standard phantom but rather a ghost field, since its equation of state is $P_{\sigma}=\rho_{\sigma}<0$. However, here and henceforth we will simply call it the phantom field for convenience, and also in accordance with the original name quintom (quintessence $+$ phantom). The extra component $X$ is identified with the phantom field $\sigma$ in this model. Note that since the quintom model has a negative kinetic energy term in the Lagrangian, the phantom scalar field encounters a quantum instability problem of its own. In our model, however, the *total* energy density of all components in the universe is always positive, and the evolution of the universe always obeys the positive energy condition.
| | 1333 | 1908.03324 | 17077756 | 2019 | 8 | 9 | true | true | 1 | UNITS |
If this picture is correct, then the evolution of the black hole collision and merger will result in a horizon pinch, which then quickly evaporates through quantum-gravity effects (just a little 'pixie dust') and yields two outgoing black holes. The loss of classical predictivity is very small: the horizon bifurcates with a variation of the horizon area (increase or decrease) that is only of Planckian size, and the uncertainty in the outgoing scattering angle will be proportional to at most a power of $(M_\textrm{Planck}/M)$, where $M$ is the total mass of the system. Hence, the indeterminacy is parametrically very small for any macroscopic initial mass. Predictivity of the entire evolution using General Relativity will be maintained to great accuracy. Except for the details of the break-up, the picture we have presented in figure REF will then be essentially correct.
| | 881 | 1908.03424 | 17078581 | 2019 | 8 | 9 | false | true | 1 | UNITS |
The ATLAS and CMS Collaborations recently released the 80--137 ${\rm fb}^{-1}$ results from the LHC Run 2 data recorded in 2017--18, highlighted by no significant deviation beyond the expected Standard Model background [CIT]. The dearth of any positive signal of supersymmetry (SUSY) has elevated the lower bound on the gluino mass to 2.25 TeV in the ATLAS and CMS $\widetilde{g} \to t \bar{t} + \widetilde{\chi}_1^0$ simplified-model scenarios [CIT]. The advancement of gluino limits above 2 TeV further strains the SUSY model space, providing impetus for phenomenologists to build SUSY models that support a heavy gluino yet remain consistent with the measured light Higgs boson mass of $M_h = 125.1 \pm 0.14$ GeV [CIT], the WMAP 9-year [CIT] plus 2018 Planck [CIT] observed relic density of the dark matter content of our universe, $\Omega_{DM} h^2 \simeq 0.12$, and the world-average top quark mass of $M_t = 173.1 \pm 0.9$ GeV [CIT]. In this age of multi-TeV gluino exclusion limits, satisfying all these empirically validated quantities simultaneously becomes increasingly difficult. Despite the aforementioned hurdles, we shall present here an intriguing case for a natural SUSY model with no electroweak fine-tuning that does indeed meet the experimental requirements just noted and can also generate a heavy gluino that would not as yet have been produced at the LHC Run 2 in sufficient quantities for detection.
| | 1462 | 1908.06149 | 17100848 | 2019 | 8 | 16 | false | true | 1 | MISSION |
The numbers for the various parameters in these potentials can be extracted from experiment. Those which we obtain in the real world have some contamination from the pion mass. But for our purposes, we can use these numbers. The scientists in this imaginary world will have discovered a new scale - the QCD scale. Numerically these potentials become FORMULA with FORMULA Similarly FORMULA The QCD scale plays a role for these potentials that the Planck scale played for gravity.
| | 478 | 1908.11003 | 17139172 | 2019 | 8 | 29 | false | true | 1 | UNITS |
We have made observations of galaxy clusters detected by the Planck space telescope with the Arcminute Microkelvin Imager (AMI) radio interferometer system, in order to compare the mass estimates obtained from their data. I analysed these data using the physical model described in Section REF, largely following the data analysis method outlined in [CIT].398.2049F. This allowed us to derive physical parameter estimates for each cluster, in particular the total mass out to a given radius. I have also calculated two mass estimates for each cluster from Planck's PowellSnakes detection algorithm [CIT] data following [CIT] (PSZ2), and found the following.
| | 652 | 1909.00029 | 17147546 | 2019 | 8 | 30 | true | false | 2 | MISSION, MISSION |
We estimated the mass of warm H~2~ from the flux of the H~2~ 1--0 S(1) emission line, $F_{1-0,\rm S(1)}$, using FORMULA where $A_{1-0,\rm S(1)} = 3.47\times10^{-7}\,\rm s^{-1}$ is the spontaneous emission coefficient [CIT], $f_{\nu=1,J=3}(T)$ is the number fraction of H~2~ molecules in the $\nu = 1$ vibrational state and $J = 3$ rotational state at temperature $T$, and $h$ and $c$ are the Planck constant and the speed of light, respectively. In LTE, the number fraction of molecules in a rovibrational state with energy $E_j$ and degeneracy $g_j$ is described by the Boltzmann distribution FORMULA where $k$ is the Boltzmann constant and $Z_{\rm vr}(T) = \sum_{i} g_i e^{-E_{i} / kT}$ is the partition function, which we computed using the molecular data of [CIT]. At a temperature of $5000\,\rm K$, consistent with our excitation diagram (Fig. REF), $f_{\nu=1,J=3}(T) = 0.0210$. Using this value in Eqn. REF yields $M_{\rm H_2} (5000\,\rm K) = 4400 \pm 70\,\rm M_\odot$.
| | 968 | 1909.00144 | 17148269 | 2019 | 8 | 31 | true | false | 1 | CONSTANT |
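To make the LTE bookkeeping in the record above concrete, here is a minimal sketch of the Boltzmann-fraction calculation. The handful of levels below (degeneracy $g_j$, energy $E_j/k$ in K) is an illustrative truncation of the H~2~ $v=0$ ladder; the paper's value $f_{\nu=1,J=3}(5000\,\mathrm{K})=0.0210$ requires summing over all bound rovibrational levels, and the truncated sum below deliberately shows how an incomplete partition function inflates $f$:

```python
import numpy as np

def level_fraction(g_j, E_j, T, levels):
    """LTE fraction of molecules in one level: f = g_j exp(-E_j/T) / Z(T).
    Energies are expressed as E/k in kelvin; Z sums over `levels` only."""
    g = np.array([lv[0] for lv in levels], dtype=float)
    E = np.array([lv[1] for lv in levels], dtype=float)
    Z = np.sum(g * np.exp(-E / T))
    return g_j * np.exp(-E_j / T) / Z

# Truncated H2 level list (g_j, E_j/k [K]); real calculations use full molecular data.
levels = [(1, 0.0), (9, 170.5), (5, 509.9), (21, 1015.1), (9, 1681.7), (33, 2503.9)]

# Upper level of 1-0 S(1): v=1, J=3 (ortho, g = 3*(2J+1) = 21, E/k ~ 6956 K)
f = level_fraction(21, 6956.0, 5000.0, levels)
print(f"truncated-Z estimate: {f:.3f}  (full-Z value in the text: 0.0210)")
```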
All-sky maps at frequencies $>300$ GHz delivered by past space-mission surveys like ESA's Planck have forged an important legacy for the Astronomy community for decades. They have been of major importance for most ground-based CMB experiments in characterising residual foreground contamination in their data and in obtaining evidence for false detections. As a historical example, the Planck 353-GHz map has served as an exquisite tracer of the Galactic thermal dust contamination in the CMB $B$-mode data of the ground-based experiment BICEP2, providing evidence for a *false* detection of the primordial gravitational-wave signal from inflation by BICEP2 [CIT]. Dust and CIB foregrounds are extremely challenging to characterise at frequencies $<270$ GHz by ground-based CMB surveys, which often have to rely on the extrapolation of the high-frequency templates provided by space-mission surveys.

**A new era of faint signal-to-foreground regimes:** The next decades will be dedicated to the search for ever fainter cosmological signals, e.g., kSZ, rSZ, pSZ, and CMB-cluster lensing as described in this white paper, for which high-frequency observations at high resolution from Backlight, of much higher precision than Planck, will be of crucial importance to control unavoidable foreground biases. Besides the control of foregrounds, the spectral coverage at high frequencies $>300$ GHz allowed by the space mission is essential to discern the distinct spectral signature of some of the signals presented here, such as rSZ and non-thermal SZ effects. Finally, having an absolute spectrometer on board as an option for absolute calibration [CIT] would be a huge gain for measuring these faint SZ signals without bias.
| | 1708 | 1909.01592 | 17161330 | 2019 | 9 | 4 | true | false | 3 | MISSION, MISSION, MISSION |
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE).
| | 935 | 1909.02042 | 17165970 | 2019 | 9 | 4 | true | false | 3 | MPS, MPS, MPS |
To assess how well FRBs might help to constrain $f_{\mathrm{d}} (z)$, we consider two different parametric models. Firstly, a single fixed constant, FORMULA where $\bar{f}_{\mathrm{d}}$ represents a weighted average of the diffuse gas fraction in the redshift range of interest. And secondly, a two-parameter model given by FORMULA where $f_0$ is the value of $f_{\mathrm{d}}$ today, and $f_a$ is its derivative with respect to the scale factor, $a(t)$. We then fit for FORMULA where $\vec{f}$ contains the parameters associated with the relevant $f_{\mathrm{d}} (z)$ model, namely FORMULA To forecast the combined constraints, FRB+CBSH, we extract the relevant parameter covariance matrix from the chains provided by the Planck 2016 data release, and include it as a prior in the analysis. The log-prior is given by FORMULA where $P(\theta)$ is the prior probability associated with the set of parameter values $\theta$, $\mathbf{C}$ is the covariance matrix, and $\xi = \theta-\theta_{\mathrm{fid}}$ is the displacement in parameter space between the relevant parameter values and the fiducial values.
| | 1107 | 1909.02821 | 17172346 | 2019 | 9 | 6 | true | false | 1 | MISSION |
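A minimal sketch of the Gaussian log-prior described in the record above, assuming (consistent with the text) $\ln P(\theta) = -\tfrac{1}{2}\,\xi^\top \mathbf{C}^{-1}\xi + {\rm const}$ with $\xi=\theta-\theta_{\rm fid}$; the covariance entries below are placeholders rather than values from the Planck chains:

```python
import numpy as np

def gaussian_log_prior(theta, theta_fid, cov):
    """Gaussian log-prior from a parameter covariance matrix
    (normalization constant dropped, as only differences matter in MCMC)."""
    xi = np.asarray(theta) - np.asarray(theta_fid)
    return -0.5 * xi @ np.linalg.solve(cov, xi)

# Placeholder 2-parameter example: (f0, fa) with fiducial (0.85, 0.0)
cov = np.array([[0.02**2, 0.0], [0.0, 0.1**2]])
print(gaussian_log_prior([0.87, -0.05], [0.85, 0.0], cov))  # -0.625
```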
As a proof-of-concept, we stacked Planck data at the positions of galaxies from the MaNGA survey. While the number of galaxies is insufficient to make a rkSZ detection, it can yield an upper limit on the average CMB temperature dipole aligned with galaxies' spin.
| | 263 | 1909.04690 | 17186006 | 2019 | 9 | 10 | true | false | 1 | MISSION |
The explicit calculation [CIT] of the production of massless particles (gravitons) by spacetime curvature [CIT] gives a particular result for the source term: $\mathcal{S} \propto \hbar\,H\,R^2$ with Hubble parameter $H$ and Ricci curvature scalar $R$. Here, we assume a somewhat different functional dependence on the cosmic scale factor $a(t)$, FORMULA with the Ricci curvature scalar $R(t)$ from REF(#eq:R-from-a){reference-type="eqref" reference="eq:R-from-a"}, a positive decay constant $\Gamma$, and a length scale $l_\text{decay}$. In the following, we assume that $l_\text{decay}$ is equal to the Planck length scale, FORMULA with $c$ and $\hbar$ temporarily displayed (the numerical value of the reduced Planck energy is $E_\text{Planck} \approx 2.44 \times 10^{18}\,\text{GeV}$).
| | 786 | 1909.05816 | 17194182 | 2019 | 9 | 12 | false | true | 3 | UNITS, UNITS, UNITS |
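A quick numerical cross-check of the reduced Planck energy quoted above (a sketch using standard constants and $E_{\rm Planck}=\sqrt{\hbar c^5/(8\pi G)}$):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant [J s]
C = 2.99792458e8         # speed of light [m/s]
G = 6.67430e-11          # Newton's constant [m^3 kg^-1 s^-2]
J_PER_GEV = 1.602176634e-10

E_planck_reduced = math.sqrt(HBAR * C**5 / (8 * math.pi * G))  # [J]
print(f"{E_planck_reduced / J_PER_GEV:.3e} GeV")  # ~2.44e18 GeV, as quoted
```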
The mass $m$ of the aikyon is *defined* by $m\equiv\hbar/Lc$; $L$ is hence interpreted as its Compton wavelength. Newton's gravitational constant $G$ is defined by $G\equiv L_P^2 c^3/\hbar$, and the Planck mass $m_P$ by $m_P=\hbar/L_P c$. Mass and spin are both emergent concepts of Level I; at Level 0 the aikyon only has an associated length $L$ - this length is a property of both the gravity aspect and the matter aspect of the STM atom.
| | 461 | 1909.06340 | 17198939 | 2019 | 9 | 13 | false | true | 1 | UNITS |
The authors of this principle argue in Ref. [CIT] that this is rather natural if we consider *extensive* variables constrained by some new physics theory at high energies, as long as the system has a rather strong first-order phase transition. Again we may use the analogy of water and note that slush (in which ice and liquid water coexist) is present for a (relatively) wide range of extensive variables (in this case temperature and pressure) due to the existence of a first-order phase transition. Returning to the Higgs potential, a possible *extensive* quantity could be $\langle |\phi|^2 \rangle$. If this were set by some new physics theory at the Planck scale with a strong first-order phase transition, it would be rather likely to find $\langle |\phi|^2 \rangle\sim M_{\rm Pl}^2$, leading to a second degenerate vacuum at the Planck scale. In essence, this principle relies on a rather flat distribution of *extensive* parameter space set at the Planck scale matching onto a rather peaked distribution of *intensive* parameters (i.e. the usual Higgs potential parameters) due to a strong first-order phase transition, which in turn leads to a second degenerate vacuum [CIT].
| | 1187 | 1909.10459 | 17226468 | 2019 | 9 | 23 | false | true | 3 | UNITS, UNITS, UNITS |
The risk associated with making assumptions on an unseen population has triggered a number of observational programs targeting, or also including, clusters more easily missed. Andreon & Moretti (2011) exploited the low and stable X-ray background of the X-ray Telescope (XRT) on Swift for follow-ups of a sample of clusters free from the X-ray bias, finding a larger scatter in X-ray luminosity at a given richness than in X-ray selected samples (accounting for Malmquist and selection-effect corrections for the latter). Several of these clusters have low surface brightness, which impairs their detection in X-ray surveys. A similar effort was repeated by Ge et al. (2019) and by Pearson et al. (2017), the latter using Chandra data on groups and optical luminosity in place of richness, finding an increased scatter. Giles et al. (2015) followed up in X-ray a small sample of weak-lensing selected clusters. Andreon et al. (2009) and Andreon, Trinchieri & Pizzolato (2011) observed the two most distant clusters free from the X-ray selection bias to constrain the evolution of the $L_X-T$ scaling without making a hypothesis on the unseen population. Because of the heavy censoring of X-ray selected samples, constraints derived from 100 X-ray selected clusters (Giles et al. 2016) are comparable with those derived for just the two high-redshift clusters above. In parallel, clusters selected by their Sunyaev-Zeldovich (SZ) signal turned out to also show a larger scatter than X-ray selected samples (Planck Collaboration 2011a, 2012a), with some unexpected outlier clusters with a low X-ray luminosity for their SZ signal (Planck Collaboration 2016). These and other efforts led to the discovery of a growing variety of cluster properties at a given mass: X-ray luminosity and gas fraction have a larger scatter than previously thought (e.g., Andreon & Moretti 2011; Planck Collaboration 2011, 2012; Andreon et al. 2016, 2017; Giles et al. 2017; Rossetti et al. 2017), and clusters with low electron-density profiles (Andreon et al. 2016), or of low surface brightness (Andreon et al. 2016; Xu et al. 2018), have been discovered.
| | 2134 | 1909.11491 | 17235941 | 2019 | 9 | 25 | true | false | 3 | MISSION, MISSION, MISSION |
Now, even though in many practical cases the KL number of degrees of freedom $N_{\text{KL}}$ appears to be small (1, 2, or even 0) for $Q_{\text{UDM}}$ [CIT], we show using illustrative constructions and a real-data example that there exist cases where $N_{\text{KL}}$ can remain large. A high value of $N_{\text{KL}}$ may be explained by a significant improvement of the constraint in each mode. But as we explained above, such a large $N_{\text{KL}}$ can lead to underestimating an inconsistency when it is converted into a PTE and a significance level. Two illustrative numerical examples follow, and a concrete example using real data from WMAP versus Planck is given in the next subsection. In all three examples, this conversion leads to an underestimation of inconsistencies.
| | 780 | 1910.01608 | 17265974 | 2019 | 10 | 3 | true | false | 1 | MISSION |
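A small sketch of the dilution effect described in the record above, assuming the usual conversion of a $\chi^2$-distributed consistency statistic $Q$ with $N_{\rm KL}$ degrees of freedom into a PTE and an equivalent two-tailed Gaussian significance; the numbers are illustrative:

```python
from scipy import stats

def significance_from_Q(Q, n_dof):
    """PTE of a chi^2 statistic and its equivalent Gaussian 'n-sigma'."""
    pte = stats.chi2.sf(Q, n_dof)
    return pte, stats.norm.isf(pte / 2.0)

# The same 'excess' Q - N_KL = 25 looks far less significant as N_KL grows:
for n in (1, 5, 50):
    pte, sig = significance_from_Q(25 + n, n)
    print(f"N_KL={n:3d}: PTE={pte:.2e}, significance={sig:.1f} sigma")
```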
We use the data sets presented in Table REF to compare different $H_0$ measurements from **1) Planck:** CMB measurements using Planck 2018 data; specifically we use the TTTEEE+lowE+CMB lensing measurement used in [CIT]; **2) SH0ES:** local measurements of $H_0$ from [CIT]; **3) Joint LSS:** joint analysis using three distinct LSS measurements as in Table REF [^2]; **4) SNe+BAO+BBN:** combination of background data sets to constrain $H_0$; SNe and BAO have different degeneracy directions, while BBN can constrain $\Omega_b h^2$; **5) H0LiCOW:** measures the expansion rate of the universe using time-delay cosmography and distance-ladder results [CIT]; **6) CCHP-TRGB:** calibration of SNe Ia using the TRGB method, independent of the Cepheid distance scale [CIT]; **7) TRGB-2:** TRGB+SNe Ia distance ladder using a different calibration method [CIT]. The values of $H_0$ obtained from each of these methods are given in Fig. REF.
| | 929 | 1910.01608 | 17265989 | 2019 | 10 | 3 | true | false | 2 | MISSION, MISSION |
- For any $e:\nu$ annihilation ratio and given type of WIMP, with the exception of a neutral scalar particle, Planck CMB observations set a $2\sigma$ lower bound on $m_\chi$. Similarly, the Planck+BBN bounds set $m_\chi > 0.8\,\text{MeV}$ at $2\sigma$. These bounds are independent of the spin of the particle and of whether the annihilation is s-wave or p-wave.
| | 354 | 1910.01649 | 17267156 | 2019 | 10 | 3 | true | true | 2 | MISSION, MISSION |
We conclude that any model whose only effect is to change the fundamental scale at which gravity becomes non-perturbative is not going to help. These include, for example, all scenarios with large extra dimensions (LED) [CIT]. In these models the Planck scale is not fundamental, and its enormous value is simply a consequence of the large size of the extra-dimensional space. The "true" scale of gravity can be much lower, for instance of the order of the electroweak scale for LED models that solve the hierarchy problem. At low energies the extra dimensions are hidden, and gravity is weaker compared to the other forces because the gravitational flux also spreads into the extra dimensions. At high energies, however, the extra dimensions are resolved and the fundamental gravity scale is restored to its true value. In a (4+n)-dimensional spacetime with $n$ compactified extra dimensions of volume $\mathcal{V}$, the fundamental gravity scale $M_P$ is related to the usual Planck scale via FORMULA The current large value of $m_p$ (and thus small value of $G$) is simply due to the large volume $\mathcal{V}$ of the extra-dimensional space. One can then envision a scenario in which $\mathcal{V}$ was much smaller in the early universe, effectively making gravity much stronger at that epoch. This, however, does not make gravity more efficient at absorbing gravitons, as we saw, since stronger gravity also makes it that much easier to create black holes. This is a scenario of modified gravity that simply changes the gravity scale at high energies, and as such it cannot work.
| | 1574 | 1910.01657 | 17267776 | 2019 | 10 | 3 | true | true | 2 | UNITS, UNITS |
The epoch of reionization (EoR) marks a fundamental event for the Universe, characterized by a transition phase of the intergalactic medium (IGM) from cold and almost neutral to warm and fully ionized [CIT]. During this dramatic event, located approximately at $z\sim 6-8$, the first sources of UV photons with energy above 13.6 eV were able to clear the fog of the widespread neutral hydrogen (HI) and put an end to the so-called period of the Dark Ages [CIT]. Only in recent years, thanks mainly to the analysis of the CMB optical depth by Planck [CIT] and of high-$z$ QSOs [CIT], has it become clear that the EoR was a very rapid event; this phase transition lasted for a short period, $\Delta z\le 2.8$, and was also a patchy process. This is consistent with the progressive decrease of the photo-ionization rate $\Gamma_\mathrm{HI}$ observed at $z\ge 5.5$ [CIT].
| | 870 | 1910.02775 | 17275031 | 2019 | 10 | 7 | true | false | 1 | MISSION |
The MOND length, $\ell_{\scriptscriptstyle M}$, MOND mass, $M_{\scriptscriptstyle M}$, and a MOND time, $t_{\scriptscriptstyle M}\equiv c/a_0$, may, in themselves, have physical significance, and might be physically the more fundamental, and more indicative of the origin of MOND, than $a_0$ itself. Analogously, in quantum theory one combines $\hbar$ with c and $G$ to form the Planck length, time, and mass FORMULA These quantities are indicative in different ways of where quantum gravity may be important. For example, a black hole having a mass near $\mathcal{M}_{\scriptscriptstyle P}$ requires quantum gravity for its description.
| | 637 | 1910.04368 | 17289352 | 2019 | 10 | 10 | true | true | 1 | UNITS |
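A hedged numerical comparison of the two sets of scales discussed in the record above. Only $t_M\equiv c/a_0$ is defined explicitly in the text; $\ell_M=c^2/a_0$ and $M_M=c^4/(G a_0)$ are the usual MOND combinations and are assumed here, as is the value of $a_0$:

```python
import math

C, G, HBAR = 2.99792458e8, 6.67430e-11, 1.054571817e-34
A0 = 1.2e-10  # MOND acceleration scale [m/s^2], assumed canonical value

# MOND scales built from c, G and a0
l_M, t_M, M_M = C**2 / A0, C / A0, C**4 / (G * A0)
# Planck scales built from hbar, c and G
l_P = math.sqrt(HBAR * G / C**3)
t_P = math.sqrt(HBAR * G / C**5)
m_P = math.sqrt(HBAR * C / G)

print(f"MOND:   l={l_M:.2e} m, t={t_M:.2e} s, M={M_M:.2e} kg")   # ~Hubble-scale numbers
print(f"Planck: l={l_P:.2e} m, t={t_P:.2e} s, m={m_P:.2e} kg")   # ~1.6e-35 m, etc.
```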
Cosmic shear data from both KV450 and DES-Y1 are publicly available. These datasets have no overlapping regions, such that the cross-covariance between the surveys can be neglected, allowing for a simple joint analysis of these two surveys. The cosmic shear analyses of both of these surveys yielded consistent, but smaller, $S_8\equiv\sigma_8(\Omega_{\rm m}/0.3)^{0.5}$ values than the Planck Legacy analysis of the cosmic microwave background [CIT]. It is therefore interesting to ask whether any tension between these two probes increases if we combine KV450 and DES-Y1. Recently, [CIT] carried out a joint analysis of KV450 and DES-Y1 using 2PCFs. They incorporated a spectroscopic calibration for the redshift distributions of galaxies and a consistent set of priors. [CIT] found an $S_8$ value that is in tension with Planck by $2.5\sigma$. Here we follow the same procedure for combining these surveys, using COSEBIs instead of 2PCFs. Our fiducial analysis also adopts the alternative spectroscopic calibration approach for the DES-Y1 redshift distributions.
| | 1054 | 1910.05336 | 17297160 | 2019 | 10 | 11 | true | false | 2 | MISSION, MISSION |
REF shows the DES-Y1 contour plots for $S_8$ and $\Omega_{\rm m}$ with the Planck Legacy results shown in red [TT,TE,EE+lowE, [CIT]][^2]. Here we show the analysis of DES-Y1 data using the three setups introduced in REF. The grey dashed contours show the result from the cosmic shear analysis of T18, which used $\xi_\pm$ with different scale-cuts for each pair of redshift bins, primarily to avoid the effects of baryon feedback. The magenta contours show our analysis of DES-Y1 data with the same setup as T18, but using COSEBIs over the full angular range of $[0.5', 250']$. The cyan contours show the effect of moving from the T18 setup to H20, retaining the DES bpz redshift distribution. Finally, the orange contours show the effect of using the DIR-calibrated redshift distributions with the H20 setup.
| | 797 | 1910.05336 | 17297176 | 2019 | 10 | 11 | true | false | 1 | MISSION |
Gravitinos are prototypical DM with Planck-suppressed interactions [CIT]. The prototype Lagrangian is: FORMULA and the resulting cross sections are: FORMULA where we used the symbol ${G}$ to indicate a gravitino particle. Finally, the freeze-out temperature is: FORMULA Thermal gravitinos are hot relics (in fact, they were the first supersymmetric DM candidate ever proposed) and their abundance is calculated as: FORMULA However, this calculation neglects single-gravitino processes that keep gravitinos out of equilibrium, for example: FORMULA where $V$ is a gauge boson and $\tilde{\lambda}$ its gaugino (its supersymmetric partner). Generically, there is a gravitino overproduction problem, so we need to dilute them away: FORMULA where $T_{RH}$ is the reheating temperature.
| | 785 | 1910.05610 | 17299251 | 2019 | 10 | 12 | true | true | 1 | UNITS |
The Pan--STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan--STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST--1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
| | 1046 | 1910.06168 | 17302373 | 2019 | 10 | 14 | true | false | 3 | MPS, MPS, MPS |
To demonstrate a simple example with recent cosmological data, we provide in Fig. REF the function $\mathcal{R}(\Sigma m_\nu, 0)$ computed in a few cases, obtained from the publicly available Planck 2018 (P18) chains [^2] with four different data sets and considering the $\Lambda$CDM+$\Sigma m_\nu$ model. The datasets include the full CMB temperature and polarization data [CIT] plus the lensing measurements [CIT] by Planck 2018, and BAO information from the `SDSS BOSS` DR12 [CIT], the `6DF` [CIT], and the `SDSS DR7 MGS` [CIT] surveys.
| | 548 | 1910.06646 | 17306207 | 2019 | 10 | 15 | true | true | 2 | MISSION, MISSION |
Having established the procedure for obtaining the reheating temperature, we use Python programming to numerically evaluate the integrals of the preceding section. This requires us to fix several parameters of the model as well as to choose proper units. A natural choice would be to express the results in natural units, i.e., in units where $\hbar=c=1$, but this presents us with a problem regarding the numerical simulation. In natural units, the numerical values range from very small to very large, with tens of orders of magnitude in difference. This makes the numerical simulation prone to errors, increases computational time, and prevents the calculation of the full range of parameters. To remedy this situation, we will use Planck units, where $G_N=1$, and the results can be converted to natural units after the calculation.
| 827
|
1910.07520
| 17,312,029
| 2,019
| 10
| 16
| true
| true
| 1
|
UNITS
|
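A minimal illustration of the unit strategy described in the excerpt above (arXiv:1910.07520): carry out the quadrature with every quantity of order unity in Planck units, and convert back only at the end. The integrand is a stand-in, since the paper's actual integrals appear here only as FORMULA placeholders.

```python
# Sketch of the unit strategy described above: do the numerics in Planck
# units (G = c = hbar = k_B = 1), convert to natural units afterwards.
# The integrand below is hypothetical; it is not the paper's integral.
import numpy as np
from scipy.integrate import quad

T_PLANCK_GEV = 1.22e19          # Planck energy in GeV (approximate)

def integrand(x, h_inf):
    """Hypothetical dimensionless integrand, everything in Planck units."""
    return x**2 * np.exp(-x / h_inf)

h_inf = 1e-6                    # e.g. an inflation scale of 1e-6 m_P
value_planck, err = quad(integrand, 0.0, 50.0 * h_inf, args=(h_inf,))

# Convert the (hypothetical) result, with dimensions of energy^3,
# back to natural units only at the very end.
value_gev3 = value_planck * T_PLANCK_GEV**3
print(value_planck, value_gev3)
```

The point of the design is visible in `value_planck`: in Planck units the integrand and the result stay within a modest dynamic range, so the quadrature is well conditioned.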
In appendix [6], we show the observational constraints on a non-interacting cosmological $w$CDM model with and without neutrinos, using the same data combinations, motivated by checking whether the results can be mimicked by a very different assumption/model extension that is not the interacting DE scenario. We find that within this non-interacting $w$CDM + $M_{\nu}$ scenario, the bounds on the total neutrino mass are as follows: $M_{\nu} < 0.287,\; (< 0.184),\; (< 0.296)$ eV at 95% CL for CMB, (Planck 2018 + BAO), (Planck 2018 + R19), respectively. We notice that the bound obtained from the CMB data only, in direct comparison with the IVS + $M_{\nu}$ scenario, is very compatible, and both the interacting and non-interacting models provide the same limit on the neutrino mass scale. On the other hand, in view of both the joint analyses, i.e., Planck 2018 + BAO and Planck 2018 + R19, the bounds on $M_{\nu}$ are significantly wider compared to the $w$CDM model. Thus, the above observation shows that the $M_{\nu}$ scale can be minimally model-dependent.
| 1,058
|
1910.08821
| 17,325,704
| 2,019
| 10
| 19
| true
| false
| 4
|
MISSION, MISSION, MISSION, MISSION
|
Due to the reliance of CCDs on the photoelectric effect, there is a limit on the maximum wavelength they can detect. Electron emission is only viable when the incident photons have enough energy to move the electrons from the valence band to the conduction band in the silicon CCD [33]. From Planck's relation shown in Eq. (REF) (where E is the energy of the incident photon, h is Planck's constant and f the frequency of the incident light) it can be seen that there is a natural threshold wavelength for detection by CCDs.
| 573
|
1910.10847
| 17,340,212
| 2,019
| 10
| 24
| true
| false
| 2
|
RELATION, CONSTANT
|
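The threshold-wavelength argument in the excerpt above (arXiv:1910.10847) reduces to one line of arithmetic: setting $E = hf = hc/\lambda$ equal to the silicon band gap gives the longest detectable wavelength. The 1.12 eV gap is a standard room-temperature value, assumed here rather than quoted from the excerpt.

```python
# Threshold (cutoff) wavelength of a silicon CCD from the Planck relation
# E = h f = h c / lambda, with E equal to the silicon band gap.
from scipy.constants import h, c, electron_volt

E_GAP_SI = 1.12 * electron_volt      # silicon band gap (~1.12 eV at 300 K)

lambda_max = h * c / E_GAP_SI        # photons redder than this go undetected
print(f"cutoff wavelength: {lambda_max * 1e9:.0f} nm")   # ~1107 nm
```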
As stated in Section [5], the input to the RadFil code consists of $20 \times 20 \deg$ patches extracted from the Planck $y$-map, which follow the filaments. The RadFil code then extracts profiles of pixel intensity on the map, along lines perpendicular to the filament path. In this appendix we report a few examples of the intermediate steps of the process. The top part of each panel of each figure shows a cut-out of the Planck $y$-map, with rectangles marking the positions of the patches; the filament path is also drawn on the map. The bottom part of each panel shows the individual profiles obtained along lines spaced by one pixel, perpendicular to the filament axis and along the filament spine.
| 750
|
1910.11879
| 17,348,384
| 2,019
| 10
| 25
| true
| false
| 2
|
MISSION, MISSION
|
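The profile-extraction step described in the excerpt above (arXiv:1910.11879) can be mimicked in a few lines of numpy/scipy. This generic sketch is not RadFil's actual API (whose signatures are not reproduced in the excerpt); it only illustrates the underlying operation of sampling a map along cuts perpendicular to a filament spine.

```python
# Generic sketch of perpendicular-profile extraction along a filament spine.
import numpy as np
from scipy.ndimage import map_coordinates

def perpendicular_profiles(image, spine_xy, half_width=20, n_samples=41):
    """spine_xy: (N, 2) array of (x, y) spine points in pixel coordinates."""
    profiles = []
    # Unit tangents from finite differences; normals by 90-degree rotation.
    tangents = np.gradient(spine_xy, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.column_stack([-tangents[:, 1], tangents[:, 0]])
    offsets = np.linspace(-half_width, half_width, n_samples)
    for p, n in zip(spine_xy, normals):
        cut = p[None, :] + offsets[:, None] * n[None, :]   # points on the cut
        # map_coordinates expects (row, col) = (y, x) ordering.
        vals = map_coordinates(image, [cut[:, 1], cut[:, 0]], order=1)
        profiles.append(vals)
    return offsets, np.array(profiles)

# Usage: offsets, profs = perpendicular_profiles(y_map_patch, spine)
# profs.mean(axis=0) then gives a mean radial profile of the filament.
```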
The Planck length is the minimum length at which physical laws do not fail. The Dirac delta function was created to deal with continuous-range issues, and it is zero except at one point; it therefore contradicts the notion of a minimal Planck length. Renormalization is the usual way to deal with divergence difficulties. The authors proposed a new way to solve the problem of ultraviolet divergence, and this method is self-consistent with the Planck length. For this purpose a redefined function ${\delta}_P$ in the position representation was introduced to handle the canonical quantization. The function ${\delta}_P$ tends to the Dirac delta function as the Planck length goes to zero. By logical deduction the authors obtain new commutation/anticommutation relations and new Feynman propagators which are convergent. This article deduces the new Feynman propagators for the Klein-Gordon field, the Dirac field and the Maxwell field. Through the new Feynman propagators, we can eliminate the ultraviolet divergence.
| 993
|
1911.01789
| 17,349,740
| 2,019
| 10
| 26
| false
| true
| 4
|
UNITS, UNITS, UNITS, UNITS
|
We also found that there are two factors significantly influencing the constraints on the cosmic curvature parameter, i.e., the choice of lens models and the classification of the SGL data. The uncertainty of $\Omega_k$ obtained from the latest observations is about 0.2, which is large compared to the result given by the Planck observation, but we emphasize that such a result is obtained by using a cosmological model-independent method and only low-redshift observations. Considering the lens models, we find that the SIS model ($f_E=1$) is well consistent with results from intermediate-mass and high-mass galaxies. In the framework of the power-law mass model, the values of the lens model parameters ($\gamma_0, \gamma_1$) obtained from the high-mass subsample are barely consistent with those in the SIS model ($\gamma_0=2, \gamma_1=0$), while those from the other subsamples are not, which implies that the total density profile of early-type galaxies may evolve slightly with cosmic time. When the luminosity density profile is allowed to differ from the total mass density profile, in the extended power-law lens model, the constraints on the lens model parameters ($\alpha, \delta$) from different subsamples are inconsistent with each other, revealing a possible difference between the mass density distributions of dark matter and luminous baryons in galaxies with different masses.
| 1,367
|
1910.12173
| 17,350,658
| 2,019
| 10
| 27
| true
| true
| 1
|
MISSION
|
The right panel of Fig. REF shows the vacua of singlet and doublet models at the Planck scale in terms of the Yukawa couplings $(\alpha_\kappa,\alpha_{\kappa'})|_{M_F}$ at the matching scale. Integrating the RG between $M_F$ and $M_{\rm Pl}$, we find wide ranges of models whose vacua at the Planck scale are either $V^+$ (blue), or a stable $V$ with a metastable Higgs sector ($\alpha_\lambda\gtrsim -10^{-4}$) such as in the SM [CIT] (yellow). For other parameter ranges we also find $V^-$ (green), or unstable BSM potentials (gray), or Landau poles below the Planck scale (light red). Most importantly, the anomalous magnetic moments (REF) are matched for couplings in the red-shaded areas which cover the $1\sigma$ band. Constraints from Higgs signal strength [CIT] imply an upper bound on $\alpha_\kappa$ corresponding to a lower bound for the scalar mass of about 226 GeV (for $M_F=1$ TeV). Similar results are found for $V^+$ at the low scale (not shown) except that regions with $V^-$ in Fig. REF turn into $V^+$. We conclude that models are stable and Planck-safe for a range of parameters $\alpha_{{}_{\rm BSM}}|_{M_F}$.
| 1,130
|
1910.14062
| 17,365,914
| 2,019
| 10
| 30
| false
| true
| 4
|
UNITS, UNITS, UNITS, UNITS
|
Based on the motivation that some quantum gravity theories predict Lorentz Invariance Violation (LIV) around Planck-scale energies, this paper proposes a new formalism that addresses the possible effects of LIV in electrodynamics. This formalism is capable of changing the usual electrodynamics through higher-derivative, arbitrary mass dimension terms that include a constant background field controlling the intensity of LIV in the models, producing modifications in the dispersion relations in a manner similar to the Myers-Pospelov approach. With this framework, it was possible to generate CPT-even and CPT-odd generalized modifications of the electrodynamics in order to study the stability and causality of these theories, considering the isotropic case for the background field. An additional analysis of unitarity at tree level was carried out by studying the saturated propagators. After this analysis, we conclude that, while the CPT-even modifications always preserve stability, causality and unitarity within the boundaries of the effective field theory, and therefore may be good candidates for field theories with interactions, the CPT-odd one violates causality and unitarity. This feature is a consequence of the vacuum birefringence characteristics that are present in CPT-odd theories for the photon sector.
| 1,342
|
1911.00048
| 17,367,158
| 2,019
| 10
| 31
| false
| true
| 1
|
UNITS
|
Moreover, we have also found new bounds on the Hořava-Lifshitz parameter $\lambda$ using Hubble constant data and our own MCMC simulations with cosmological data. We find that some of these bounds overlap significantly with regions of $\lambda$ known to lead to ghost instabilities in the infrared limit of the theory, but that some bounds also cover a non-pathological parameter space. Moreover, we have used available bounds on $\lambda$ to estimate how much Lorentz-violating effects could contribute to the Hubble tension. Most significantly, we find that when using our own bounds on $\lambda$ from the beyond-detailed-balance scenario along with a MCMC method and Planck CMB data, Lorentz violation can contribute up to $38\%$ of the Hubble tension. Therefore it would make sense to also consider Lorentz-violating field theories in the search for an explanation of the Hubble tension.
| 901
|
1910.14414
| 17,370,335
| 2,019
| 10
| 31
| true
| false
| 1
|
MISSION
|
While the expansion employed in [CIT] may have done the authors a disservice in negatively impacting the conclusions, there appears to be some truth to the tension claims. In particular, our best-fit value for $\Omega_m = 0.369^{+0.015}_{-0.014}$ is discrepant from the Planck value at $3.5\,\sigma$, thus confirming that there is a real tension with the standard model, namely flat $\Lambda$CDM with $\Omega_m \approx 0.3$. We traced this tension to the QSO data and confirmed that the best-fit value of $\Omega_m$ to the QSO data is consistent with a flat $\Lambda$CDM Universe with no dark energy. This marks an irreconcilable inconsistency between the Risaliti-Lusso QSOs and flat $\Lambda$CDM, which replaces the "$\sim 4\,\sigma$ tension from the $\Lambda$CDM model" claim.
| 779
|
1911.01681
| 17,383,989
| 2,019
| 11
| 5
| true
| false
| 1
|
MISSION
|
The above theory is assumed to operate at the Planck scale: there is no space-time, but one can define a Planck-scale foam of space-time-matter. If one does not observe dynamics at the Planck scale, one arrives at a mean-field dynamics at lower energies. This is done by averaging over time-scales much larger than the Planck time, using the standard techniques of statistical thermodynamics. This mean-field dynamics falls into two classes.
| 437
|
1911.02955
| 17,393,620
| 2,019
| 11
| 7
| true
| true
| 4
|
UNITS, UNITS, UNITS, UNITS
|
The perturbation of the diffusion coefficient ${ D(t) \to D(t) + \delta D(t) }$ modifies the Fokker-Planck operator by $\delta D(t)\partial_v^2$, and induces a shift in the expectation ${ \langle \mathcal{O}_L^{r}(t) \rangle }$ to FORMULA where the response stochastic variable FORMULA is defined by the responding variable $\mathcal{O}_L^{r}(t)$, and the probing variable FORMULA The above expression is obtained from Eq. (REF) for $\delta_{\lambda}\mathcal{L} = \partial_v^2$. The response variable can be rewritten in terms of the two-time correlation variables as FORMULA
| 637
|
1911.03106
| 17,394,749
| 2,019
| 11
| 8
| false
| true
| 1
|
FOKKER
|
On the other hand, the data of precision cosmology [CIT], analysed in terms of the parameters of this standard cosmological model, continuously tighten the constraints on deviations of the measured parameters from the model predictions. These measured parameters involve the dark matter density $\Omega_{DM} h^2 = 0.120\pm 0.001$, the baryon density $\Omega_b h^2 = 0.0224\pm 0.0001$ (where the dimensionless constant $h$ is the modern Hubble constant $H_0$ in units of 100 km/s/Mpc), the scalar spectral index $n_s = 0.965\pm 0.004$, and the optical depth $\tau = 0.054\pm 0.007$ [CIT]. These results are only weakly dependent on the cosmological model and remain stable, with somewhat increased errors, in many commonly considered extensions. Assuming the $\Lambda$CDM cosmology, the inferred late-Universe parameters were determined: the Hubble constant $H_0 = (67.4\pm 0.5)$ km/s/Mpc; the matter density parameter $\Omega_m = 0.315\pm 0.007$; and the matter fluctuation amplitude $\sigma_8 = 0.811\pm 0.006$. Combining with the results of studies of baryon acoustic oscillations (BAO) from measurements of the large-scale distribution of galaxies [CIT] [^3], the Planck collaboration has constrained the effective extra relativistic degrees of freedom to be $N_{\rm eff} = 2.99\pm 0.17$, and the sum of neutrino masses was tightly constrained to $\sum m_\nu< 0.12$ eV. These results support the basic ideas of inflationary models with baryosynthesis and dark matter/energy, but cannot provide a definite choice for the corresponding BSM physics.
| 1,519
|
1911.03294
| 17,396,275
| 2,019
| 11
| 8
| true
| true
| 1
|
MISSION
|
In this picture, $A^{\rm IR}_{V}$ should be the largest (provided the dust temperature does not vary along the line of sight), which is indeed what we see in general. In the region observed with MaNGaL, the median extinction values are 2.7, 4.0, and 2.9 mag for $A_{\rm V}$, $A^{\rm IR}_{\rm V}$, and $A^{\rm CO}_{\rm V}$, respectively. However, this relation breaks down significantly at certain locations. Specifically, $A^{\rm IR}_{\rm V}< A_{\rm V}$ at nearly all locations to the north of the ionizing star. Also, as the observer looks through the dense molecular clump southward of the ionizing star, $A^{\rm IR}_{\rm V}< A^{\rm CO}_{\rm V}$. This behaviour is somewhat unexpected, and may be caused by the choice of the opacity prescription. The Planck-based opacity parameters that we have utilised are more appropriate for the diffuse medium, while the gas in the studied region has a much higher density. We also tried other opacity prescriptions [e.g. [CIT]], and they do not improve the situation. The CO-based extinction is also somewhat uncertain, as its value scales linearly with the adopted CO abundance, which is itself prone to some variation. Overall, we conclude that the extinctions based on *AKARI* data, as well as those based on the CO data, are uncertain by up to a factor of two, and should therefore be used as a relative measure of column density rather than an absolute one. We note, however, that the estimated dust temperature does not depend strongly on the adopted opacity.
| 1,512
|
1911.04551
| 17,406,157
| 2,019
| 11
| 11
| true
| false
| 1
|
OPACITY
|
In Fig. REF, we display the CMB power spectra in the $\Lambda$CDM and PLCE models along with the data from Planck 2018. Since the TT spectra of PLCE and $\Lambda$CDM are almost identical to the data from Planck 2018 at high multipoles $l$, we focus on the differences between the two models and the data at $l<100$, as depicted in Fig. REF. The TT power spectrum in the PLCE model for $\nu > 0$ is larger than that of $\Lambda$CDM when $l < 100$, while remaining within the allowable range of the observational errors.
| 529
|
1911.06046
| 17,417,650
| 2,019
| 11
| 14
| true
| false
| 2
|
MISSION, MISSION
|
The authors thank Andreas Trautner and Rahul Srivastava for useful conversations. S.C.C's work is supported by FPA2017-85216-P (AEI/FEDER, UE), SEV-2014-0398, PROMETEO/2018/165 (Generalitat Valenciana), Spanish Red Consolider MultiDark FPA2017-90566-REDC and the FPI grant BES-2016-076643. The work of W.R. is supported by the DFG with grant RO 2516/7-1 in the Heisenberg program. U.J.S.S. acknowledges support from CONACYT (México). S.C.C would like to thank the Max-Planck-Institute for Nuclear Physics in Heidelberg for their hospitality during his visit, where this work was initiated.
| 589
|
1911.06824
| 17,424,153
| 2,019
| 11
| 15
| false
| true
| 1
|
MPS
|
The Bekenstein--Hawking entropy is written as FORMULA where $k_{B}$ and $\hbar$ are the Boltzmann constant and the reduced Planck constant, respectively. The reduced Planck constant is defined as $\hbar \equiv h/(2 \pi)$, where $h$ is the Planck constant [CIT]. Substituting $A_{H}=4 \pi r_{H}^2$ into Eq. (REF) and applying Eq. (REF) yields FORMULA where $K$ is a positive constant given by FORMULA and $L_{p}$ is the Planck length, which is written as [CIT] FORMULA From Eq. (REF), we can confirm $S_{\rm{BH}} >0$.
| 516
|
1911.08306
| 17,425,479
| 2,019
| 11
| 15
| false
| true
| 4
|
CONSTANT, CONSTANT, CONSTANT, UNITS
|
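A numerical check of the entropy formula quoted in the excerpt above (arXiv:1911.08306), $S_{\rm BH} = k_B c^3 A_H/(4 G \hbar)$, for a Schwarzschild horizon; the solar-mass input is an illustrative choice.

```python
# Numerical check of the Bekenstein-Hawking entropy quoted above,
# S = k_B c^3 A / (4 G hbar), for a Schwarzschild horizon A = 4 pi r_H^2
# with r_H = 2 G M / c^2.
import numpy as np
from scipy.constants import G, hbar, c, k as k_B

M_SUN = 1.989e30                       # kg (illustrative input)

def bekenstein_hawking_entropy(mass_kg):
    r_h = 2 * G * mass_kg / c**2       # Schwarzschild radius
    area = 4 * np.pi * r_h**2
    return k_B * c**3 * area / (4 * G * hbar)

S = bekenstein_hawking_entropy(M_SUN)
print(f"S_BH = {S:.3e} J/K  (~{S / k_B:.3e} in units of k_B)")
# ~1e54 J/K, i.e. ~1e77 k_B for a solar-mass black hole; manifestly S_BH > 0.
```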
We now return to the generic case where $\Omega_\Lambda$ need not vanish, so we are effectively treating these models as one-parameter extensions of $\Lambda$CDM, to which model they reduce when $\lambda=0$. We start by assuming that matter has the standard equation of state, $w=0$. As in the previous subsection, we will separately consider the cases without and with the aforementioned Planck prior on the matter density.
| 420
|
1911.08232
| 17,432,832
| 2,019
| 11
| 19
| true
| true
| 1
|
MISSION
|
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
| 937
|
1911.08497
| 17,435,629
| 2,019
| 11
| 19
| true
| false
| 2
|
MPS, MPS
|
The effective Majorana mass, which is the key parameter of the $0 \nu \beta \beta$ decay process, is defined in the standard three-neutrino formalism as FORMULA where $U_{e i}$ are the PMNS matrix elements and $\alpha$, $\beta$ are the Majorana phases. In terms of the lightest neutrino mass $m_l$ and the atmospheric and solar mass-squared differences, it can be expressed for NH and IH as FORMULA and FORMULA Analogously, one can obtain the expression for $|M_{ee}|$ in the presence of an additional sterile neutrino as FORMULA Now, varying the PMNS matrix elements as well as the Dirac CP phase within their $3\sigma$ range [CIT] and the Majorana phases $\alpha$ and $\beta$ between $[0,2 \pi]$, we show the variation of $|M_{ee}|$ for three generations of neutrinos in the top panel of Fig. REF. Including the contributions from the eV-scale sterile neutrino, the corresponding plots are shown in the bottom panel, where the left panel is for NH and the right one for IH. In all these plots, the horizontal regions represent the bounds on the effective Majorana mass from various $0 \nu \beta \beta$ experiments, while the vertical shaded regions are disfavoured by the Planck data on the sum of light neutrino masses, where the current bound is $\Sigma_i m_i < 0.12$ eV from Planck+WP+highL+BAO data at 95% C.L. [CIT]. It should be noted that with the inclusion of an eV-scale sterile neutrino, part of the parameter space of $|M_{ee}|$ (for IH) is within the sensitivity reach of the KamLAND-Zen experiment. Furthermore, there are also some overlap regions between the NH and IH cases. Thus, the future $0 \nu \beta \beta$ decay experiments may shed light on several issues related to the nature of neutrinos.
| 1,687
|
1911.10952
| 17,452,161
| 2,019
| 11
| 25
| false
| true
| 2
|
MISSION, MISSION
|
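A sketch of the three-neutrino $|M_{ee}|$ scan described in the excerpt above (arXiv:1911.10952) for normal hierarchy, using the standard expression $|M_{ee}| = |c_{12}^2 c_{13}^2 m_1 + s_{12}^2 c_{13}^2 m_2 e^{i\alpha} + s_{13}^2 m_3 e^{i\beta}|$. The oscillation parameters are representative global-fit values (assumed here, not quoted from the excerpt), and only the Majorana phases are scanned.

```python
# Effective Majorana mass band for normal hierarchy, scanning the two
# Majorana phases over [0, 2 pi] as in the excerpt above.
import numpy as np

S12SQ, S13SQ = 0.32, 0.022            # sin^2(theta12), sin^2(theta13), assumed
DM21SQ, DM31SQ = 7.4e-5, 2.5e-3       # mass-squared differences in eV^2

def m_ee_band(m_lightest, n_phase=200):
    m1 = m_lightest
    m2 = np.sqrt(m1**2 + DM21SQ)
    m3 = np.sqrt(m1**2 + DM31SQ)
    a, b = np.meshgrid(np.linspace(0, 2 * np.pi, n_phase),
                       np.linspace(0, 2 * np.pi, n_phase))
    c13sq = 1 - S13SQ
    m_ee = np.abs((1 - S12SQ) * c13sq * m1
                  + S12SQ * c13sq * m2 * np.exp(1j * a)
                  + S13SQ * m3 * np.exp(1j * b))
    return m_ee.min(), m_ee.max()

lo, hi = m_ee_band(1e-3)              # lightest mass of 1 meV
print(f"|M_ee| in [{lo*1e3:.2f}, {hi*1e3:.2f}] meV for m_l = 1 meV")  # ~1-4 meV
```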
A non-minimal coupling arises if in the previously studied models the constants $g$ and/or $M^2$ are promoted to be field dependent. A particularly interesting case is the model in which FORMULA This belongs to the class of models (REF), with quartic potential; however, the scalar $h$ is non-minimally coupled to the scalar curvature ${\cal{R}}$, in the Palatini framework, since $g$ is field dependent in the particular way shown above. This model actually arises from the Higgs coupling to Palatini gravity FORMULA where $u \simeq 246\,\mathrm{GeV}$ is the Electroweak scale. In Planck units ($m_P=1$) this is very small, $u \sim 10^{-16}$, and plays no significant role in inflation. Setting therefore $u=0$ and working in the unitary gauge, $H^{\dagger} = (0, h / \sqrt{2})$, (REF) is actually the model described by $g, M^2$ and quartic potential as given in (REF).
| 859
|
1911.11513
| 17,456,464
| 2,019
| 11
| 26
| true
| true
| 1
|
UNITS
|
Given a set of cosmological parameters, the primordial abundance of light elements is fully computable from the standard model of particle physics [CIT]. The precise determination of cosmological parameters by the Planck satellite leads to accurate predictions of the light element abundances, which can be compared with measurements such as $\leftidx{^4}{\mathrm{He}}$ from low-metallicity H$_\mathrm{II}$ regions in low-redshift star-forming galaxies [CIT], the primordial abundance of deuterium (D/H) from quasar absorption lines like the DLAs [CIT], and the $\leftidx{^7}{\mathrm{Li}}$/H ratio in metal-poor stars in the Milky Way halo [CIT]. In standard BBN, the populations of relativistic particles, including photons, electrons, positrons, and three species of neutrinos, mix as a hot plasma with a common temperature. At a given temperature, the resulting cosmic expansion rate is $2.3$ times that of photons alone. The weak freeze-out starts at this time, settling the neutron-to-proton ratio which eventually determines the helium abundance $Y_{\rm p}$. An additional relativistic degree of freedom can enhance the expansion rate by $8\%$[^1], which forces the neutrino freeze-out to occur at a higher temperature. This, in turn, implies more neutrons, triggering more $\leftidx{^4}{\mathrm{He}}$.
| 1,265
|
1912.00995
| 17,477,612
| 2,019
| 12
| 2
| true
| false
| 1
|
MISSION
|
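Both numbers quoted in the excerpt above (arXiv:1912.00995), the factor of 2.3 and the $\sim 8\%$ enhancement, follow from standard relativistic degree-of-freedom counting, since $H \propto \sqrt{g_*}$; a worked check:

```python
# Reproducing the two numbers quoted above from dof counting:
# g_* = 2 (photons) + (7/8) * (4 for e+- + 2 per neutrino species).
import numpy as np

def g_star(n_nu):
    return 2 + (7 / 8) * (4 + 2 * n_nu)

g_std = g_star(3)                            # 10.75 before e+- annihilation
print(np.sqrt(g_std / 2))                    # ~2.32: expansion vs photons alone
print(np.sqrt(g_star(4) / g_std) - 1)        # ~0.078: one extra species -> ~8%
```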
The evolution of the Universe can be described by two big phases: classical and quantum, that is to say, after and before the Planck time $t_P = 10^{-44}\;\mathrm{s}$, respectively. Each cosmological stage in the known classical Universe, $t_P \leq t \leq 10^{61}\;t_P$, has a dual quantum stage in the preceding quantum phase before the Planck time: $10^{-61}\; t_P \leq t \leq t_P$.
| 375
|
1912.06655
| 17,527,625
| 2,019
| 12
| 13
| true
| true
| 2
|
UNITS, UNITS
|
**(vii) Two extremely different physical conditions and gravity regimes**. This is a realistic, clear and precise illustration of the *physical classical-quantum duality between the two extreme Universe scales and gravity regimes*: the dilute state and Horizon size of the Universe today on the one largest known side, and the super-Planckian scale and highest density state on the smallest side: Length, Mass, and their associated time (Hubble rate) and vacuum energy density ($\Lambda, \rho_\Lambda$) of the Universe *today* are truly *classical*, while its extreme past at $10^{-61}\; t_P = 10^{-105}\;\mathrm{s}$, deep inside the Planck domain of extremely small size and high vacuum density value ($\Lambda_Q, \rho_Q$), is truly *quantum and super-Planckian*.
| 757
|
1912.06655
| 17,527,681
| 2,019
| 12
| 13
| true
| true
| 1
|
UNITS
|
In addition, Eq. (REF) *consistently* reflects the *semi-classical or semi-quantum gravity* character of Inflation. In other words, just as the Planck scale $m_P$ marks, from the classical side, the crossing into the quantum gravity regime, the *Inflation scale $10^{-6} m_P$ in the classical phase is the typical scale for the semi-classical gravity* regime. And the quantum dual Inflation scale in the quantum precursor phase is consistently $10^{6} m_P$. (This last could be viewed as a "semi-quantum gravity" scale, "low" with respect to the higher super-Planckian scales of the earlier quantum stages, the highest, $H = 10^{61} h_P$, being at the extreme quantum past. Whatever the case, classical or quantum, Inflation is at $10^{\pm 6}$ from the Planck scale). Consistently, this can also be seen in terms of the classical and quantum entropies $S_{\Lambda}$ and $S_Q$ of Inflation: FORMULA FORMULA
| 895
|
1912.06655
| 17,527,705
| 2,019
| 12
| 13
| true
| true
| 2
|
UNITS, UNITS
|
It is well known that Einstein's theory of general relativity (GR) fails to give any concrete predictions around singularities due to the non-negligible quantum effects in the Planck regime. Various attempts have been made to construct a consistent theory describing such a regime in the past decades. Among these theories, loop quantum gravity (LQG) presents a picture of granular and discrete space-time at the Planck scale [CIT]. In this theory, it has been shown that the operators representing geometric observables (e.g., 2-surface area, 3-region volume, length of a curve and integrals of certain metric components) have discrete spectra [CIT]. Namely, gravity in LQG is quantized. Because of the quantum features of the underlying spacetime geometry, LQG proposes that singularities may not exist. This concept was first implemented precisely in the theory of loop quantum cosmology (LQC), which is constructed by applying the method of loop quantization to the homogeneous and isotropic cosmological model [CIT]. According to LQC, the classical big bang singularity is finally resolved by the quantum bounce scenario [CIT].
| 1,130
|
1912.07278
| 17,533,523
| 2,019
| 12
| 16
| false
| true
| 2
|
UNITS, UNITS
|
Thus, at the present stage, the result of our rigorous analysis extends the regime of calculability until times sufficiently late that the NLL contributions can potentially become large. For a $\lambda \phi^4$ theory, the LL+NLL contributions take the schematic form, FORMULA Our analysis is then guaranteed to be trustworthy in the limit $t \rightarrow \infty, \lambda \rightarrow 0$, with $\lambda t^2$ fixed, so that all the LL terms survive (and are resummed by Fokker-Planck evolution) but NLL $\rightarrow 0$. If however subleading log contributions do indeed resum to remain subleading, then Fokker-Planck evolution gives the leading nonperturbative behavior for correlators for large $t$ and finite small $\lambda$.
| 723
|
1912.09502
| 17,555,610
| 2,019
| 12
| 19
| false
| true
| 2
|
FOKKER, FOKKER
|
Motivated by this unique feature of Abelian gauge theories, in this paper we focus on chiral $U(1)$ gauge theories that are anomaly-free, but for which anomaly cancellation occurs due to fermions appearing at different scales. As illustrated in Figure REF, these theories, which represent partial UV completions of the anomalous EFTs that are the focus of [CIT], feature a variety of mass scales above the photon mass. When gravitational effects are decoupled, the most relevant scales are the masses $M_f$ of the heavy fermions responsible for anomaly-cancellation, as well as a possible cutoff $\Lambda_*$ of the anomaly-free theory. When gravitational effects are included, the four-dimensional Planck scale, and the quantum gravity scale $\Lambda_{\rm QG}$ (which may differ from $M_{Pl}$) also enter into the discussion.
| 825
|
1912.10054
| 17,562,286
| 2,019
| 12
| 20
| false
| true
| 1
|
UNITS
|
The results of analyzing the Planck background simulations are shown in Fig. REF for the constant luminosity model, and Fig. REF for the log-normal luminosity model. In the case of the constant luminosity model, we choose the same input parameters as used in the Gaussian simulations (§[5.2]). For the log-normal luminosity model, we choose a fiducial value of $\sigma_{\ln{s}} = 1$, and select the value of $\mu_{\ln{s}}$ that corresponds to our earlier choice of $s_d$ in the Gaussian simulations. As before, we set $f_{\rm{disk}} = 0.15$.
| 541
|
1912.10498
| 17,564,804
| 2,019
| 12
| 22
| true
| false
| 1
|
MISSION
|
To emulate the distribution of galaxy properties in a cosmological context we use the Universe Machine simulation. Universe Machine provides a galaxy catalog with galaxy stellar masses and star-formation histories extending from $z = 0$ to $z = 10$. We use the publicly available galaxy catalogs created using the Bolshoi--Planck simulation [CIT], which is a dissipationless CDM-only $N$-body simulation of a $250$ Mpc $h^{-1}$ volume with $2048^3$ particles in a Planck cosmology, $\Omega_m = 0.307$, $h = 0.7$.
| 513
|
2001.01025
| 17,594,957
| 2,020
| 1
| 4
| true
| false
| 2
|
MISSION, MISSION
|
In the limit where the light particle decouples instantaneously from the thermal bath at a temperature $T_0$, it will not receive the entropy released subsequently by annihilating species, so that the final effective number of degrees of freedom after neutrino decoupling is given by [CIT] FORMULA where $g_{LS}$ is the number of degrees of freedom of the light relativistic relic. Translating into an effective number of neutrinos and assuming $T_0 > T_{EW}$, we find the lower bounds of Ref. [CIT] on the effective number of neutrinos, FORMULA In particular, notice that we have kept $g_*(T = T_0)$ to emphasize that the bound derived in Ref. [CIT] holds only when there are no additional degrees of freedom in the theory beyond the SM ones. This assumption does not hold by definition in our effective theory approach, since we do expect new physics to occur around the scale $\Lambda$.[^19] This implies that CMB-S4 experiments will not necessarily rule out every thermally coupled relic, especially in the case of a particularly rich UV sector. Current limits from the Planck experiment [CIT] typically also exclude light relics decoupling below the QCD phase transition at around $100$ MeV.
| 1,179
|
2001.01490
| 17,597,320
| 2,020
| 1
| 6
| false
| true
| 1
|
MISSION
|
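The instantaneous-decoupling logic in the excerpt above (arXiv:2001.01490) is captured by the standard formula $\Delta N_{\rm eff} = (4/7)\, g_{LS}\, [43/(4 g_{*S}(T_0))]^{4/3}$, with the 7/8 fermion factor absorbed into $g_{LS}$; the $g_{*S}$ values below are standard-model counting, assumed here rather than taken from the excerpt.

```python
# Instantaneous-decoupling estimate of the Delta N_eff left by a light relic.
def delta_n_eff(g_ls, g_star_s_dec):
    """g_ls includes the 7/8 factor for fermionic degrees of freedom."""
    return (4 / 7) * g_ls * (43 / (4 * g_star_s_dec)) ** (4 / 3)

# One SM neutrino species decoupling at g_*S = 10.75 gives exactly 1:
print(delta_n_eff(7 / 4, 10.75))        # 1.0
# A real scalar (g_LS = 1) decoupling above the EW scale (g_*S = 106.75):
print(delta_n_eff(1.0, 106.75))         # ~0.027, the oft-quoted floor
```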
Both the Veltman and Pauli constraints are evaluated from loop diagrams, so the masses which appear there are really renormalization group (RG) scale dependent. Boson and fermion contributions enter with different signs and evolve differently under RG evolution, which means they have a chance to cross zero deep in the ultraviolet. With the particle masses and couplings measured at the LHC, the Standard Model works as a consistent theory up to the Planck scale. One finds that the electroweak vacuum sits very close to the border of stable and metastable, suggesting possible new critical phenomena in the ultraviolet: it is within 1.3 standard deviations of being stable, on relating the top quark Monte-Carlo and pole masses, if we take just the Standard Model with no coupling to undiscovered new particles [CIT]. The question of vacuum stability depends on whether the Higgs self-coupling crosses zero or not deep in the ultraviolet and involves a delicate balance of Standard Model parameters. The Higgs and other particle masses might be determined by physics close to the Planck scale.
| 1,084
|
2001.01706
| 17,598,044
| 2,020
| 1
| 6
| true
| true
| 2
|
UNITS, UNITS
|
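A toy version of the running discussed in the excerpt above (arXiv:2001.01706): one-loop SM beta functions integrated from $M_t$ to the Planck scale show the Higgs quartic crossing zero deep in the ultraviolet. The boundary values are representative $\overline{\rm MS}$ numbers (assumptions), and the borderline stable/metastable verdict the excerpt describes actually requires higher-loop running and careful mass matching, which this sketch omits.

```python
# Toy one-loop SM running from M_t to the Planck scale, illustrating how the
# Higgs quartic lambda can cross zero deep in the UV.
import numpy as np
from scipy.integrate import solve_ivp

def betas(t, y):                      # t = ln(mu / M_t)
    gp, g, g3, yt, lam = y            # U(1)_Y, SU(2), SU(3), top Yukawa, quartic
    k = 1 / (16 * np.pi**2)
    return [
        k * (41 / 6) * gp**3,
        -k * (19 / 6) * g**3,
        -k * 7 * g3**3,
        k * yt * (4.5 * yt**2 - 8 * g3**2 - 2.25 * g**2 - (17 / 12) * gp**2),
        k * (24 * lam**2 - 6 * yt**4
             + 0.375 * (2 * g**4 + (g**2 + gp**2)**2)
             + lam * (12 * yt**2 - 9 * g**2 - 3 * gp**2)),
    ]

y0 = [0.358, 0.648, 1.167, 0.94, 0.126]   # representative values at M_t (assumed)
t_pl = np.log(1.22e19 / 173.0)
sol = solve_ivp(betas, (0, t_pl), y0, rtol=1e-8)

mu, lam = 173.0 * np.exp(sol.t), sol.y[4]
if (lam < 0).any():
    # One-loop crossing scale; higher-loop corrections push this up toward ~1e10 GeV.
    print(f"lambda crosses zero near mu ~ {mu[np.argmax(lam < 0)]:.2e} GeV")
else:
    print("lambda stays positive up to the Planck scale")
```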
The most striking feature found in the Planck 2018 angle distributions is a strong $2\psi$ mode, whose amplitude is expected to be determined by the value of $\mu_U$, which in turn is determined by the $B$-mode power spectrum. Note that: FORMULA Therefore the dominant contribution to the variance $\mathrm{var}\qty(\mu_{U})$ comes from the low-$\ell$ terms of the power spectrum $C_\ell^{BB}$. However, in the actual Planck 2018 data, there is a strong discrepancy between the values of $\mu_U$ in the 2018 Planck maps and the low-$\ell$ terms of the $B$-mode power spectrum. We note that this discrepancy appears for the first time in the 2018 data release. In the 2015 CMB maps, the $Q$ and $U$ monopoles are much smaller and appear to have been set effectively to zero by hand; e.g. SMICA [CIT] has $\mu_Q = 4 \times 10^{-4}\,\mu\mathrm{K}$ and $\mu_{U} = 7 \times 10^{-5}\,\mu\mathrm{K}$. Moreover, we find that $\qty|\mu_Q| \sim \qty|\mu_U|$ for all 2015 Planck CMB products, whereas the power spectra predict that these quantities should differ by an order of magnitude or so.
| 1,086
|
2001.01757
| 17,599,019
| 2,020
| 1
| 6
| true
| false
| 4
|
MISSION, MISSION, MISSION, MISSION
|
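Measuring the $Q$ and $U$ monopoles $\mu_Q$ and $\mu_U$ discussed in the excerpt above (arXiv:2001.01757) is a one-liner per map once the IQU fields are loaded; the product file name below is hypothetical, and a realistic comparison would apply the common polarization mask before averaging.

```python
# Sketch: the Q and U monopoles of a released CMB map, i.e. the quantities
# mu_Q and mu_U discussed above. Fields (1, 2) are the Q and U columns of
# an IQU HEALPix map; the file name is hypothetical.
import healpy as hp
import numpy as np

q_map, u_map = hp.read_map("COM_CMB_IQU-smica_2048_R3.00_full.fits",
                           field=(1, 2))
mu_q, mu_u = np.mean(q_map), np.mean(u_map)
print(f"mu_Q = {mu_q:.3e} K, mu_U = {mu_u:.3e} K")
```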
Cosmological models describing the non-gravitational interaction between dark matter and dark energy are based on phenomenological choices of the interaction rates between dark matter and dark energy. There is no guiding rule for selecting such interaction rates. {\it In the present work we show that various phenomenological models of the interaction rates might have a strong field-theoretical ground.} We explicitly derive several well known interaction functions between dark matter and dark energy under some special conditions and finally constrain them using the latest cosmic microwave background observations from the final Planck legacy release together with baryon acoustic oscillation distance measurements. Our analyses report that one of the interaction functions is able to alleviate the $H_0$ tension. We also perform a Bayesian evidence analysis for all the models with reference to the $\Lambda$CDM model. From the Bayesian evidence analysis we find that, although the reference scenario is preferred over the interacting scenarios, two interacting models are close to the reference $\Lambda$CDM model.
| 1,137
|
2001.03120
| 17,606,948
| 2,020
| 1
| 9
| true
| false
| 1
|
MISSION
|
PSR J1658$+$3630 was also observed with five of the international LOFAR stations in Germany, namely the stations in Unterweilenbach (telescope identifier DE602), Tautenburg (DE603), Bornim (DE604), Jülich (DE605) and Norderstedt (DE609), operated by the German LOng Wavelength consortium. The observing strategy and the individual stations involved are presented in Table REF. The observations were conducted at a central frequency of 153.8 MHz and with a bandwidth of 71.5 MHz across 366 sub-bands. The data from the stations in Unterweilenbach, Tautenburg and Jülich were recorded on machines at the Max-Planck-Institut für Radioastronomie in Bonn, while the data from the stations in Bornim and Norderstedt were recorded on machines at the Jülich Supercomputing Centre. They were recorded using the LOFAR und MPIfR Pulsare (LuMP4) software[^4] as channelized complex voltages and then coherently dedispersed to the DM of the pulsar and folded using the best ephemeris of the pulsar available in 2017 July, producing *archive* files with ten-second sub-integrations and 1024 phase bins. A summary of the different observing strategies on PSR J1658$+$3630 is shown in Table REF.
| 1,167
|
2001.03866
| 17,614,714
| 2,020
| 1
| 12
| true
| false
| 1
|
MPS
|
The authors would like to thank the referee for improving the presentation of the paper. ATJ and KTI are supported by JSPS KAKENHI Grant Number JP17H02868. MO is supported by JSPS KAKENHI Grant Number JP15H05892 and JP18K03693. IK is supported by JSPS KAKENHI Grant Number JP15H05896. SHS thanks the Max Planck Society for support through the Max Planck Research Group. J. H. H. C. acknowledges support from the Swiss National Science Foundation (SNSF). This work was supported in part by World Premier International Research centre Initiative (WPI Initiative), MEXT, Japan. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
| 1,415
|
2002.01611
| 17,683,558
| 2,020
| 2
| 5
| true
| false
| 2
|
MPS, MPS
|
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.\ SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
| 1,678
|
2002.05724
| 17,713,618
| 2,020
| 2
| 13
| true
| false
| 3
|
MPS, MPS, MPS
|
We revisit the observational constraints on spatial curvature following recent claims that the Planck data favour a closed Universe. We use a new and statistically powerful Planck likelihood to show that the Planck temperature and polarization spectra are consistent with a spatially flat Universe, though because of a geometrical degeneracy cosmic microwave background spectra on their own do not lead to tight constraints on the curvature density parameter $\Omega_K$. When combined with other astrophysical data, particularly geometrical measurements of baryon acoustic oscillations, the Universe is constrained to be spatially flat to extremely high precision, with $\Omega_K = 0.0004 \pm 0.0018$ in agreement with the 2018 results of the Planck team. In the context of inflationary cosmology, the observations offer strong support for models of inflation with a large number of e-foldings and disfavour models of incomplete inflation.
| 933
|
2002.06892
| 17,720,789
| 2,020
| 2
| 17
| true
| true
| 4
|
MISSION, MISSION, MISSION, MISSION
|
Another possibility is that the tendency for *Planck* power spectra to favour closed Universes is caused by systematic errors in the *Planck* likelihoods and/or *Planck* data. As discussed above, it is certainly true that different likelihood implementations lead to different results, with the `Plik` likelihood favouring closed Universes more strongly than our own `CamSpec` likelihood. We have discussed the construction of the `CamSpec` likelihood in great detail in EG and have argued that our methodology is robust and gives reasonable $\chi^2$ values for the polarization spectra, unlike `Plik` [see [CIT] for further details]. However, for readers interested in spatial curvature, whether `Plik` or `CamSpec` is the more reliable likelihood is irrelevant because *differences between Planck likelihoods are overwhelmed when Planck data are combined with BAO*. This is why the estimates of Eqs. (REF) and (REF) agree so precisely.
| 939
|
2002.06892
| 17,721,514
| 2,020
| 2
| 17
| true
| true
| 2
|
MISSION, MISSION
|
Weakly coupled light particles such as ALPs also have a profound impact on the cosmological evolution of our universe, in particular on the abundance of light elements produced during Big Bang Nucleosynthesis (BBN) [CIT]. The resulting limits on the parameter space of ALPs are complementary to searches in the laboratory and provide valuable additional information regarding the validity of a given point in parameter space. In the particle physics community, however, cosmological bounds on a given model are often perceived as 'soft' in the sense that altering the cosmological history may well weaken or even fully invalidate these bounds. To rectify this perception, the main objective of this article is to evaluate the robustness of cosmological constraints on ALPs in the keV-GeV region, allowing for additional effects which may weaken the bounds of the standard scenario. Here we mainly concentrate on effects which 'factorise' from the ALP sector in order to leave the ALP physics unchanged. Specifically we allow for an arbitrary additional relativistic component in the early universe, contributing to $N_\text{eff}$, as well as an arbitrary chemical potential of SM neutrinos. We also consider different reheating temperatures $T_\mathrm{R}$ which directly impact the initial ALP abundance. Employing the latest determinations of the primordial helium and deuterium abundances [CIT] as well as information from the Planck mission [CIT] we find that while bounds can indeed be weakened, very relevant robust constraints remain.
| 1,540
|
2002.08370
| 17,732,626
| 2,020
| 2
| 19
| true
| true
| 1
|
MISSION
|
This section reviews the model equations used to describe particle kinetic physics, i.e., the dynamics of charged particles in configuration and momentum space under the effect of electromagnetic forces. The section is divided as follows: in Sect. [1.1] we describe the Vlasov--Maxwell system of equations, then in Sect. [1.2] we discuss the numerical methods developed to follow the dynamics of such a system. Section [1.3] discusses the particle-in-cell (PIC) technique used to study solutions of the Vlasov--Maxwell system. In Sect. [1.4] we provide a comparison between the PIC and Vlasov approaches. Section [1.5] briefly describes hybrid methods, where a fluid approximation is introduced for the electronic component whereas kinetic (PIC) techniques are used to describe the ions. In Sect. [1.6] we specifically discuss the Fokker--Planck description of kinetic problems. The Fokker--Planck approach is particularly well adapted to investigating cosmic ray propagation. Finally, we give particular focus to Fokker--Planck simulations developed in the context of the study of radiative transfer in hot plasmas around compact objects.
| 1,157
|
2002.09411
| 17,740,463
| 2,020
| 2
| 21
| true
| false
| 3
|
FOKKER, FOKKER, FOKKER
|
Planck (http://www.esa.int/Planck) is a project of the European Space Agency (ESA), with contributions from NASA (USA) and telescope reflectors provided by a collaboration between ESA and a scientific consortium led and funded by Denmark. The Planck data used here are the Planck HFI products from Public Data Release 3 (2018) [CIT]. The data come from the study of the polarized thermal emission from Galactic dust, using the High Frequency Instrument at 353 GHz with an angular resolution of 5$'$.
| 488
|
2002.09948
| 17,744,033
| 2,020
| 2
| 23
| true
| false
| 4
|
MISSION, MISSION, MISSION, MISSION
|
We are thus led to a new idea: Higgs bosons are composite bound states of standard model fermion pairs, driven by threshold black holes at $M_P$ with the corresponding quantum numbers. The black holes of the far UV are quantum mechanical, mini black holes that are dressed by fermion loops to acquire lower-energy (multi-TeV scale) masses. There are many bound-state Higgs bosons, at least one per fermion pair at $M_{Planck}$, and a rich spectroscopy of Higgs bosons is expected to emerge. This theory dynamically unifies Planck scale physics with the electroweak and multi-TeV scales. By studying Higgs physics at the LHC one may have a window on the threshold spectrum of black holes at the Planck scale.
| 706
|
2002.11547
| 17,755,765
| 2,020
| 2
| 26
| false
| true
| 3
|
UNITS, UNITS, UNITS
|
We assume that the Hubble parameter during inflation, $H_{\rm inf}$, is approximately constant in time, and derive the BD distribution for the axion field, $\phi$. First, let us separate it into long- and short-wavelength modes, $\phi=\bar{\phi}+\delta\phi_{\rm{short}}$. The axion dynamics under the effect of short-wavelength fluctuations is described by the Langevin equation, FORMULA where $V(\phi)$ is a periodic potential of $\phi$ and the dot and prime represent derivatives with respect to the cosmic time $t$ and the axion field $\phi$, respectively. We assume that $V(\phi)$ is negligibly small compared to the total energy density of the universe, and that it satisfies $|V''(\phi)|\ll H^2_{\rm{inf}}$ for all values of $\phi$, so that the stochastic formalism is applicable. The information about the short-wavelength modes $\delta\phi_{\rm{short}}$ is included in the Gaussian noise term, $f(\bm{x},t)$, satisfying FORMULA where $\langle \cdots \rangle$ represents the stochastic average. The corresponding Fokker-Planck equation is given by FORMULA where $\mathcal{P}(\phi, t)$ denotes the probability distribution for the coarse-grained field $\phi$.
| 1,159
|
2002.12195
| 17,760,266
| 2,020
| 2
| 27
| true
| true
| 1
|
FOKKER
|
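An Euler--Maruyama sketch of the Langevin dynamics described in the excerpt above (arXiv:2002.12195), in the overdamped slow-roll regime with the standard stochastic-inflation noise amplitude $H^{3/2}/2\pi$ and a periodic potential $V = \Lambda^4(1-\cos\phi/f_a)$; all parameter values are illustrative and chosen so that $|V''| \ll H_{\rm inf}^2$.

```python
# Euler-Maruyama integration of the slow-roll Langevin equation,
#   dphi = -V'(phi)/(3H) dt + (H^(3/2)/2pi) dW,
# for V = Lambda^4 (1 - cos(phi/f_a)). Units: H_inf = 1.
import numpy as np

rng = np.random.default_rng(0)
H = 1.0
f_a = 1.0                                # axion decay constant (illustrative)
lam4 = 0.1                               # Lambda^4, keeps |V''| << H^2
n_traj, n_steps, dt = 2000, 4000, 0.01   # dt in units of 1/H

def v_prime(phi):
    return (lam4 / f_a) * np.sin(phi / f_a)

phi = np.zeros(n_traj)                   # start all trajectories at the minimum
for _ in range(n_steps):
    noise = rng.standard_normal(n_traj) * np.sqrt(dt)
    phi += -v_prime(phi) / (3 * H) * dt + (H**1.5 / (2 * np.pi)) * noise

# The late-time histogram should approach the equilibrium solution of the
# Fokker-Planck equation, P(phi) ~ exp(-8 pi^2 V(phi) / (3 H^4)).
hist, _ = np.histogram(np.mod(phi + np.pi * f_a, 2 * np.pi * f_a), bins=50)
print(hist[:10])
```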
We would like to thank the anonymous reviewer for suggestions and comments that have helped to improve this paper. AC would like to acknowledge DST for providing INSPIRE fellowship. AC would like to thank Catherine Hale for providing the SKADS bias prescription and for many helpful suggestions. AC would like to thank Matt Jarvis for providing SKADS catalogue over private communication and also for helpful suggestions. AC would like to thank Akriti Sinha for pointing us towards BOSS catalogue for the first time. NR acknowledges support from the Max Planck Society through the Max Planck India Partner Group grant. AD would like to acknowledge the support of EMR-II under CSIR No. 03(1461)/19.
| 697
|
2002.12383
| 17,762,652
| 2,020
| 2
| 27
| true
| false
| 2
|
MPS, MPS
|
Different models have been considered so far in order to embed inflation into an axion framework. One possibility consists of considering the dynamics of the PQ complex field during inflation [CIT] and identifying the inflaton field with the radial mode $\varrho_a$ of the PQ complex field. At sufficiently high temperatures, the potential of $\varrho_a$ in Eq. (REF) can be approximated by a quartic potential. However, such a form of the inflaton potential has been excluded to a high level of confidence by the measurements of the CMB spectra by the Planck mission [CIT]. For this reason, Ref. [CIT] considered a non-minimal coupling to gravity so that the potential at large values of $\varrho_a$ is flattened out (see e.g. Ref. [CIT]) and the model is reconciled with observations. This model also circumvents the problem that, for relatively high values of the Hubble rate during inflation $H_I$, axion isocurvature fluctuations during inflation are too large with respect to what is allowed by measurements [CIT]. The reason is that the radial field has not yet relaxed to its minimum value during inflation and evolves in the regime $\varrho_a \gg f_a$, thus suppressing isocurvature fluctuations.[^1]
| 1,276
|
2003.01100
| 17,772,801
| 2,020
| 3
| 2
| true
| true
| 1
|
MISSION
|
This paper presents the initial catalog of photometric redshifts as measured via new optical and near-IR imaging data for SuperCLASS obtained from Subaru and *Spitzer*. With these data, we present an initial analysis of radio-detected galaxies in the field and the distribution of galaxies in and around the five $z\sim0.2$ Abell galaxy clusters. We outline the individual data sets compiled for the survey in Section [2] and describe the methods used to measure photometric redshifts in Section [3]. The redshift distribution of radio sources and the distribution of sources surrounding the field's galaxy clusters are presented in Section [5]. Section [6] summarizes our results. We assume a Planck $\Lambda$CDM cosmology with $\Omega_m=0.307$ and $H_0=67.7$ km s$^{-1}$ Mpc$^{-1}$ [CIT].
| 803
|
2003.01735
| 17,777,606
| 2,020
| 3
| 3
| true
| false
| 1
|
MISSION
|