Calculating the abundance of thermally produced dark matter particles has become a standard procedure, with sophisticated methods guaranteeing a precision that matches the percent-level accuracy in the observational determination of the dark matter density. Here, we point out that one of the main assumptions in the commonly adopted formalism, namely local thermal equilibrium during the freeze-out of annihilating dark matter particles, does not have to be satisfied in general. We present two methods for dealing with such situations, in which the kinetic decoupling of dark matter happens so early that it interferes with the chemical decoupling process: i) an approximate treatment in terms of a coupled system of differential equations for the leading momentum moments of the dark matter distribution, and ii) a full numerical solution of the Boltzmann equation in phase space. For illustration, we apply these methods to the case of Scalar Singlet dark matter. We explicitly show that even in this simple model, which has been extensively discussed in the literature, the prediction for the dark matter abundance can be affected by up to one order of magnitude.
Presently a ${>}3\sigma$ tension exists between values of the Hubble constant $H_0$ derived from analysis of fluctuations in the Cosmic Microwave Background by Planck, and local measurements of the expansion using calibrators of type Ia supernovae (SNe Ia). We perform a blinded reanalysis of Riess et al. 2011 to measure $H_0$ from low-redshift SNe Ia, calibrated by Cepheid variables and geometric distances, including that to NGC 4258. This paper is a demonstration of techniques to be applied to the Riess et al. 2016 data. Our end-to-end analysis starts from available CfA3 and LOSS photometry, providing an independent validation of Riess et al. 2011. We obscure the value of $H_0$ throughout our analysis and the first stage of the referee process, because calibration of SNe Ia requires a series of often subtle choices, and the potential for results to be affected by human bias is significant. Our analysis departs from that of Riess et al. 2011 by incorporating the covariance matrix method adopted in SNLS and JLA to quantify SN Ia systematics, and by including a simultaneous fit of all SN Ia and Cepheid data. We find $H_0 = 72.5 \pm 3.1$ (stat) $\pm 0.77$ (sys) km s$^{-1}$ Mpc$^{-1}$ with a three-galaxy (NGC 4258+LMC+MW) anchor. The relative uncertainties are 4.3% statistical, 1.1% systematic, and 4.4% total, larger than in Riess et al. 2011 (3.3% total) and the Efstathiou 2014 reanalysis (3.4% total). Our error budget for $H_0$ is dominated by statistical errors due to the small size of the supernova sample, whilst the systematic contribution is dominated by variation in the Cepheid fits, and for the SNe Ia, uncertainties in the host galaxy mass dependence and Malmquist bias.
Cosmological parameter constraints from observations of time-delay lenses are becoming increasingly precise. However, there may be significant bias and scatter in these measurements due to, among other things, the so-called mass-sheet degeneracy. To estimate these uncertainties, we analyze strong lenses from the largest EAGLE hydrodynamical simulation. We apply a mass-sheet transformation to the radial density profiles of lenses, and by selecting lenses near isothermality, we find that the bias on $H_0$ can be reduced to 5% with an intrinsic scatter of 10%, confirming previous results obtained with a different simulation data set. We further investigate whether combining lensing observables with kinematic constraints helps to minimize this bias. We do not detect any significant dependence of the bias on lens model parameters or observational properties of the galaxy, but depending on the source--lens configuration, a bias may still exist. Cross lenses provide an accurate estimate of the Hubble constant, while fold (double) lenses tend to be biased low (high). With kinematic constraints, double lenses show bias and intrinsic scatter of 6% and 10%, respectively, while quad lenses show bias and intrinsic scatter of 0.5% and 10%, respectively. For lenses with a reduced $\chi^2 > 1$, a power-law dependence of the $\chi^2$ on the lens environment (number of nearby galaxies) is seen. Lastly, we model, in greater detail, the cases of two double lenses that are significantly biased. We are able to remove the bias, suggesting that the remaining biases could also be reduced by carefully taking into account additional sources of systematic uncertainty.
We present an overview of scenarios where the observed Dark Matter (DM) abundance consists of Feebly Interacting Massive Particles (FIMPs), produced non-thermally by the so-called freeze-in mechanism. In contrast to the usual freeze-out scenario, frozen-in FIMP DM interacts very weakly with the particles in the visible sector and never attained thermal equilibrium with the baryon-photon fluid in the early Universe. Instead of being determined by its annihilation strength, the DM abundance depends on the decay and annihilation strengths of particles in equilibrium with the baryon-photon fluid, as well as couplings in the DM sector. This makes frozen-in DM very difficult but not impossible to test. In this review, we present the freeze-in mechanism and its variations considered in the literature (dark freeze-out and reannihilation), compare them to the standard DM freeze-out scenario, discuss several aspects of model building, and pay particular attention to observational properties and general testability of such feebly interacting DM.
We measure HI rotation curves for simulated galaxies from the APOSTLE suite of $\Lambda$CDM cosmological hydrodynamical simulations with velocities in the range $60 < V_{\rm max}/{\rm km}\,{\rm s}^{-1} < 120$. These galaxies compare well with those in surveys of quiescent discs such as THINGS and LITTLE THINGS in terms of the mass, size and kinematics of their HI discs, as well as the local and global asymmetry of their HI velocity fields. We construct synthetic 'observations', akin to interferometric HI measurements of nearby galaxies, and apply a conventional tilted-ring modelling procedure. The modelling generally results in a large diversity of rotation curves for each individual galaxy, depending on the orientation of the chosen line of sight. These variations arise from non-circular motions in the gas and, in particular, from strong bisymmetric ($m=2$) fluctuations in the azimuthal gas velocity field which the tilted-ring model is ill-suited to account for. Such perturbations are often difficult to detect in model residuals. Still, we show that they are clearly present in DDO 47 and DDO 87, two galaxies with slowly-rising rotation curves in apparent conflict with $\Lambda$CDM predictions. Rotation curves derived using modelling procedures unable to account for non-circular motions are likely to underestimate, sometimes significantly, the circular velocity in the inner regions and risk being misinterpreted as evidence for possibly nonexistent cores in the dark matter distribution. The extent to which these findings affect observed galaxies with an apparent 'core' should be investigated in detail before such cores may be used as dependable evidence against the simplest predictions of the $\Lambda$CDM paradigm.
We present a complete set of exact and fully non-linear equations describing all three types of cosmological perturbations -- scalar, vector and tensor perturbations. Together with a choice out of several gauge conditions, the equations are completely free from the gauge degrees of freedom, and the variables are equivalently gauge-invariant to all perturbation orders. The equations are completely general without any physical restriction except that we assume a flat Friedmann universe. We also comment on the application of our formulation to the non-expanding Minkowski background.
We investigate Euclidean complexified wormholes in de Sitter space. It is known that for a suitable choice of parameters, the probability for the occurrence of the wormhole configuration can be higher than that for the compact instanton, and hence the Hartle-Hawking wave function will be dominated by Euclidean wormholes. Such a wormhole configuration can be interpreted as the creation of two classical universes from nothing. In order to classicalize both universes and to connect at least one with inflation, it is necessary that the inflaton potential be specified. We numerically investigate the situation and find that only one end of the wormhole can be classicalized for a convex inflaton potential such as the chaotic inflation model with $\phi^{2}$ potential, while both ends can be classicalized for a concave potential such as the Starobinsky-type $R^{2}$ inflation model. Therefore, if (1) the boundary condition of our universe is determined by the Hartle-Hawking wave function, (2) Euclidean wormholes dominate the contribution to the path integral, and (3) the classicality condition must be satisfied at both ends, then we conclude that it is more probable that our universe began with a concave rather than a convex inflaton potential, which may explain the Planck Mission data.
In this paper we use high-resolution cosmological simulations to study halo intrinsic alignment and its dependence on mass, formation time and large-scale environment. In agreement with previous studies using N-body simulations, we find that massive halos have stronger alignment. At a given mass, older halos have stronger alignment than younger ones. By identifying the cosmic environment of each halo using the Hessian matrix, we find that, at a given mass, halos in cluster regions also have stronger alignment than those in filaments. Existing theory has not addressed these dependencies explicitly. In this work we extend the linear alignment model to include halo bias, and find that the dependence of halo alignment on mass and formation time can be explained by halo bias. However, the model cannot account for the environment dependence, as halo bias is found to be lower in clusters and higher in filaments. Our results suggest that halo bias and environment are independent factors in determining halo alignment. We also study the halo alignment correlation function and find that halos are strongly clustered along their major axes and less clustered along their minor axes. The correlated halo alignment can extend to scales as large as $100h^{-1}$Mpc, where its features are mainly driven by the baryon acoustic oscillation effect.
Exploiting the powerful tool of strong gravitational lensing by galaxy clusters to study the highest-redshift Universe and cluster mass distributions relies on precise lens mass modelling. In this work, we present the first attempt at modelling the line-of-sight mass distribution in addition to that of the cluster, extending previous modelling techniques that assume mass distributions to be on a single lens plane. We focus on the Hubble Frontier Field cluster MACS J0416.1-2403, and our multi-plane model reproduces the observed image positions with an rms offset of ~0.53". Starting from this best-fitting model, we simulate a mock cluster that resembles MACS J0416.1-2403 in order to explore the effects of line-of-sight structures on cluster mass modelling. By systematically analysing the mock cluster under different model assumptions, we find that neglecting the lensing environment has a significant impact on the reconstruction of image positions (rms ~0.3"); accounting for line-of-sight galaxies as if they were at the cluster redshift can partially reduce this offset. Moreover, foreground galaxies are more important to include in the model than the background ones. While the magnification factors of the lensed multiple images are recovered within ~10% for ~95% of them, those ~5% that lie near critical curves can be significantly affected by the exclusion of the lensing environment in the models (up to a factor of ~200). In addition, line-of-sight galaxies cannot explain the apparent discrepancy in the properties of massive subhalos between MACS J0416.1-2403 and N-body simulated clusters. Since our model of MACS J0416.1-2403 with line-of-sight galaxies only modestly reduced the rms offset in the image positions, we conclude that additional complexities, such as more flexible halo shapes, would be needed in future models of MACS J0416.1-2403.
We investigate the role of thermal velocities in N-body simulations of structure formation in warm dark matter models. Starting from the commonly used approach of adding thermal velocities, randomly selected from a Fermi-Dirac distribution, to the gravitationally-induced (peculiar) velocities of the simulation particles, we compare the matter and velocity power spectra measured from CDM and WDM simulations with and without thermal velocities. This prescription for adding thermal velocities results in deviations in the velocity field in the initial conditions away from the linear theory predictions, which affects the evolution of structure at later times. We show that this is entirely due to numerical noise. For a warm dark matter candidate with mass $3.3$ keV, the matter and velocity power spectra measured from simulations with thermal velocities starting at $z=199$ deviate from the linear prediction at $k \gtrsim10$ $h/$Mpc, with an enhancement of the matter power spectrum $\sim \mathcal{O}(10)$ and of the velocity power spectrum $\sim \mathcal{O}(10^2)$ at wavenumbers $k \sim 64$ $h/$Mpc with respect to the case without thermal velocities. At late times, these effects tend to be less pronounced. Indeed, at $z=0$ the deviations do not exceed $6\%$ (in the velocity spectrum) and $1\%$ (in the matter spectrum) for scales $10 <k< 64$ $h/$Mpc. Increasing the resolution of the N-body simulations shifts these deviations to higher wavenumbers. The noise introduces more spurious structures in WDM simulations with thermal velocities and modifies the radial density profiles of dark matter haloes. We find that spurious haloes start to appear in simulations which include thermal velocities at a mass that is $\sim$3 times larger than in simulations without thermal velocities.
The angular positions of quasars are deflected by the gravitational lensing effect of foreground matter. The Lyman-alpha forest seen in the spectra of these quasars is therefore also lensed. We propose that the signature of weak gravitational lensing of the forest could be measured using techniques similar to those that have been applied to the lensed Cosmic Microwave Background, and which have also been proposed for application to spectral data from 21cm radio telescopes. As with 21cm data, the forest has the advantage of spectral information, potentially yielding many lensed "slices" at different redshifts. We perform an illustrative idealized test, generating a high resolution angular grid of quasars (of order arcminute separation), and lensing the Lyman-alpha forest spectra at redshifts z=2-3 using a foreground density field. We find that standard quadratic estimators can be used to reconstruct images of the foreground mass distribution at z~1. There currently exists a wealth of Lya forest data from quasar and galaxy spectral surveys, with smaller sightline separations expected in the future. Lyman-alpha forest lensing is sensitive to the foreground mass distribution at redshifts intermediate between CMB lensing and galaxy shear, and avoids the difficulties of shape measurement associated with the latter. With further refinement and application of mass reconstruction techniques, weak gravitational lensing of the high redshift Lya forest may become a useful new cosmological probe.
We study the impact of thermal inflation on the formation of cosmological structures and present astrophysical observables which can be used to constrain and possibly probe the thermal inflation scenario. These are dark matter halo abundance at high redshifts, satellite galaxy abundance in the Milky Way, and fluctuations in the 21-cm radiation background before the epoch of reionization. The thermal inflation scenario leaves a characteristic signature on the matter power spectrum by boosting the amplitude at a specific wavenumber determined by the number of e-foldings during thermal inflation ($N_{\rm bc}$), and strongly suppressing the amplitude for modes at smaller scales. For a reasonable range of parameter space, one of the consequences is the suppression of minihalo formation at high redshifts and that of satellite galaxies in the Milky Way. While this effect is substantial, it is degenerate with other cosmological or astrophysical effects. The power spectrum of the 21-cm background probes this impact more directly, and its observation may be the best way to constrain the thermal inflation scenario due to the characteristic signature in the power spectrum. The Square Kilometre Array (SKA) in phase 1 (SKA1) has sufficient sensitivity to achieve this goal for models with $N_{\rm bc}\gtrsim 26$ if a 10000-hr observation is performed. The final phase SKA, with anticipated sensitivity about an order of magnitude higher, seems more promising and will cover a wider parameter space.
We investigate an Effective Field Theory (EFT) framework that can consistently describe the physics from inflation through to Large Scale Structure (LSS). With the development of the construction algorithm of EFT, we arrive at a properly truncated action for the entire scenario. Using this, we compute the two-point correlation function for quantum fluctuations from Goldstone modes and related inflationary observables in terms of coefficients of relevant EFT operators, which we constrain using Planck 2015 data. We then carry forward this primordial power spectrum with the same set of EFT parameters to explain the linear and non-linear regimes of LSS by loop-calculations of the matter overdensity two-point function. For comparative analysis, we make use of two widely accepted transfer functions, namely, BBKS and Eisenstein-Hu, thereby making the analysis robust. We finally corroborate our results with LSS data from SDSS-DR7 and WiggleZ. The analysis thus results in a consistent, model-independent EFT framework connecting inflation to structure formation.
All estimates of cluster mass have some intrinsic scatter, and perhaps some bias, with respect to the true mass, even in the absence of measurement errors, caused by, e.g., cluster triaxiality and large-scale structure. Knowledge of the bias and scatter values is fundamental for both cluster cosmology and astrophysics. In this paper we show that the intrinsic scatter of a mass proxy can be constrained by measurements of the gas fraction, because mass estimates with larger intrinsic scatter with respect to the true mass produce more scattered gas fractions. Moreover, the relative bias of two mass estimates can be constrained by comparing the mean gas fraction at the same (nominal) cluster mass. Our observational study addresses the scatter between caustic (i.e. dynamically estimated) and true masses, and the relative bias of caustic and hydrostatic masses. For these purposes, we use the X-ray Unbiased Cluster Sample, a cluster sample selected independently of the intracluster medium content with reliable masses: 34 galaxy clusters in the nearby ($0.050<z<0.135$) Universe, mostly with $14<\log M_{500}/M_\odot \lesssim 14.5$, and with caustic masses. We found a 35\% scatter between caustic and true masses. Furthermore, we found that the relative bias between caustic and hydrostatic masses is small, $0.06\pm0.05$ dex, improving upon past measurements. The small scatter found confirms our previous measurements of a quite variable amount of feedback from cluster to cluster, which is the cause of the observed large variety of core-excised X-ray luminosities and gas masses.
We use a cluster sample selected independently of the intracluster medium content with reliable masses to measure the mean gas mass fraction, its scatter, the biases of the X-ray selection on gas mass fraction and covariance between X-ray luminosity and gas mass. The sample is formed by 34 galaxy clusters in the nearby ($0.050<z<0.135$) Universe, mostly with $14<\log M_{500}/M_\odot \lesssim 14.5$, and with masses calculated with the caustic technique. First, we found that integrated gas density profiles have similar shapes, extending earlier results based on sub-populations of clusters such as relaxed or X-ray bright for their mass. Second, the X-ray unbiased selection of our sample allows us to unveil a variegated population of clusters: the gas mass fraction shows a scatter of $0.17\pm0.04$ dex, possibly indicating a quite variable amount of feedback from cluster to cluster, larger than found in previous samples targeting sub-populations of galaxy clusters, such as relaxed or X-ray bright. The similarity of the gas density profiles induces an almost scatter-less relation between X-ray luminosity, gas mass and halo mass, and modulates selection effects on the halo gas mass fraction: gas-rich clusters are preferentially included in X-ray selected samples. The almost scatter-less relation also fixes the relative scatters and slopes of the $L_X-M$ and $M_{gas}-M$ relations and makes core-excised X-ray luminosities and gas masses fully covariant. Therefore, cosmological or astrophysical studies involving X-ray or SZ selected samples need to account for both selection effects and covariance of the studied quantities with X-ray luminosity/SZ strength.
We obtain the non-linear generalization of the Sachs-Wolfe + integrated Sachs-Wolfe (ISW) formula describing the CMB temperature anisotropies. Our formula is valid at all orders in perturbation theory, is also valid in all gauges and includes scalar, vector and tensor modes. A direct consequence of our results is that the maps of the logarithmic temperature anisotropies are much cleaner than the usual CMB maps, because they automatically remove many secondary anisotropies. This can, for instance, facilitate the search for primordial non-Gaussianity in future works. It also disentangles the non-linear ISW from other effects. Finally, we provide a method which can iteratively be used to obtain the lensing solution at the desired order.
In this work, we use gas mass fraction samples of galaxy clusters obtained from their X-ray surface brightness observations jointly with the most recent $H(z)$ data to impose limits on cosmic opacity. The analyses are performed in a flat $\Lambda$CDM framework and the results are consistent with a transparent universe within $1\sigma$ c.l.; however, they do not rule out $\epsilon \neq 0$ with high statistical significance. Furthermore, we show that the current limits on the matter density parameter obtained from the X-ray gas mass fraction test are strongly dependent on the cosmic transparency assumption.
We derive the stochastic description of a massless, interacting scalar field in de Sitter space directly from the quantum theory. This is done by showing that the density matrix for the effective theory of the long wavelength fluctuations of the field obeys a quantum version of the Fokker-Planck equation. This equation has a simple connection with the standard Fokker-Planck equation of the classical stochastic theory, which can be generalised to any order in perturbation theory. We illustrate this formalism in detail for the theory of a massless scalar field with a quartic interaction.
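The classical stochastic theory that the quantum Fokker-Planck equation connects to can be checked numerically: a Langevin evolution of the long-wavelength field should relax to the equilibrium distribution $P \propto \exp(-8\pi^2 V/3H^4)$. A sketch for the quartic case (parameter values are assumptions chosen for illustration, $H=1$ units):

```python
# Monte-Carlo check of the classical stochastic (Langevin) picture for a
# light field with quartic potential V = lam * phi^4 / 4 in de Sitter:
#   dphi/dN = -V'(phi)/(3 H^2) + (H / 2 pi) xi(N).
# The Langevin ensemble should relax to the Fokker-Planck equilibrium
# P ~ exp(-8 pi^2 V / 3 H^4).  Parameter values are illustrative.
import numpy as np
from math import gamma, pi, sqrt

H, lam = 1.0, 0.01
rng = np.random.default_rng(0)

n_walkers, dN, n_steps = 10000, 0.25, 6000   # 1500 e-folds total
phi = np.zeros(n_walkers)
for _ in range(n_steps):
    drift = -lam * phi**3 / (3 * H**2)       # slow-roll drift term
    noise = (H / (2 * pi)) * np.sqrt(dN) * rng.standard_normal(n_walkers)
    phi += drift * dN + noise

var_num = np.mean(phi**2)
# analytic <phi^2> from the equilibrium distribution:
var_eq = sqrt(3 / (2 * pi**2 * lam)) * H**2 * gamma(0.75) / gamma(0.25)
print(f"<phi^2>: Langevin {var_num:.3f} vs Fokker-Planck {var_eq:.3f}")
```

This only exercises the leading-order classical limit; the paper's point is that the quantum density-matrix equation reduces to (and systematically corrects) exactly this Fokker-Planck description.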
We study different phenomenological signatures associated with new spin-2 particles. These new degrees of freedom, that we call hidden gravitons, arise in different high-energy theories such as extra-dimensional models or extensions of General Relativity. At low energies, hidden gravitons can be generally described by the Fierz-Pauli Lagrangian. Their phenomenology is parameterized by two dimensionful constants: their mass and their coupling strength. In this work, we analyze two different sets of constraints. On the one hand, we study potential deviations from the inverse-square law on solar-system and laboratory scales. On the other hand, to extend the constraints to scales where the laboratory probes are not competitive, we study the consequences for astrophysical objects. We analyze in detail the processes that may take place in stellar interiors and lead to emission of hidden gravitons, acting as an additional source of energy loss.
We study the dependence of the galaxy content of dark matter halos on large-scale environment and halo formation time using semi-analytic galaxy models applied to the Millennium simulation. We analyze subsamples of halos at the extremes of these distributions and measure the occupation functions for the galaxies they host. We find distinct differences in these occupation functions. The main effect with environment is that central galaxies (and in one model also the satellites) in denser regions start populating lower-mass halos. A similar, but significantly stronger, trend exists with halo age, where early-forming halos are more likely to host central galaxies at lower halo mass. We discuss the origin of these trends and the connection to the stellar mass -- halo mass relation. We find that, at fixed halo mass, older halos and to some extent also halos in dense environments tend to host more massive galaxies. Additionally, we see a reverse trend for the satellite galaxy occupation, where early-forming halos have fewer satellites, likely because more time has been available for them to merge with the central galaxy. We describe these occupancy variations also in terms of the changes in the occupation function parameters, which can aid in constructing realistic mock galaxy catalogs. Finally, we study the corresponding galaxy auto- and cross-correlation functions of the different samples and elucidate the impact of assembly bias on galaxy clustering. Our results can inform theoretical models of assembly bias and attempts to detect it in the real universe.
We perform a systematic search for long-term extreme variability quasars (EVQs) in the overlapping Sloan Digital Sky Survey (SDSS) and 3-Year Dark Energy Survey (DES) imaging, which provide light curves spanning more than 15 years. We identified ~1000 EVQs with a maximum g band magnitude change of more than 1 mag over this period, about 10% of all quasars searched. The EVQs have L_bol~10^45-10^47 erg/s and L/L_Edd~0.01-1. Accounting for selection effects, we estimate an intrinsic EVQ fraction of ~30-50% among all g<~22 quasars over a baseline of ~15 years. These EVQs are good candidates for so-called "changing-look quasars", where a spectral transition between the two types of quasars (broad-line and narrow-line) is observed between the dim and bright states. We performed detailed multi-wavelength, spectral and variability analyses for the EVQs and compared them to their parent quasar sample. We found that EVQs are distinct from a control sample of quasars matched in redshift and optical luminosity: (1) their UV broad emission lines have larger equivalent widths; (2) their Eddington ratios are systematically lower; and (3) they are more variable on all timescales. The intrinsic difference in quasar properties for EVQs suggests that internal processes associated with accretion are the main driver for the observed extreme long-term variability. However, despite their different properties, EVQs seem to be in the tail of a continuous distribution of quasar properties, rather than standing out as a distinct population. We speculate that EVQs are normal quasars accreting at relatively low accretion rates, where the accretion flow is more likely to experience instabilities that drive the factor-of-a-few changes in flux on multi-year timescales.
Accurate astronomical distance determination is crucial for all fields in astrophysics, from Galactic to cosmological scales. Despite, or perhaps because of, significant efforts to determine accurate distances, using a wide range of methods, tracers, and techniques, an internally consistent astronomical distance framework has not yet been established. We review current efforts to homogenize the Local Group's distance framework, with particular emphasis on the potential of RR Lyrae stars as distance indicators, and attempt to extend this in an internally consistent manner to cosmological distances. Calibration based on Type Ia supernovae and distance determinations based on gravitational lensing represent particularly promising approaches. We provide a positive outlook to improvements to the status quo expected from future surveys, missions, and facilities. Astronomical distance determination has clearly reached maturity and near-consistency.
The Hawking-Penrose singularity theorem states that a singularity forms inside a black hole in general relativity. To remove this singularity one must resort to a more fundamental theory. Using the corrected dynamical equation of loop quantum cosmology and braneworld models, we study the gravitational collapse of a perfect fluid sphere with a rather general equation of state. In the frame of an observer comoving with this fluid, the sphere pulsates between a maximum and a minimum size, avoiding the singularity. The exterior geometry is also constructed. There are usually an outer and an inner apparent horizon, resembling the Reissner-Nordstr\"om situation. For a distant observer the horizon crossing occurs in an infinite time, and the pulsations of the black hole quantum "beating heart" are completely unobservable. However, they may be observable if the black hole is not spherically symmetric and radiates gravitational waves due to a quadrupole moment, if any.
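The pulsating, singularity-free behaviour can be illustrated with the LQC-corrected Friedmann equation for a closed dust interior, $\dot a^2 = (8\pi G/3)\rho a^2(1-\rho/\rho_c) - k$, where the $\rho/\rho_c$ term halts the collapse at a finite radius. A toy integration (units, parameters and the dust equation of state are illustrative assumptions, not the paper's general setup):

```python
# Toy model of a non-singular "pulsating" collapse: closed dust interior
# with the LQC-corrected Friedmann equation
#   adot^2 = (8 pi G / 3) rho a^2 (1 - rho/rho_c) - k,  rho = rho0 (a0/a)^3.
# All units and parameter values are illustrative (G = k = 1).
import numpy as np

G, k, rho_c = 1.0, 1.0, 100.0
rho0, a0 = 1.0, 2.0                       # assumed initial density and size

def adot2(a):
    rho = rho0 * (a0 / a)**3
    return (8 * np.pi * G / 3) * rho * a**2 * (1 - rho / rho_c) - k

def accel(a, eps=1e-6):
    # a'' = (1/2) d(adot^2)/da, evaluated by central difference
    return 0.5 * (adot2(a + eps) - adot2(a - eps)) / (2 * eps)

a, v, dt = a0, -np.sqrt(adot2(a0)), 1e-4  # start in the collapsing branch
a_min = a0
for _ in range(30000):                    # velocity-Verlet integration
    v += 0.5 * dt * accel(a)
    a += dt * v
    v += 0.5 * dt * accel(a)
    a_min = min(a_min, a)

print(f"bounce radius a_min = {a_min:.3f} > 0, final a = {a:.2f}")
```

The sphere contracts, bounces at a finite minimum radius where $\rho$ approaches $\rho_c$, and re-expands; with the curvature term it would turn around again at a maximum size, giving the pulsation described in the abstract.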
The James Clerk Maxwell Telescope (JCMT) has been the world's most successful single dish telescope at submillimetre wavelengths since it began operations in 1987. From the pioneering days of single-element photometers and mixers, through the first modest imaging arrays, leading to the state-of-the-art widefield camera SCUBA-2 and the spectrometer array HARP, the JCMT has been associated with a number of major scientific discoveries. From the famous discovery of "SCUBA" galaxies, which are responsible for a large fraction of the far-infrared background, to the first images of huge discs of cool debris around nearby stars, possibly giving us clues to the evolution of planetary systems, the JCMT has pushed the sensitivity limits more than any other facility in this most difficult of wavebands in which to observe. Now, approaching the 30th anniversary of the first observations, the telescope continues to carry out unique and innovative science. In this review article we look back on just some of the scientific highlights from the past 30 years.
Following the success of type Ia supernovae in constraining cosmologies at lower redshift $(z\lesssim2)$, effort has been spent determining if a similarly useful standardisable candle can be found at higher redshift. In this work we determine the largest possible magnitude discrepancy between a constant dark energy $\Lambda$CDM cosmology and a cosmology in which the equation of state $w(z)$ of dark energy is a function of redshift for high redshift standard candles $(z\gtrsim2)$. We discuss a number of popular parametrisations of $w(z)$ with two free parameters, $w_z$CDM cosmologies, including the Chevallier-Polarski-Linder (CPL) parametrisation and a generalisation thereof, $n$CPL, as well as the Jassal-Bagla-Padmanabhan parametrisation. For each of these parametrisations we calculate and find extrema of $\Delta \mu$, the difference between the distance modulus of a $w_z$CDM cosmology and a fiducial $\Lambda$CDM cosmology as a function of redshift, given 68\% likelihood constraints on the parameters $P=(\Omega_{m,0}, w_0, w_a)$. The parameters are constrained using cosmic microwave background, baryon acoustic oscillations, and type Ia supernovae data using CosmoMC. We find that none of the tested cosmologies can deviate more than 0.05 mag from the fiducial $\Lambda$CDM cosmology at high redshift, implying that high redshift standard candles will not aid in discerning between a $w_z$CDM cosmology and the fiducial $\Lambda$CDM cosmology. Conversely, this implies that if high redshift standard candles are found to be in disagreement with $\Lambda$CDM at high redshift, then this is a problem not only for $\Lambda$CDM but for the entire family of $w_z$CDM cosmologies.
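The quantity $\Delta\mu$ can be sketched directly: for a flat cosmology with CPL dark energy, the distance modulus follows from the comoving-distance integral over $1/H(z)$. The parameter values below are illustrative assumptions, not the paper's fitted 68% constraints:

```python
# Illustrative computation of Delta mu(z): the distance-modulus
# difference between a flat CPL w(z)CDM cosmology and a fiducial flat
# LambdaCDM model.  H0, Om and the CPL parameters are assumed values.
import numpy as np
from scipy.integrate import quad

H0, Om = 70.0, 0.3              # km/s/Mpc; fiducial values (assumed)
c = 299792.458                  # speed of light, km/s

def E(z, w0=-1.0, wa=0.0):
    # H(z)/H0 for flat wCDM with CPL w(z) = w0 + wa * z / (1 + z)
    de = (1 + z)**(3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(Om * (1 + z)**3 + (1 - Om) * de)

def mu(z, **w):
    # distance modulus from the comoving-distance integral (flat universe)
    dc = quad(lambda zp: c / (H0 * E(zp, **w)), 0, z)[0]   # Mpc
    return 5 * np.log10((1 + z) * dc) + 25

z = 2.5
dmu = mu(z, w0=-0.9, wa=0.2) - mu(z)    # CPL minus fiducial LambdaCDM
print(f"Delta mu at z={z}: {dmu:.3f} mag")
```

Scanning such $\Delta\mu(z)$ curves over the allowed $(\Omega_{m,0}, w_0, w_a)$ region is what bounds the maximum high-redshift deviation quoted in the abstract.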
We measure the effect of high column density absorbing systems of neutral hydrogen (HI) on the one-dimensional (1D) Lyman-alpha forest flux power spectrum using cosmological hydrodynamical simulations from the Illustris project. High column density absorbers (which we define to be those with HI column densities $N(\mathrm{HI}) > 1.6 \times 10^{17}\,\mathrm{atoms}\,\mathrm{cm}^{-2}$) cause broadened absorption lines with characteristic damping wings. These damping wings bias the 1D Lyman-alpha forest flux power spectrum by causing absorption in quasar spectra away from the location of the absorber itself. We investigate the effect of high column density absorbers on the Lyman-alpha forest using hydrodynamical simulations for the first time. We provide templates as a function of column density and redshift, allowing the flexibility to accurately model residual contamination, i.e., if an analysis selectively clips out the largest damping wings. This flexibility will improve cosmological parameter estimation, e.g., allowing more accurate measurement of the shape of the power spectrum, with implications for cosmological models containing massive neutrinos or a running of the spectral index. We provide fitting functions to reproduce these results so that they can be incorporated straightforwardly into a data analysis pipeline.
We investigate warm dark matter (WDM) features in a model-independent approach through the very simple approximation of the Reduced Relativistic Gas (RRG). Our only generic assumption is a non-negligible velocity $v$ for the dark matter particles, parameterized by a free parameter $b$. We show that high WDM velocities would erase the radiation-dominated epoch, causing an early warm-matter domination after inflation, unless $b^2\lesssim 10^{-6}$ (or $v\lesssim 300\,{\rm km/s}$). We also show that the RRG approach allows one to quantify the lack of power in the linear matter spectrum at small scales and, in particular, reproduces the relative transfer function commonly used in the context of WDM with an accuracy of $\lesssim 1\%$. At this level of accuracy, the CMB power spectrum is not significantly altered, and agreement with background observational tests is maintained. This suggests that the RRG approximation can be used as a complementary approach to investigate the consequences of the warmness of dark matter, and especially for deriving its main observational signatures in a model-independent way, in both the linear and non-linear regimes.
Shot noise is an important ingredient to any measurement or theoretical modeling of discrete tracers of the large scale structure. Recent work has shown that the shot noise in the halo power spectrum becomes increasingly sub-Poissonian at high mass. Interestingly, while the halo model predicts a shot noise power spectrum in qualitative agreement with the data, it leads to an unphysical white noise in the cross halo-matter and matter power spectrum. In this work, we show that absorbing all the halo model sources of shot noise into the halo fluctuation field leads to meaningful predictions for the shot noise contributions to halo clustering statistics and removes the unphysical white noise from the cross halo-matter statistics. Our prescription straightforwardly maps onto the general bias expansion, so that the renormalized shot noise terms can be expressed as combinations of the halo model shot noises. Furthermore, we demonstrate that non-Poissonian contributions are related to volume integrals over correlation functions and their response to long-wavelength density perturbations. This leads to a new class of consistency relations for discrete tracers, which appear to be satisfied by our reformulation of the halo model. We test our theoretical predictions against measurements of halo shot noise bispectra extracted from a large suite of numerical simulations. Our model reproduces qualitatively the observed sub-Poissonian noise, although it underestimates the magnitude of this effect.
Neutral hydrogen (HI) will soon be the dark matter tracer observed over the largest volumes of the Universe thanks to the 21 cm intensity mapping technique. To unveil cosmological information it is indispensable to understand the HI distribution with respect to dark matter. Using a full one-loop derivation of the power spectrum of HI, we show that higher order corrections change the amplitude and shape of the power spectrum on typical cosmological (linear) scales. These effects go beyond the expected dark matter non-linear corrections and include non-linearities in the way the HI signal traces dark matter. We show that, on linear scales at z = 1, the HI bias drops by up to 15% in both real and redshift space, which results in underpredicting the mass of the halos in which HI lies. Non-linear corrections give rise to a significant scale dependence when redshift space distortions arise, in particular on the scale range of the baryonic acoustic oscillations (BAO). There is a factor of 5 difference between the linear and full HI power spectra over the full BAO scale range, which will modify the ratios between the peaks. This effect will also be seen in other types of survey, and it will be essential to take it into account in future experiments in order to match the expectations of precision cosmology.
We investigate the effects of intrinsic alignments (IA) of dark-matter halo shapes on cosmic density and velocity fields from cluster to cosmic scales beyond 100 Mpc/h. Besides the density correlation function binned by the halo orientation angle, which has been used in the literature, we introduce, for the first time, the corresponding two velocity statistics: the angle-binned pairwise infall momentum and the momentum correlation function. Using large-volume, high-resolution N-body simulations, we measure the alignment statistics of density and velocity, both in real and redshift space. We find that the alignment signal is not amplified by redshift-space distortions at linear scales. Behaviors of IA in the velocity statistics are similar to those in the density statistics, except that the halo orientations are aligned with the velocity field up to a scale larger than those with the density field, x>100 Mpc/h. On halo scales, x~ R_{200m} ~ 1 Mpc/h, we detect a sharp steepening in the momentum correlation associated with the physical halo boundary, or the splashback feature, which is found to be more prominent than in the density correlation. Our results indicate that observations of IA with the velocity field can provide additional information on cosmological models from large scales and on physical sizes of halos from small scales.
We have proposed a method for measuring weak lensing using the Lyman-alpha forest. Here we estimate the noise expected in weak lensing maps and power spectra for different sets of observational parameters. We find that surveys of the size and quality of the ones being done today and ones planned for the future will be able to measure the lensing power spectrum at a source redshift of z~2.5 with high precision and even be able to image the distribution of foreground matter with high fidelity on degree scales. For example, we predict that Lyman-alpha forest lensing measurement from the Dark Energy Spectroscopic Instrument survey should yield the mass fluctuation amplitude with statistical errors of 1.5%. By dividing the redshift range into multiple bins some tomographic lensing information should be accessible as well. This would allow for cosmological lensing measurements at higher redshift than are accessible with galaxy shear surveys and correspondingly better constraints on the evolution of dark energy at relatively early times.
At a redshift of $z=0.03$, the recently discovered SN2017egm is the nearest Type I superluminous supernova (SLSN) to date. It is the first to be found in a massive spiral galaxy (NGC 3191). Using SDSS spectra of NGC 3191, we find a metallicity ~2 Zsun at the nucleus and ~1.3 Zsun for a star forming region at a radial offset similar to SN2017egm. Archival photometry from radio to UV reveals a star formation rate ~15 Msun/yr (with ~70% obscured by dust), which can account for a Swift X-ray detection, and a stellar mass ~$10^{10.7}$ Msun. We model the UV and optical light curves over the first month after explosion with a magnetar central engine model, using the Bayesian light curve fitting tool MOSFiT. The fits indicate an ejecta mass of 2-6 Msun, a spin period of 4-6 ms, a magnetic field of $(0.7-1.7)\times 10^{14}$ G, and a kinetic energy of $2\times 10^{51}$ erg. These parameters are consistent with the overall distributions for SLSNe, modeled by Nicholl et al. (2017), although we note that the derived mass and rotation rate are at the low end of the distribution, possibly indicating enhanced loss of mass and angular momentum prior to explosion. This leads to two critical implications: (i) Type I SLSNe can occur at solar metallicity, although with a low fraction of ~10%; and (ii) metallicity has at most a modest effect on the SLSN properties. Both of these conclusions are in line with results for long gamma-ray bursts. Our modeling suggests an explosion date of MJD $57890\pm4$. A short-lived excess in the data relative to the best-fitting models may indicate an early-time 'bump' similar to those seen in other SLSNe. If confirmed, SN2017egm would be the first SLSN with an observed spectrum during the bump phase; this early spectrum shows the same characteristic oxygen lines seen at maximum light, which may be an important clue in understanding the underlying mechanism for the bumps.
We point out that a simple inflationary model in which the axionic inflaton couples to a pure SU(N) Yang-Mills theory may give the scalar spectral index (n_s) and tensor-to-scalar ratio (r) in complete agreement with the current observational data.
Maybe not. String theory approaches to both beyond the Standard Model and Inflationary model building generically predict the existence of scalars (moduli) that are light compared to the scale of quantum gravity. These moduli become displaced from their low energy minima in the early universe and lead to a prolonged matter-dominated epoch prior to BBN. In this paper, we examine whether non-perturbative effects such as parametric resonance or tachyonic instabilities can shorten, or even eliminate, the moduli condensate and matter-dominated epoch. Such effects depend crucially on the strength of the couplings, and we find that unless the moduli become strongly coupled the matter-dominated epoch is unavoidable. In particular, we find that in string and M-theory compactifications where the lightest moduli are near the TeV-scale that a matter-dominated epoch will persist until the time of Big Bang Nucleosynthesis.
We study the dynamics of the Affleck-Dine field after inflation in detail. After inflation, the Affleck-Dine field inevitably oscillates around the potential minimum. This oscillation decays slowly and can cause an accidental suppression of the resulting baryon asymmetry. This suppression is most effective for the model with non-renormalizable superpotential $W_{AD}\sim \Phi^4$ ($\Phi$: Affleck-Dine field). We find that Affleck-Dine leptogenesis in high-scale inflation, which otherwise suffers from serious gravitino overproduction, becomes workable owing to this effect.
The existence of dark matter is undisputed, while its nature is still unknown. Explaining dark matter by the existence of a new, as-yet unobserved particle is among the most promising possible solutions. Dark matter candidates in the MeV mass region have recently received increasing interest. In comparison to the mass region between a few GeV and several TeV, this region is experimentally largely unexplored. We discuss the application of an RNDR DEPFET semiconductor detector to direct searches for dark matter in the MeV mass region. We present the working principle of RNDR DEPFET devices and review the performance obtained in previous prototype measurements. We discuss the future potential of the technology as a dark matter detector and present the sensitivity of RNDR DEPFET sensors for MeV dark matter detection. Under the assumption of three background events in the region of interest and an exposure of one kg$\cdot$y, a sensitivity of $\bar{\sigma}_{\bar{e}} = 10^{-41}$ cm$^{2}$ can be reached for dark matter particles with a mass of 10 MeV.
We study the stability of the electroweak vacuum in low-scale inflation models whose Hubble parameter is much smaller than the instability scale of the Higgs potential. In general, couplings between the inflaton and Higgs are present, and hence we study effects of these couplings during and after inflation. We derive constraints on the couplings between the inflaton and Higgs by requiring that they do not lead to catastrophic electroweak vacuum decay, in particular, via resonant production of the Higgs particles.
This work presents a joint and self-consistent Bayesian treatment of various foreground and target contaminations when inferring cosmological power-spectra and three dimensional density fields from galaxy redshift surveys. This is achieved by introducing additional block sampling procedures for unknown coefficients of foreground and target contamination templates to the previously presented ARES framework for Bayesian large scale structure analyses. As a result the method infers jointly and fully self-consistently three dimensional density fields, cosmological power-spectra, luminosity dependent galaxy biases, noise levels of respective galaxy distributions and coefficients for a set of a priori specified foreground templates. In addition this fully Bayesian approach permits detailed quantification of correlated uncertainties amongst all inferred quantities and correctly marginalizes over observational systematic effects. We demonstrate the validity and efficiency of our approach in obtaining unbiased estimates of power-spectra via applications to realistic mock galaxy observations subject to stellar contamination and dust extinction. While simultaneously accounting for galaxy biases and unknown noise levels our method reliably and robustly infers three dimensional density fields and corresponding cosmological power-spectra from deep galaxy surveys. Further, our approach correctly accounts for joint and correlated uncertainties between unknown coefficients of foreground templates and the amplitudes of the power-spectrum, an effect amounting to up to $10$ percent correlations and anti-correlations across large ranges in Fourier space.
Recent detections of the cross-correlation of the thermal Sunyaev-Zel'dovich (tSZ) effect and weak gravitational lensing (WL) enable unique studies of cluster astrophysics and cosmology. In this work, we present constraints on the amplitude of the non-thermal pressure fraction in galaxy clusters, $\alpha_0$, and the amplitude of the matter power spectrum, $\sigma_8$, using measurements of the tSZ power spectrum from Planck, and the tSZ-WL cross-correlation from Planck and the Red Cluster Sequence Lensing Survey. We fit the data to a semi-analytic model, with the covariance matrix estimated from $N$-body simulations. We find that the tSZ power spectrum alone prefers $\sigma_8 \sim 0.8$ and a large fraction of non-thermal pressure ($\alpha_0 \sim 0.2-0.3$). The tSZ-WL cross-correlation on the other hand prefers a significantly lower $\sigma_8 \sim 0.6$, and low $\alpha_0 \sim 0.05$. We show that this tension can be mitigated by allowing for a steep slope in the stellar-mass-halo-mass relation, which would cause a reduction of the gas in low-mass halos. In such a model, the combined data prefer $\sigma_8 \sim 0.7$ and $\alpha_0 \sim 0.15$, consistent with predictions from hydrodynamical simulations.
Scalar condensates with large expectation values can form in the early universe, for example, in theories with supersymmetry. The condensate can undergo fragmentation into Q-balls before decaying. If the Q-balls dominate the energy density for some period of time, statistical fluctuations in their number density can lead to formation of primordial black holes (PBH). In the case of supersymmetry the mass range is limited from above by $10^{23}$g. For a general charged scalar field, this robust mechanism can generate black holes over a much broader mass range, including the black holes with masses of 1-100 solar masses, which is relevant for LIGO observations of gravitational waves. Topological defects can lead to formation of PBH in a similar fashion.
The "Tapered Gridded Estimator" (TGE) is a novel way to directly estimate the angular power spectrum from radio-interferometric visibility data that reduces the computation by efficiently gridding the data, consistently removes the noise bias, and suppresses the foreground contamination to a large extent by tapering the primary beam response through an appropriate convolution in the visibility domain. Here we demonstrate the effectiveness of TGE in recovering the diffuse emission power spectrum through numerical simulations. We present details of the simulation used to generate low frequency visibility data for a sky model with extragalactic compact radio sources and diffuse Galactic synchrotron emission. We then use different imaging strategies to identify the most effective option of point source subtraction and to study the underlying diffuse emission. Finally, we apply TGE to the residual data to measure the angular power spectrum, and assess the impact of incomplete point source subtraction in recovering the input power spectrum $C_{\ell}$ of the synchrotron emission. The estimator successfully recovers the $C_{\ell}$ of the input model from the residual visibility data. These results are relevant for measuring diffuse emission such as the Galactic synchrotron emission. It is also an important step towards characterizing and removing both diffuse and compact foreground emission in order to detect the redshifted $21\, {\rm cm}$ signal from the Epoch of Reionization.
We perform a measurement of the Hubble constant, $H_0$, using the latest baryonic acoustic oscillations (BAO) measurements from the galaxy surveys of 6dFGS, the SDSS DR7 Main Galaxy Sample, the BOSS DR12 sample, and the eBOSS DR14 quasar sample, in the framework of a flat $\Lambda$CDM model. Based on the Kullback-Leibler (KL) divergence, we examine the consistency of $H_0$ values derived from various data sets. We find that our measurement is consistent with that derived from Planck and with the local measurement of $H_0$ using Cepheids and type Ia supernovae. We perform forecasts on $H_0$ from future BAO measurements, and find that the uncertainty of $H_0$ determined by future BAO data alone, including the complete eBOSS, DESI, and Euclid-like surveys, is comparable with that from local measurements.
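The KL-divergence consistency check mentioned above has a simple closed form when the posteriors are approximated as Gaussian. Below is a minimal sketch of that comparison; the numerical values are illustrative placeholders, not the posteriors actually used in the analysis.

```python
import numpy as np

def kl_gaussian(mu1, sig1, mu2, sig2):
    # KL divergence D(P||Q) in nats between two 1-D Gaussians
    # P = N(mu1, sig1^2) and Q = N(mu2, sig2^2); a common way to
    # quantify the (in)consistency of two parameter posteriors.
    return (np.log(sig2 / sig1)
            + (sig1**2 + (mu1 - mu2)**2) / (2.0 * sig2**2)
            - 0.5)

# Illustrative only: a BAO-style H0 posterior vs. a local-distance-ladder
# style posterior (km/s/Mpc); larger KL means stronger tension.
print(kl_gaussian(68.6, 1.0, 73.2, 1.7))
```

Identical posteriors give a divergence of exactly zero, while well-separated means drive the divergence up through the $(\mu_1-\mu_2)^2$ term.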
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of the galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree precisely with the input. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
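The Poisson-sampling step of a log-normal mock can be sketched in a few lines. This is a minimal illustration of the idea (not the public code itself): grid setup, the input power spectrum, and the velocity field are omitted, and `sigma_g` and `nbar` are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_counts(nbar, cell_volume, shape, sigma_g):
    # Draw a Gaussian field, map it to a log-normal density contrast
    # with zero mean, then Poisson-sample galaxy counts per grid cell.
    g = rng.normal(0.0, sigma_g, size=shape)       # Gaussian field
    delta = np.exp(g - 0.5 * sigma_g**2) - 1.0     # log-normal, <delta> = 0
    lam = np.clip(nbar * cell_volume * (1.0 + delta), 0.0, None)
    return rng.poisson(lam)                        # integer counts, never negative

counts = lognormal_counts(nbar=1e-3, cell_volume=1e3,
                          shape=(32, 32, 32), sigma_g=0.5)
print(counts.mean())  # close to nbar * cell_volume = 1.0
```

The log-normal mapping guarantees $1+\delta > 0$ everywhere, which is what makes the field a valid Poisson intensity; a Gaussian $\delta$ would have to be clipped instead.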
A persistent theme in the study of dark energy is the question of whether it really exists or not. It is often claimed that we are mis-calculating the cosmological model by neglecting the effects associated with averaging over large-scale structures. In the Newtonian approximation this is clear: there is no effect. Within the full relativistic picture this remains an important open question, however, owing to the complex mathematics involved. We study this issue using particle numerical simulations which account for all relevant relativistic effects without any problems from shell crossing. In this context we show for the first time that the backreaction from structure can differ by many orders of magnitude depending upon the slicing of spacetime one chooses to average over. In the worst case, where smoothing is carried out in synchronous spatial surfaces, the corrections can reach ten percent or more. However, when smoothing on the constant time hypersurface of the Newtonian gauge, backreaction contributions remain 4-5 orders of magnitude smaller.
We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc$~h^{-1}$) and galaxy-galaxy lensing (12 Mpc$~h^{-1}$) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.
A novel idea is proposed for a natural solution of the dark energy and its cosmic coincidence problem. The existence of local antigravity sources, associated with astrophysical matter configurations distributed throughout the universe, can lead to a recent cosmic acceleration effect. Various physical theories can be compatible with this idea, but here, in order to test our proposal, we focus on quantum originated spherically symmetric metrics matched with the cosmological evolution through a Swiss cheese analysis. In the context of asymptotically safe gravity, we explain the observed amount of dark energy using Newton's constant, the galaxy or cluster length scales, and dimensionless order one parameters predicted by the theory, without fine-tuning or extra unproven energy scales. The interior modified Schwarzschild-de Sitter metric allows us to interpret this result, approximately, as the standard cosmological constant being a composite quantity made of the above parameters, instead of a fundamental one.
We propose a study of light-speed isotropy and Lorentz invariance at Jefferson Laboratory by means of measurements of the Compton edge using the existing Hall A/C experimental setup. A methodologically identical experiment has already been carried out successfully by the GRAAL experiment at the European Synchrotron Radiation Facility in Grenoble with a 6 GeV electron beam. This proposal states two goals expected to be reached at Jefferson Laboratory, both concerning Lorentz invariance: (a) the accuracy of the one-way light-speed isotropy test, following from conservative evaluations in numerical simulations, would be about an order of magnitude better than GRAAL's; (b) the dependence of the light speed on the velocity of the apparatus (a Kennedy-Thorndike measurement) will be traced to an accuracy about 3 orders of magnitude better than the available limits.
The standard theory of weak gravitational lensing relies on the infinitesimal light beam approximation. In this context, images are distorted by convergence and shear, the respective sources of which unphysically depend on the resolution of the distribution of matter---the so-called Ricci-Weyl problem. In this letter, we propose a strong-lensing-inspired formalism to describe the lensing of finite beams. We address the Ricci-Weyl problem by showing explicitly that convergence is caused by the matter enclosed by the beam, regardless of its distribution. Furthermore, shear turns out to be systematically enhanced by the finiteness of the beam. This implies, in particular, that the Kaiser-Squires relation between shear and convergence is violated, which could have profound consequences on the interpretation of weak lensing surveys.
We explore the relationship between features in the Planck 2015 temperature and polarization data, shifts in the cosmological parameters, and features from inflation. Residuals in the temperature data at low multipole $\ell$, which are responsible for the high $H_0\approx 70$ km s$^{-1}$Mpc$^{-1}$ and low $\sigma_8\Omega_m^{1/2}$ values from $\ell<1000$ in power-law $\Lambda$CDM models, are better fit to inflationary features with a $1.9\sigma$ preference for running of the running of the tilt or a stronger $99\%$ CL local significance preference for a sharp drop in power around $k=0.004$ Mpc$^{-1}$ in generalized slow roll and a lower $H_0\approx 67$ km s$^{-1}$Mpc$^{-1}$. The same in-phase acoustic residuals at $\ell>1000$ that drive the global $H_0$ constraints and appear as a lensing anomaly also favor running parameters which allow even lower $H_0$, but not once lensing reconstruction is considered. Polarization spectra are intrinsically highly sensitive to these parameter shifts, and even more so in the Planck 2015 TE data due to an outlier at $\ell \approx 165$, which disfavors the best-fit $\Lambda$CDM solution by more than $2\sigma$, and the high $H_0$ value at almost $3\sigma$. Current polarization data also slightly enhance the significance of a sharp suppression of large-scale power but leave room for large improvements in the future with cosmic variance limited $E$-mode measurements.
The light-cone (LC) effect causes the Epoch of Reionization (EoR) 21-cm signal $T_{\rm b} (\hat{\bf{n}}, \nu)$ to evolve significantly along the line of sight (LoS) direction $\nu$. In the first part of this paper, we present a method to properly incorporate the LC effect in simulations of the EoR 21-cm signal that include peculiar velocities. Subsequently, we discuss how to quantify the second order statistics of the EoR 21-cm signal in the presence of the LC effect. We demonstrate that the 3D power spectrum $P({\bf{k}})$ fails to quantify the entire information because it assumes the signal to be ergodic and periodic, whereas the LC effect breaks these conditions along the LoS. Considering a LC simulation centered at redshift $8$ where the mean neutral fraction drops from $0.65$ to $0.35$ across the box, we find that $P({\bf{k}})$ misses out $\sim 40 \%$ of the information at the two ends of the $17.41 \, {\rm MHz}$ simulation bandwidth. The multi-frequency angular power spectrum (MAPS) ${\mathcal C}_{\ell}(\nu_1,\nu_2)$ quantifies the statistical properties of $T_{\rm b} (\hat{\bf{n}}, \nu)$ without assuming the signal to be ergodic and periodic along the LoS. We expect this to quantify the entire statistical information of the EoR 21-cm signal. We apply MAPS to our LC simulation and present preliminary results for the EoR 21-cm signal.
The number density of field galaxies per rotation velocity, referred to as the velocity function, is an intriguing statistical measure probing the smallest scales of structure formation. In this paper we point out that the velocity function is sensitive to small shifts in key cosmological parameters such as the amplitude of primordial perturbations ($\sigma_8$) or the total matter density ($\Omega_{\rm m}$). Using current data and applying conservative assumptions about baryonic effects, we show that the observed velocity function of the Local Volume favours cosmologies in tension with the measurements from Planck but in agreement with the latest findings from weak lensing surveys. While the current systematics regarding the relation between observed and true rotation velocities are potentially important, upcoming data from HI surveys as well as new insights from hydrodynamical simulations will dramatically improve the situation in the near future.
We study the imprint of non-standard dark energy (DE) and dark matter (DM) models on the 21cm intensity map power spectra from high-redshift neutral hydrogen (HI) gas. To this purpose we use halo catalogs from N-body simulations of dynamical DE models and DM scenarios which are statistically indistinguishable from the standard Cold Dark Matter model with Cosmological Constant (LCDM) using currently available cosmological observations. We limit our analysis to halo catalogs at redshift z = 1 and 2.3 which are common to all simulations. For each catalog we model the HI distribution by using a simple prescription to associate the HI gas mass to N-body halos. We find that the DE models leave a distinct signature on the HI spectra across a wide range of scales, which correlates with differences in the halo mass function and the onset of the non-linear regime of clustering. In the case of the non-standard DM model significant differences of the HI spectra with respect to the LCDM model only arise from the suppressed abundance of low mass halos. These cosmological model dependent features also appear in the 21cm spectra. In particular, we find that future SKA measurements can distinguish the imprints of DE and DM models at high statistical significance.
Future Cosmic Microwave Background (CMB) satellite missions aim to use the $B$ mode polarization to measure the tensor-to-scalar ratio $r$ with a sensitivity of about $10^{-3}$. Achieving this goal will not only require sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent to CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakages from intensity to polarization and thus lead to a spurious $B$ mode signal. Because the expected primordial $B$ mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakages could contribute substantially to the final error budget for measuring $r.$ Using simulations we estimate the magnitude and angular spectrum of the spurious $B$ mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to $r$ from the reionization bump on large angular scales ($\ell < 10$) is $\approx 10^{-3}$, assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the angular coverage per pixel that results from the scan pattern.
In this paper we present and characterize a nearest-neighbors color-matching photometric redshift estimator that features a direct relationship between the precision and accuracy of the input magnitudes and the output photometric redshifts. This aspect makes our estimator an ideal tool for evaluating the impact of changes to LSST survey parameters that affect the measurement errors of the photometry, which is the main motivation of our work (i.e., it is not intended to provide the "best" photometric redshifts for LSST data). We show how the photometric redshifts will improve with time over the 10-year LSST survey and confirm that the nominal distribution of visits per filter provides the most accurate photo-$z$ results. We also demonstrate how deep LSST imaging of a spectroscopic galaxy sample can significantly improve photo-$z$ quality, especially in the survey's early years. The LSST survey strategy naturally produces observations over a range of airmass, which offers the opportunity to use an SED- and $z$-dependent atmospheric effect on the observed photometry as a color-independent redshift indicator. We show that measuring this airmass effect and including it as a prior has the potential to improve the photometric redshifts and can ameliorate extreme outliers, but also find that it will only be adequately measured for the brightest galaxies, which limits its overall impact on LSST photometric redshifts. Ultimately, we intend for this work to serve as a guide for the expectations and preparations of the LSST science community with regard to the minimum quality of photo-$z$ as the survey progresses.
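The nearest-neighbors color-matching idea described in this abstract can be sketched in a few lines: for each query galaxy, find the training galaxies closest in error-normalized color space and average their redshifts. This is a minimal illustrative sketch with a synthetic single-color training set, not the authors' actual estimator; all data, the error weighting, and the choice of k are assumptions.

```python
import numpy as np

def photoz_nn(train_colors, train_z, query_colors, color_errs, k=10):
    """Toy nearest-neighbors color-matching photo-z estimate: for each
    query galaxy, average the redshifts of the k training galaxies
    closest in photometric-error-normalized color space."""
    zs = np.empty(len(query_colors))
    for i, (c, e) in enumerate(zip(query_colors, color_errs)):
        # chi-square-like distance in color space, weighted by color errors
        d2 = np.sum(((train_colors - c) / e) ** 2, axis=1)
        nn = np.argsort(d2)[:k]
        zs[i] = train_z[nn].mean()
    return zs

# Toy demonstration: redshift correlates with one synthetic "color".
rng = np.random.default_rng(0)
z_train = rng.uniform(0.0, 2.0, 5000)
colors_train = np.column_stack([z_train + rng.normal(0.0, 0.05, 5000)])
z_true = np.array([0.5, 1.5])
colors_query = np.column_stack([z_true])
errs = np.full_like(colors_query, 0.05)
z_phot = photoz_nn(colors_train, z_train, colors_query, errs)
```

Because the distance is normalized by the photometric errors, degrading the input photometry directly degrades the output photo-$z$ scatter, which mirrors the property the abstract highlights.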
The current >3 sigma tension between the Hubble constant H0 measured from local distance indicators and from the cosmic microwave background is one of the most highly debated issues in cosmology, as it possibly indicates new physics or unknown systematics. In this work, we explore whether this tension can be alleviated by the sample variance in the local measurements, which use a small fraction of the Hubble volume. We use a large-volume cosmological N-body simulation to model the local measurements and to quantify the variance due to local density fluctuations and sample selection. We explicitly take into account the inhomogeneous spatial distribution of type Ia supernovae. Despite the faithful modeling of the observations, our results confirm previous findings that sample variance in local Hubble constant measurements is small; we find sigma(local H0)=0.31 km/s/Mpc, a nearly negligible fraction of the ~ 6 km/s/Mpc necessary to explain the difference between the local and the global H0 measurements. While the H0 tension could in principle be explained by our local neighborhood being an underdense region of radius ~150 Mpc, the extreme required underdensity of such a void (delta ~ -0.8) makes it very unlikely in a LCDM Universe, and it also violates existing observational constraints. Therefore, sample variance in a LCDM Universe cannot appreciably alleviate the tension in H0 measurements even after taking into account the inhomogeneous selection of type Ia supernovae.
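The connection between local density fluctuations and the locally measured H0 can be illustrated with the linear-theory relation $\delta H/H \simeq -f\,\delta/3$, where $f$ is the growth rate and $\delta$ the density contrast of the survey volume. The sketch below Monte-Carlos this relation; the values of H0, f, and the rms contrast of a ~150 Mpc sphere are illustrative assumptions, not numbers from the paper (which quantifies the variance with an N-body simulation instead).

```python
import numpy as np

H0 = 70.0           # assumed global Hubble constant, km/s/Mpc (illustrative)
f = 0.5             # approximate linear growth rate at z ~ 0 (assumption)
sigma_delta = 0.06  # assumed rms density contrast in a ~150 Mpc sphere

rng = np.random.default_rng(1)
delta = rng.normal(0.0, sigma_delta, 100_000)  # mock local density contrasts
# Linear-theory perturbation of the locally inferred expansion rate:
H0_local = H0 * (1.0 - f * delta / 3.0)
sigma_H0 = H0_local.std()
```

With these assumed numbers the scatter comes out well below 1 km/s/Mpc, of the same order as the 0.31 km/s/Mpc the simulation-based analysis finds, and far short of the ~6 km/s/Mpc needed to resolve the tension.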
We show that standard candles can provide valuable information about the density contrast, which could be particularly important at redshifts where other observations are not available. We use an inversion method to reconstruct the local radial density profile from luminosity distance observations, assuming background cosmological parameters obtained from large scale observations. Using type Ia supernovae (SNe), Cepheids, and the cosmological parameters from the Planck mission, we reconstruct the radial density profiles along two different directions of the sky. We compare these profiles to other density maps obtained from luminosity density, in particular Keenan et al. 2013 and the 2M++ galaxy catalogue. The method independently confirms the existence of inhomogeneities, could be particularly useful to correctly normalize density maps from galaxy surveys with respect to the average density of the Universe, and could clarify the apparent discrepancy between local and large scale estimations of the Hubble constant. When better observational supernova data become available, the accuracy of the reconstructed density profiles will improve, allowing further investigation of the existence of structures whose size is beyond the reach of galaxy surveys.
Motivated by an updated compilation of observational Hubble data (OHD), which consists of 51 points in the redshift range 0.07<z<2.36, we study an interesting model known as Cardassian which drives the late cosmic acceleration without a dark energy component. Our compilation contains 31 data points measured with the differential age method by Jimenez & Loeb, and 20 data points obtained from clustering of galaxies. We focus on two modified Friedmann equations: the original Cardassian (OC) expansion and the modified polytropic Cardassian (MPC). The dimensionless Hubble parameter, E(z), and the deceleration parameter, q(z), are revisited in order to constrain the OC and MPC free parameters, first with the OHD and then in combination with modern observations of SN Ia using the compressed joint-light-curve-analysis sample. Our results show that the OC and MPC models are in agreement with the standard cosmology and naturally introduce a cosmological-constant-like extra term in the canonical Friedmann equation with the capability of accelerating the Universe without dark energy.
Nonzero neutrino masses are required by the existence of flavour oscillations, with values of the order of at least 50 meV. We consider the gravitational clustering of relic neutrinos within the Milky Way, and use the $N$-one-body simulation technique to compute their density enhancement factor in the neighbourhood of the Earth with respect to the average cosmic density. Compared to previous similar studies, we push the simulation down to smaller neutrino masses, and include an improved treatment of the baryonic and dark matter distributions in the Milky Way. Our results are important for future experiments aiming at detecting the cosmic neutrino background, such as the Princeton Tritium Observatory for Light, Early-universe, Massive-neutrino Yield (PTOLEMY) proposal. We calculate the impact of neutrino clustering in the Milky Way on the expected event rate for a PTOLEMY-like experiment. We find that the effect of clustering remains negligible for the minimal normal hierarchy scenario, while it enhances the event rate by 10 to 20% (resp. a factor 1.7 to 2.5) for the minimal inverted hierarchy scenario (resp. a degenerate scenario with 150 meV masses). Finally we compute the impact on the event rate of a possible fourth, sterile neutrino with a mass of 1.3 eV.
We develop a methodology to use the redshift dependence of the galaxy 2-point correlation function (2pCF) across the line-of-sight, $\xi(r_{\bot})$, as a probe of cosmological parameters. The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a {\it redshift-dependent scaling} in the galaxy distribution. This geometrical distortion can be observed as a redshift-dependent rescaling in the measured $\xi(r_{\bot})$. We test this methodology using a sample of 1.75 billion mock galaxies at redshifts 0, 0.5, 1, 1.5, and 2, drawn from the Horizon Run 4 N-body simulation. The shape of $\xi(r_{\bot})$ can exhibit a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. Other contributions, including the gravitational growth of structure, galaxy bias, and redshift space distortions, do not produce large redshift evolution in the shape. We show that one can make use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. This method could be applicable to future large scale structure surveys, especially photometric surveys such as DES and LSST, to derive tight cosmological constraints. This work is a continuation of our previous works as a strategy to constrain cosmological parameters using redshift-invariant physical quantities.
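The geometrical distortion described here arises because a transverse comoving separation inferred from angles, $r_{\bot} = D_C(z)\,\Delta\theta$, scales with the comoving distance of the assumed cosmology: analyzing data with the wrong parameters rescales $r_{\bot}$ by $D_C^{\rm assumed}/D_C^{\rm true}$, with a redshift-dependent factor. A minimal sketch for two flat-LCDM cosmologies follows; the two $\Omega_m$ values are illustrative assumptions, not the paper's.

```python
import numpy as np

def comoving_distance(z, omega_m, h=0.7, n=2000):
    """Flat-LCDM line-of-sight comoving distance in Mpc (trapezoidal rule)."""
    c = 299792.458  # speed of light, km/s
    zs = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m))
    dz = zs[1] - zs[0]
    integral = dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return (c / (100.0 * h)) * integral

# Transverse separations analyzed under an assumed cosmology rescale as
# r_perp_assumed = (D_C_assumed / D_C_true) * r_perp_true at each redshift.
for z in (0.5, 1.0, 2.0):
    scale = comoving_distance(z, omega_m=0.26) / comoving_distance(z, omega_m=0.31)
    print(f"z = {z}: transverse rescaling factor = {scale:.4f}")
```

Because the rescaling factor drifts with redshift when the assumed cosmology is wrong, the shape of $\xi(r_{\bot})$ acquires an apparent redshift evolution, which is the observable the method exploits.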
We revisit SN1987A constraints on light, hidden sector gauge bosons ("dark photons") that are coupled to the standard model through kinetic mixing with the photon. These constraints arise because excessive bremsstrahlung radiation of the dark photon can lead to rapid cooling of the SN1987A progenitor core, in contradiction to the observed neutrinos from that event. The models we consider are of interest as phenomenological models of strongly self-interacting dark matter. We clarify several possible ambiguities in the literature and identify errors in prior analyses. We find constraints on the dark photon mixing parameter that are in rough agreement with the early estimates of Dent et al., but only because significant errors in their analyses fortuitously canceled. Our constraints are in good agreement with subsequent analyses by Rrapaj & Reddy and Hardy & Lasenby. We estimate the dark photon bremsstrahlung rate using one-pion exchange (OPE), while Rrapaj & Reddy use a soft radiation approximation (SRA) to exploit measured nuclear scattering cross sections. We find that the differences between mixing parameter constraints obtained through the OPE approximation or the SRA approximation are roughly a factor of $\sim 2-3$. Hardy & Lasenby include plasma effects in their calculations, finding significantly weaker constraints on dark photon mixing for dark photon masses below $\sim 10\, \mathrm{MeV}$. We do not consider plasma effects. Lastly, we point out that the properties of the SN1987A progenitor core remain somewhat uncertain and that this uncertainty alone causes uncertainty of at least a factor of $\sim 2-3$ in the excluded values of the dark photon mixing parameter. Further refinement of these estimates is unwarranted until either the interior of the SN1987A progenitor is better understood or additional, large, and heretofore neglected effects are identified.
We consider a model with two real scalar fields which admits phantom domain wall solutions. We investigate the structure and evolution of these phantom domain walls in an expanding homogeneous and isotropic universe. In particular, we show that the increase of the tension of the domain walls with cosmic time, associated with the evolution of the phantom scalar field, is responsible for an additional damping term in their equations of motion. We describe the macroscopic dynamics of phantom domain walls, showing that extended phantom defects whose tension varies on a cosmological timescale cannot be the dark energy.
We present a prototype model that resolves the cosmological constant problem using matter alone, i.e., without modifying gravity. Its generic cosmological solutions adjust an arbitrarily large, negative dark energy to a positive value parametrically suppressed by an initial field velocity. Inflationary initial conditions lead to a positive dark energy exponentially smaller in magnitude than any model parameter, or any scale in the initial conditions.
We present $\psi'$MSSM, a model based on a $U(1)_{\psi'}$ extension of the minimal supersymmetric standard model. The gauge symmetry $U(1)_{\psi'}$, also known as $U(1)_N$, is a linear combination of the $U(1)_\chi$ and $U(1)_\psi$ subgroups of $E_6$. The model predicts the existence of three sterile neutrinos with masses $\lesssim 0.1~{\rm eV}$, if the $U(1)_{\psi'}$ breaking scale is of order 10 TeV. Their contribution to the effective number of neutrinos at nucleosynthesis is $\Delta N_{\nu}\simeq 0.29$. The model can provide a variety of possible cold dark matter candidates including the lightest sterile sneutrino. If the $U(1)_{\psi'}$ breaking scale is increased to $10^3~{\rm TeV}$, the sterile neutrinos, which are stable on account of a $Z_2$ symmetry, become viable warm dark matter candidates. The observed value of the standard model Higgs boson mass can be obtained with relatively light stop quarks thanks to the D-term contribution from $U(1)_{\psi'}$. The model predicts diquark and diphoton resonances which may be found at an upgraded LHC. The well-known $\mu$ problem is resolved, and the observed baryon asymmetry of the universe can be generated via leptogenesis. The breaking of $U(1)_{\psi'}$ produces superconducting strings that may be present in our galaxy. A $U(1)$ R symmetry plays a key role in keeping the proton stable and providing the light sterile neutrinos.
The nature of the electroweak (EW) phase transition (PT) is of great importance. It may give a clue to the origin of the baryon asymmetry if the EWPT is strongly first order. Although it is second order within the standard model (SM), a great many extensions of the SM are capable of altering its nature. Thus, gravitational waves (GW), which would be relics of a strongly first-order PT, are a good complementary probe of new physics beyond the SM (BSM). In this paper we elaborate the patterns of strongly first-order EWPT in the next-to-simplest extension of the SM Higgs sector, obtained by introducing a $Z_3$-symmetric singlet scalar. We find that, in the $Z_3$-symmetric limit, the tree-level barrier can lead to a strongly first-order EWPT via either a three-step or a two-step PT. Moreover, these could produce two sources of GW, although the GW from the first-step strongly first-order PT is undetectable for near-future GW experiments. The other source, with significant supercooling giving rise to $\alpha\sim{\cal O}(0.1)$, can be almost wholly covered by future space-based GW interferometers such as eLISA, DECIGO and BBO.
The quantum mechanical generation of hypermagnetic and hyperelectric fields in four-dimensional conformally flat background geometries rests on the simultaneous continuity of the effective horizon and of the extrinsic curvature across the inflationary boundary. The junction conditions for the gauge fields are derived in general terms and corroborated by explicit examples, with particular attention to the limit of a sudden (but nonetheless continuous) transition of the effective horizon. After reducing the dynamics to a pair of integral equations related by duality transformations, we compute the power spectra and deduce a novel class of logarithmic corrections which turn out to be, however, numerically insignificant and overwhelmed by the conductivity effects once the gauge modes reenter the effective horizon. In this perspective the magnetogenesis requirements and the role of the postinflationary conductivity are clarified and reappraised. As long as the total duration of the inflationary phase is nearly minimal, quasi-flat hypermagnetic power spectra are comparatively more common than in the case of vacuum initial data.