Strong lensing time-delay systems constrain cosmological parameters via the so-called time-delay distance and the angular diameter distance to the lens. In previous studies, only the former information was used. In this paper, we show that the cosmological constraints improve significantly when the latter information is also included. Specifically, the angular diameter distance plays a crucial role in breaking the degeneracy between the curvature of the Universe and the time-varying equation of state of dark energy. Using a mock sample of 55 bright quadruple lens systems based on expectations for ongoing/future imaging surveys, we find that adding the angular diameter distance information to the time-delay distance information and the cosmic microwave background data of Planck improves the constraint on the constant equation of state by 30%, on the time variation in the equation of state by a factor of two, and on the Hubble constant in the flat $\Lambda$CDM model by a factor of two. Previous forecasts therefore significantly underestimated the statistical power of time-delay systems, which are more powerful cosmological probes than previously appreciated.
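For reference, the two distances involved have standard definitions in time-delay cosmography (standard notation, not quoted from the paper). The time-delay distance is the combination of angular diameter distances that converts the measured delay $\Delta t$ into the Fermat potential difference $\Delta\phi$:

```latex
D_{\Delta t} \equiv (1 + z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}},
\qquad
\Delta t = \frac{D_{\Delta t}}{c}\,\Delta\phi ,
```

where $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$ are the angular diameter distances to the deflector, to the source, and between deflector and source. Since $D_{\Delta t}\propto 1/H_0$, the delay primarily constrains the Hubble constant, while the lens distance $D_{\rm d}$ supplies the independent information that breaks the curvature/dark-energy degeneracy.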
With the first phase of the Square Kilometre Array (SKA1) entering its final pre-construction phase, we investigate how best to maximise its scientific return. Specifically, we focus on the statistical measurement of the 21 cm power spectrum (PS) from the epoch of reionization (EoR) using the low-frequency array, SKA1-low. To facilitate this investigation we use the recently developed MCMC-based EoR analysis tool 21CMMC (Greig & Mesinger). In light of the recent 50 per cent cost reduction, we consider several different SKA core baseline designs, changing: (i) the number of antenna stations; (ii) the number of dipoles per station; and (iii) the distribution of baseline lengths. We find that a design with a reduced number of dipoles per core station (increased field of view and total number of core stations), together with shortened baselines, maximises the recovered EoR signal. With this optimal baseline design, we investigate three observing strategies, analysing the trade-off between lowering the instrumental thermal noise and increasing the field of view. SKA1-low intends to perform a three-tiered observing approach, comprising a deep 100 deg$^{2}$ survey at 1000 h, a medium-deep 1000 deg$^{2}$ survey at 100 h, and a shallow 10,000 deg$^{2}$ survey at 10 h. We find that the three observing strategies result in comparable ($\lesssim$ per cent) constraints on our EoR astrophysical parameters. This is contrary to naive predictions based purely on the total signal-to-noise, highlighting the need to use EoR parameter constraints as a figure of merit in order to maximise scientific returns with next-generation interferometers.
Calibrating the photometric redshifts of $>10^9$ galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the self-organizing map, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey selected to approximate the anticipated Euclid weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected Euclid filters. Mapping this multicolor distribution lets us determine where - in galaxy color space - redshifts from current spectroscopic surveys exist and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color-redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the Euclid survey, a similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and WFIRST.
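As a rough illustration of the mapping technique, a minimal self-organizing map fits in a few dozen lines of NumPy. The toy color data, grid size, and learning schedule below are all illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0):
    """Train a minimal self-organizing map on `data` (n_samples, n_features)."""
    gx, gy = grid
    # Cell coordinates on the 2-D grid, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    weights = rng.normal(size=(gx * gy, data.shape[1]))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        frac = t / n_iter
        lr = lr0 * (1 - frac)                  # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5      # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)
    return weights

def map_to_cells(data, weights):
    """Assign each object to its best-matching SOM cell."""
    d = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)

# Toy "colors": two galaxy populations in a 5-band color space.
colors = np.vstack([rng.normal(0.0, 0.3, size=(500, 5)),
                    rng.normal(1.5, 0.3, size=(500, 5))])
w = train_som(colors)
cells = map_to_cells(colors, w)

# A toy spectroscopic subsample; cells containing no spec-z flag where
# the training sample fails to cover the photometric color space.
has_specz = np.zeros(len(colors), bool)
has_specz[rng.choice(len(colors), 100, replace=False)] = True
covered = np.unique(cells[has_specz])
print(f"{len(covered)} of {len(np.unique(cells))} occupied cells have spec-z coverage")
```

The last step is the point of the exercise: occupied cells with no spectroscopic members identify exactly the color-space regions where additional spectroscopy is needed.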
We consider the prospects for the indirect detection of low mass dark matter which couples dominantly to quarks. If the center of mass energy is below about 280 MeV, the kinematically allowed final states will be dominated by photons and neutral pions, producing striking signatures at gamma ray telescopes. In fact, an array of new instruments has been proposed that would greatly improve sensitivity to photons in this energy range. We find that planned instruments can improve on current sensitivity to dark matter models of this type by up to a few orders of magnitude.
In~\citep{Shafieloo2009}, Shafieloo, Sahni and Starobinsky first proposed the possibility that the current cosmic acceleration (CA) is slowing down. This is rather counterintuitive, because a slowing down CA cannot be accommodated in almost any of the mainstream cosmological models. In this work, by exploring the evolutionary trajectories of the dark energy equation of state $w(z)$ and the deceleration parameter $q(z)$, we present a comprehensive investigation of the slowing down of CA from both the theoretical and the observational sides. On the theoretical side, we study the impact of different forms of $w(z)$ using six parametrization models, and then discuss the effects of spatial curvature. On the observational side, we investigate the effects of different type Ia supernova (SNe Ia), baryon acoustic oscillation (BAO), and cosmic microwave background (CMB) data sets, respectively. We find that the evolution of CA is insensitive to the specific form of $w(z)$; in contrast, a non-flat Universe favors a slowing down CA more than a flat Universe does. Moreover, we find that the SNLS3 SNe Ia data set favors a slowing down CA at the 1$\sigma$ confidence level, while the JLA SNe Ia sample prefers an eternal CA; in contrast, the effects of different BAO data are negligible. In addition, full CMB data favor a slowing down CA more than CMB distance prior data do. Since the evolutionary behavior of CA depends on both the theoretical models and the observational data, the possibility of a slowing down CA cannot be confirmed by current observations.
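The link between $w(z)$ and the deceleration parameter traced in such analyses can be sketched directly. A minimal example for a flat universe with the CPL parametrization $w(z) = w_0 + w_a\,z/(1+z)$ (toy parameter values, not the paper's fits):

```python
import numpy as np

def q_of_z(z, om0=0.3, w0=-1.0, wa=0.0):
    """Deceleration parameter q(z) in a flat universe with CPL dark energy,
    w(z) = w0 + wa * z / (1 + z). Illustrative values, not fitted ones."""
    w = w0 + wa * z / (1.0 + z)
    # CPL dark-energy density evolution rho_de(z) / rho_de(0):
    f_de = (1.0 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1.0 + z))
    e2 = om0 * (1.0 + z) ** 3 + (1 - om0) * f_de        # H^2(z) / H0^2
    om_z = om0 * (1.0 + z) ** 3 / e2
    ode_z = (1 - om0) * f_de / e2
    # q = (1/2) * sum_i Omega_i(z) * (1 + 3 w_i); matter has w = 0.
    return 0.5 * om_z + 0.5 * (1 + 3 * w) * ode_z

z = np.linspace(0.0, 2.0, 201)
q = q_of_z(z)       # flat LambdaCDM: accelerating today, decelerating early
print(f"q(0) = {q[0]:.3f}")   # -> q(0) = -0.550
```

A "slowing down" of the acceleration corresponds to $q(z)$ turning back up toward zero at low redshift, which is what the trajectories reconstructed from the data are tested for.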
The 21cm-galaxy cross-power spectrum is expected to be one of the most promising probes of the Epoch of Reionization (EoR), as it could offer information about the progress of reionization and the typical scale of ionized regions at different redshifts. With upcoming observations of 21cm emission from the EoR with the Low Frequency Array (LOFAR), and of high-redshift Ly$\alpha$ emitters (LAEs) with Subaru's Hyper Suprime-Cam (HSC), we investigate the observability of such a cross-power spectrum with these two instruments, which are both planning to observe the ELAIS-N1 field at z=6.6. In this paper we use N-body + radiative transfer (both for continuum and Ly$\alpha$ photons) simulations at redshifts 6.68, 7.06 and 7.3 to compute the 3D theoretical 21cm-galaxy cross-power spectrum, as well as to predict the 2D 21cm-galaxy cross-power spectrum expected to be observed by LOFAR and HSC. Once noise and projection effects are accounted for, our predictions of the 21cm-galaxy cross-power spectrum show a clear anti-correlation on scales larger than ~ 60 h$^{-1}$ Mpc (corresponding to k ~ 0.1 h Mpc$^{-1}$), with levels of significance p=0.04 at z=6.6 and p=0.048 at z=7.3. On smaller scales, instead, the signal is completely contaminated.
Relic gravitational waves (RGWs) generated in the early Universe form a stochastic GW background, which can be directly probed by measuring the timing residuals of millisecond pulsars. In this paper, we investigate the constraints on the RGWs and on the inflationary parameters by the observations of current and potential future pulsar timing arrays. In particular, we focus on the effects of various cosmic phase transitions (e.g. $e^{+}e^{-}$ annihilation, the QCD transition and SUSY breaking) and of relativistic free-streaming gases (neutrinos and dark fluids) in the general scenario of the early Universe, which have been neglected in previous works. We find that the phase transitions can significantly damp the RGWs in the sensitive frequency range of pulsar timing arrays, and the upper limits on the tensor-to-scalar ratio $r$ increase by a factor of $\sim 2$ for both current and future observations. However, the effects of free-streaming neutrinos and dark fluids are all too small to be detected. Meanwhile, we find that, if the effective equation of state $w$ in the early Universe is larger than $1/3$, i.e. deviating from the standard hot big bang scenario, the detection of RGWs by pulsar timing arrays becomes much more promising.
We present a dynamical classification system for galaxies based on the shapes of their circular velocity curves (CVCs). We derive the CVCs of 40 SAURON and 42 CALIFA galaxies across the Hubble sequence via a full line-of-sight integration as provided by solutions of the axisymmetric Jeans equations. We apply Principal Component Analysis (PCA) to the circular-curve shapes to find characteristic features and use a k-means classifier to separate the circular curves into classes. This objective classification method identifies four different classes, which we name Slow-Rising (SR), Flat (F), Sharp-Peaked (SP) and Round-Peaked (RP) circular curves. SR-CVCs are mostly represented by late-type spiral galaxies (Scd-Sd) with no prominent spheroids in the central parts and slowly rising velocities; F-CVCs span almost all morphological types (E,S0,Sab,Sb-Sbc) with flat velocity profiles at almost all radii; SP-CVCs are represented by early-type and early-type spiral galaxies (E,S0,Sb-Sbc) with prominent spheroids and sharp peaks in the central velocities. RP-CVCs are represented by only two morphological types (E,Sa-Sab) with prominent spheroids, but RP-CVCs have much rounder peaks in the central velocities than SP-CVCs. RP-CVCs are typical of high-mass galaxies, while SR-CVCs are found for low-mass galaxies. Intermediate-mass galaxies usually have F-CVCs and SP-CVCs. Circular curve classification presents an alternative to typical morphological classification and may be more tightly linked to galaxy evolution.
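The classification pipeline described here (PCA on normalized curve shapes, followed by k-means) can be sketched on toy data. The four synthetic curve families below are illustrative stand-ins for real CVCs, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
r = np.linspace(0.1, 10, 50)   # radial grid (arbitrary units)

# Toy circular-velocity-curve shapes loosely mimicking the four classes:
# slow-rising, flat, sharp-peaked, round-peaked.
def toy_cvc(kind):
    if kind == 0:  return r / (r + 5.0)                      # slow-rising
    if kind == 1:  return r / (r + 0.5)                      # flat
    if kind == 2:  return 1.5 * r / (r**2 / 2 + 0.3) + 0.5   # sharp peak
    return np.exp(-(r - 3) ** 2 / 8) + 0.4                   # round peak

curves = np.array([toy_cvc(k) + rng.normal(0, 0.02, r.size)
                   for k in np.repeat(np.arange(4), 20)])
# Normalize each curve by its maximum so only the *shape* matters.
curves /= curves.max(axis=1, keepdims=True)

# Compress the shapes to a few principal components, then cluster.
pcs = PCA(n_components=3).fit_transform(curves)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
print("cluster sizes:", np.bincount(labels))
```

The normalization step mirrors the idea that the classification should respond to curve shape rather than overall velocity amplitude.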
Cosmological N-body hydrodynamic computations following atomic and molecular chemistry (e$^-$, H, H$^+$, H$^-$, He, He$^+$, He$^{++}$, D, D$^+$, H$_2$, H$_2^+$, HD, HeH$^+$), gas cooling, star formation and production of heavy elements (C, N, O, Ne, Mg, Si, S, Ca, Fe, etc.) from stars covering a range of mass and metallicity are used to explore the origin of several chemical abundance patterns and to study both the metal and molecular content during simulated galaxy assembly. The resulting trends show a remarkable similarity to up-to-date observations of the most metal-poor damped Lyman-$\alpha$ absorbers at redshift $z\gtrsim 2$. These exhibit a transient nature and represent collapsing gaseous structures captured while cooling is becoming effective in lowering the temperature below $\sim 10^4\,\rm K$, before they are disrupted by episodes of star formation or tidal effects. Our theoretical results agree with the available data for typical elemental ratios, such as [C/O], [Si/Fe], [O/Fe], [Si/O], [Fe/H], [O/H] at redshifts $z\sim 2-7$. Correlations between HI and H$_2$ abundances show temporal and local variations and large spreads as a result of the increasing cosmic star formation activity from $z\sim 6$ to $z\sim 3$. The scatter we find in the abundance ratios is compatible with the observational data and is explained by simultaneous enrichment by sources from different stellar phases or belonging to different stellar populations. Simulated synthetic spectra support the existence of metal-poor cold clumps with large optical depth at $z\sim 6$ that could be potential population~III sites at low or intermediate redshift. The expected dust content is in line with recent determinations.
In this paper, we implement a perturbative approach, first proposed by Bouchet & Gispert (1999), to estimate the variation of the spectral index of Galactic polarized synchrotron emission, using a linear combination of simulated Stokes Q polarization maps of selected frequency bands from WMAP and Planck observations on a region of sky dominated by the synchrotron Stokes Q signal. We find that a first-order perturbative analysis recovers the input spectral index map well. Along with the spectral index variation map, our method provides a fixed reference index, \hat \beta_{0s}, over the sky portion being analyzed. Using Monte Carlo simulations we find that <\hat \beta_{0s}> = -2.84 \pm 0.01, which matches closely the position of a peak at \beta_s(p) = -2.85 of the empirical probability density function of input synchrotron indices, obtained from the same sky region. For thermal dust, the mean recovered spectral index from simulations, <\hat \beta_d> = 2.00 \pm 0.004, matches very well with the spatially fixed input thermal dust spectral index \beta_d = 2.00. As accompanying results of the method we also reconstruct the CMB, thermal dust and a synchrotron template component with fixed spectral indices over the {\it entire} sky region. We use full pixel-pixel noise covariance matrices of all frequency bands, estimated from the sky region being analyzed, to obtain the reference spectral indices for synchrotron and thermal dust, the spectral index variation map, the CMB map, and the thermal dust and synchrotron template components. The perturbative technique as implemented in this work has the interesting property that it can build a model that describes the data to an arbitrary degree of accuracy (and precision), as allowed by the data. We argue that our method of reference spectral index determination and of CMB, thermal dust and synchrotron template component reconstruction is a maximum likelihood method.
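The first-order perturbative step amounts to a Taylor expansion of the synchrotron frequency scaling about the reference index (a standard expansion, written here in generic notation rather than the paper's):

```latex
Q(\nu, p)
= \left(\frac{\nu}{\nu_0}\right)^{\beta_{0s} + \delta\beta_s(p)} Q(\nu_0, p)
\;\approx\;
\left(\frac{\nu}{\nu_0}\right)^{\beta_{0s}}
\left[\, 1 + \delta\beta_s(p)\,\ln\!\left(\frac{\nu}{\nu_0}\right) \right] Q(\nu_0, p),
```

where $\beta_{0s}$ is the fixed reference index over the analyzed region and $\delta\beta_s(p)$ is the small per-pixel deviation; the linear term in $\delta\beta_s$ is what the first-order analysis recovers as the spectral index variation map.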
Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they can bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contributions at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects can influence forecast analyses on ultra-large scales.
In this paper, we point out and study a generic type of signal existing in primordial universe models, which can be used to model-independently distinguish the inflation scenario from alternatives. These signals are generated by massive fields that function as standard clocks. The role of massive fields as standard clocks has been realized in previous works. Although the existence of such massive fields is generic, the previous realizations require sharp features to classically excite the oscillations of the massive clock fields. Here, we point out that the quantum fluctuations of massive fields can actually serve the same purpose as the standard clocks. We show that they are also able to directly record the defining property of the scenario type, namely, the scale factor of the primordial universe as a function of time a(t), but through shape-dependent oscillatory features in non-Gaussianities. Since quantum fluctuating massive fields exist in any realistic primordial universe model, these quantum primordial standard clock signals are present in any inflation model, and should exist quite generally in alternative-to-inflation scenarios as well. However, the amplitude of such signals is very model-dependent.
We measure the weak gravitational lensing shear power spectra and their cross-power in two photometric redshift bins from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). The measurements are performed directly in multipole space in terms of adjustable band powers. For the extraction of the band powers from the data we have implemented and extended a quadratic estimator, a maximum likelihood method that allows us to readily take into account irregular survey geometries, masks, and varying sampling densities. We find the 68 per cent credible intervals in the $\sigma_8$-$\Omega_{\rm m}$-plane to be marginally consistent with results from $Planck$ for a simple five parameter $\Lambda$CDM model. For the projected parameter $S_8 \equiv \sigma_8(\Omega_{\rm m}/0.3)^{0.5}$ we obtain a best-fitting value of $S_8 = 0.768_{-0.039}^{+0.045}$. This constraint is consistent with results from other CFHTLenS studies as well as the Dark Energy Survey. Our most conservative model, including modifications to the power spectrum due to baryon feedback and marginalization over photometric redshift errors, yields an upper limit on the total mass of three degenerate massive neutrinos of $\Sigma m_\nu < 4.53 \, {\rm eV}$ at 95 per cent credibility, while a Bayesian model comparison does not favour any model extension beyond a simple five parameter $\Lambda$CDM model. Combining the shear likelihood with $Planck$ breaks the $\sigma_8$-$\Omega_{\rm m}$-degeneracy and yields $\sigma_8=0.817_{-0.014}^{+0.013}$ and $\Omega_{\rm m} = 0.298 \pm 0.011$ which is fully consistent with results from $Planck$ alone.
The polarization of the cosmic microwave background (CMB) can be used to constrain cosmological birefringence, the rotation of the linear polarization of CMB photons potentially induced by parity-violating physics beyond the standard model. This effect produces non-null CMB cross-correlations between temperature and B-mode polarization, and between E- and B-mode polarization. Both cross-correlations are otherwise null in the standard cosmological model. We use the recently released 2015 Planck likelihood in combination with the Bicep2/Keck/Planck (BKP) likelihood to constrain the birefringence angle $\alpha$. Our findings, which are compatible with no detection, read $\alpha = 0.0^{\circ} \pm 1.3^{\circ} \mbox{ (stat)} \pm 1^{\circ} \mbox{ (sys)}$ for {\sc Planck} data and $\alpha = 0.30^{\circ} \pm 0.27^{\circ} \mbox{ (stat)} \pm 1^{\circ} \mbox{ (sys)}$ for BKP data. We finally forecast the expected improvements over present constraints when the Planck BB, TB and EB spectra at high $\ell$ are included in the analysis.
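For context, a uniform rotation of the polarization plane by an angle $\alpha$ generates the otherwise-null cross-spectra through the standard relations (generic notation, not quoted from the paper):

```latex
C_\ell^{TB,\,{\rm obs}} = C_\ell^{TE}\,\sin(2\alpha),
\qquad
C_\ell^{EB,\,{\rm obs}} = \tfrac{1}{2}\left(C_\ell^{EE} - C_\ell^{BB}\right)\sin(4\alpha),
```

so that for $\alpha = 0$ both observed cross-spectra vanish, recovering the standard cosmological model; a fit of these relations to the measured TB and EB spectra is what constrains $\alpha$.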
We present results obtained from a set of cosmological hydrodynamic simulations of galaxy clusters, aimed at comparing predictions with observational data on the diversity between cool-core and non-cool-core clusters. Our simulations include the effects of stellar and AGN feedback and are based on an improved version of the Smoothed-Particle-Hydrodynamics code GADGET-3, which ameliorates gas mixing and better captures gas-dynamical instabilities by including a suitable artificial thermal diffusion. In this Letter, we focus our analysis on the entropy profiles, our primary diagnostic to classify the degree of cool-coreness of clusters, and on the iron profiles. In keeping with observations, our simulated clusters display a variety of behaviors in entropy profiles: they range from steadily decreasing profiles at small radii, characteristic of cool-core systems, to nearly flat core isentropic profiles, characteristic of non-cool-core systems. Using observational criteria to distinguish between the two classes of objects, we find them to occur in similar proportions in simulations and in observations. Furthermore, we also find that simulated cool-core clusters have profiles of iron abundance that are steeper than those of non-cool-core clusters, also in agreement with observational results. We show that the capability of our simulations to generate a realistic cool-core structure in the cluster population is due to AGN feedback and artificial thermal diffusion: their combined action naturally distributes the energy extracted from supermassive black holes and compensates the radiative losses of low-entropy gas with short cooling times residing in the cluster core.
The uncertain origin of the recently discovered 'changing-look' quasar phenomenon - in which a luminous quasar dims significantly to a quiescent state in repeat spectroscopy over ~10 year timescales - may present unexpected challenges to our understanding of quasar accretion. To better understand this phenomenon, we take a first step toward building a statistical sample of changing-look quasars with a systematic but simple archival search for these objects in the Sloan Digital Sky Survey Data Release 12. By leveraging the >10 year baselines for objects with repeat spectroscopy, we uncover two new changing-look quasars. Decomposition of the multi-epoch spectra and analysis of the broad emission lines suggest that the quasar accretion disk emission dims due to rapidly decreasing accretion rates, while disfavoring changes in intrinsic dust extinction. Narrow emission line energetics also support intrinsic dimming of quasar emission as the origin for this phenomenon rather than transient tidal disruption events. Although our search criteria included quasars at all redshifts and quasar transitions from either quasar-like to galaxy-like states or the reverse, all the most confident changing-look quasars discovered thus far have been relatively low-redshift (z ~ 0.2 - 0.3) and only exhibit quasar-like to galaxy-like transitions.
We report the discovery of a new "changing-look" quasar, SDSS J101152.98+544206.4, through repeat spectroscopy from the Time Domain Spectroscopic Survey. This is an addition to a small but growing set of quasars whose blue continua and broad optical emission lines have been observed to decline by a large factor on a time scale of approximately a decade. The 5100 Angstrom monochromatic continuum luminosity of this quasar drops by a factor of > 9.8 in a rest-frame time interval of < 9.7 years, while the broad H-alpha luminosity drops by a factor of 55 in the same amount of time. The width of the broad H-alpha line increases in the dim state such that the black hole mass derived from the appropriate single-epoch scaling relation agrees between the two epochs within a factor of 3. The fluxes of the narrow emission lines do not appear to change between epochs. The light curve obtained by the Catalina Sky Survey suggests that the transition occurs within a rest-frame time interval of approximately 500 days. We examine three possible mechanisms for this transition suggested in the recent literature. An abrupt change in the reddening towards the central engine is disfavored by the substantial difference between the timescale to obscure the central engine and the observed timescale of the transition. A decaying tidal disruption flare is consistent with the decay rate of the light curve but not with the prolonged bright state preceding the decay, nor can this scenario provide the power required by the luminosities of the emission lines. An abrupt drop in the accretion rate onto the supermassive black hole appears to be the most plausible explanation for the rapid dimming.
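The factor-of-3 mass consistency can be sanity-checked with the approximate single-epoch virial scaling $M_{\rm BH}\propto {\rm FWHM}^2\,L^{1/2}$; published H-alpha calibrations use slightly different exponents, so the numbers below are only indicative:

```python
def virial_mass_ratio(fwhm_ratio, lum_ratio):
    """Ratio of single-epoch virial BH masses between two epochs, using the
    approximate scaling M_BH ~ FWHM^2 * L^0.5 (exact exponents vary by
    calibration; this is a back-of-the-envelope check, not the paper's fit)."""
    return fwhm_ratio ** 2 * lum_ratio ** 0.5

lum_ratio = 1 / 55        # broad H-alpha luminosity drops by a factor of 55
# Line-width increase needed for the two epochs to give *identical* masses:
fwhm_needed = (1 / lum_ratio) ** 0.25
print(f"FWHM must broaden by ~{fwhm_needed:.1f}x for exact mass agreement")
# Range of broadening consistent with agreement within a factor of 3:
lo = (fwhm_needed ** 2 / 3) ** 0.5
hi = (3 * fwhm_needed ** 2) ** 0.5
print(f"factor-of-3 agreement allows broadening between {lo:.1f}x and {hi:.1f}x")
```

This is why the observed broadening of H-alpha in the dim state is consistent with a constant black hole mass: the width increase compensates the luminosity drop in the virial estimate.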
We study the effects of dark-matter annihilation during the epoch of big-bang nucleosynthesis on the primordial abundances of light elements. We improve the calculation of the light-element abundances by taking into account the effects of anti-nucleons emitted by the annihilation of dark matter and the interconversion reactions of neutron and proton at inelastic scatterings of energetic nucleons. Comparing the theoretical prediction of the primordial light-element abundances with the latest observational constraints, we derive upper bounds on the dark-matter pair-annihilation cross section. Implications for some particle-physics models are also discussed.
We investigate the gauged Nambu-Jona-Lasinio model in curved spacetime in the large $N_c$ limit and in the slow-roll approximation. The model can be described by the renormalization-group-corrected gauge-Higgs-Yukawa theory with the corresponding compositeness conditions. Evaluating the renormalization group (RG) improved effective action, we show that such a model can produce CMB fluctuations and find the inflationary parameters: the spectral index, the tensor-to-scalar ratio and the running of the spectral index. We demonstrate that the model can naturally satisfy the Planck 2015 data and may be considered as an alternative candidate for Higgs inflation.
In the framework of the concordance cosmological model the first-order scalar and vector perturbations of the homogeneous background are derived without any supplementary approximations in addition to the weak gravitational field limit. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The obtained expressions for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except the locations of the sources, and their average values are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant it represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The connection between this range and the homogeneity scale, which naturally suggests itself, is briefly discussed along with other possible physical implications.
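Schematically, the velocity-independent part of the scalar perturbation described above takes the form (prefactors and gauge-dependent terms omitted; notation mine):

```latex
\Phi(\mathbf{r}, t) \;=\; C(t)
\;-\; \sum_i \frac{G m_i}{|\mathbf{r}-\mathbf{r}_i|}\,
\exp\!\left(-\frac{|\mathbf{r}-\mathbf{r}_i|}{\lambda(t)}\right),
```

where the sum runs over the point-like masses $m_i$ at positions $\mathbf{r}_i$ and $\lambda(t)$ is the single, time-dependent Yukawa screening range shared by all inhomogeneities; the exponential cutoff is what guarantees convergence of the sum away from the sources.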
We present preliminary results on Dark Matter searches from observations of the Perseus galaxy cluster with the MAGIC Telescopes. MAGIC is a system of two Imaging Atmospheric Cherenkov Telescopes located on the Canary island of La Palma, Spain. Galaxy clusters are the largest known gravitationally bound structures in the Universe, with masses of ~10^15 Solar masses. There is strong evidence that galaxy clusters are Dark Matter dominated objects, and therefore promising targets for Dark Matter searches, particularly for decay signals. MAGIC has taken almost 300 hours of data on the Perseus Cluster between 2009 and 2015, the deepest observational campaign on any galaxy cluster performed so far in the very high energy range of the electromagnetic spectrum. We analyze here a small sample of these data and search for signs of dark matter in the mass range between 100 GeV and 20 TeV. We apply a likelihood analysis optimized for the spectral and morphological features expected in the dark matter decay signals. This is the first time that a dedicated Dark Matter optimization is applied in a MAGIC analysis, taking into account the inferred Dark Matter distribution of the source. The results of the full dataset analysis will be published soon by the MAGIC Collaboration.
Radio interferometers suffer from the problem of missing information in their data, due to the gaps between the antennas. This results in artifacts, such as bright rings around sources, in the images obtained. Multiple deconvolution algorithms have been proposed to solve this problem and produce cleaner radio images. However, these algorithms are unable to correctly estimate uncertainties in derived scientific parameters or to always include the effects of instrumental errors. We propose an alternative technique called Bayesian Inference for Radio Observations (BIRO) which uses a Bayesian statistical framework to determine the scientific parameters and instrumental errors simultaneously directly from the raw data, without making an image. We use a simple simulation of Westerbork Synthesis Radio Telescope data including pointing errors and beam parameters as instrumental effects, to demonstrate the use of BIRO.
Starting from the idea of regularizing singularities through the variability of the fundamental constants in cosmology, we first study cyclic universe models. We find two models of oscillating mass density and pressure regularized by a varying gravitational constant $G$. Then, we extend this idea to a multiverse containing cyclic individual universes with either growing or decreasing entropy, while leaving the net entropy constant. To convey the key idea, we consider a doubleverse with the same geometrical evolution of the two "parallel" universes but with different physical evolution (physical coupling constants $c(t)$ and $G(t)$). Interestingly, the universes can be exchanged at the point of maximum expansion -- a fact already noticed in quantum cosmology. A similar scenario is also possible within the framework of Brans-Dicke theory.
We consider a modification of GR with a special type of non-local $f(R)$. The structure of the non-local operators is motivated by string field theory and $p$-adic string theory. We pay special attention to the stability of the de Sitter solution in our model and formulate conditions on the model parameters that yield a stable configuration. The relevance of unstable configurations for the description of the de Sitter phase during inflation is discussed. Special physically interesting values of the parameters are studied in detail.
We investigate the cores of fossil galaxy groups and clusters (`fossil systems') using archival Chandra data for a sample of 17 fossil systems. We determined the cool-core fraction for fossils via three observable diagnostics: the central cooling time, cuspiness, and concentration parameter. We quantified the dynamical state of the fossils by the X-ray peak/brightest cluster galaxy (BCG) and the X-ray peak/emission-weighted centre separations. We studied the X-ray emission coincident with the BCG to detect the presence of potential thermal coronae. A deprojection analysis was performed for z < 0.05 fossils to obtain cooling time and entropy profiles, and to resolve subtle temperature structures. We investigated the Lx-T relation for fossils from the 400d catalogue to see if the scaling relation deviates from that of other groups. Most fossils are identified as cool-core objects via at least two cool-core diagnostics. All fossils have their dominant elliptical galaxy within 50 kpc of the X-ray peak, and most also have the emission-weighted centre within that distance. We do not see clear indications of an X-ray corona associated with the BCG, unlike what has been observed for some other objects. Fossils do not have universal temperature profiles, with some low-temperature objects lacking features that are expected for ostensibly relaxed objects with a cool core. The entropy profiles of the z < 0.05 fossil systems can be well described by a power-law model, albeit with indices smaller than 1. The Lx-T relation of the 400d fossils shows indications of an elevated normalisation with respect to other groups, which seems to persist even after factoring in selection effects.
We present the Rhapsody-G suite of cosmological hydrodynamic AMR zoom simulations of ten massive galaxy clusters at the $M_{\rm vir}\sim10^{15}\,{\rm M}_\odot$ scale. These simulations include cooling and sub-resolution models for star formation and stellar and supermassive black hole feedback. The sample is selected to capture the whole gamut of assembly histories that produce clusters of similar final mass. We present an overview of the successes and shortcomings of such simulations in reproducing both the stellar properties of galaxies as well as properties of the hot plasma in clusters. In our simulations, a long-lived cool-core/non-cool core dichotomy arises naturally, and the emergence of non-cool cores is related to low angular momentum major mergers. Nevertheless, the cool-core clusters exhibit a low central entropy compared to observations, which cannot be alleviated by thermal AGN feedback. For cluster scaling relations we find that the simulations match well the $M_{500}-Y_{500}$ scaling of Planck SZ clusters but deviate somewhat from the observed X-ray luminosity and temperature scaling relations in the sense of being slightly too bright and too cool at fixed mass, respectively. Stars are produced at an efficiency consistent with abundance matching constraints and central galaxies have star formation rates consistent with recent observations. While our simulations thus match various key properties remarkably well, we conclude that the shortcomings strongly suggest an important role for non-thermal processes (through feedback or otherwise) or thermal conduction in shaping the intra-cluster medium.
We present an investigation into the effects of survey systematics, such as varying depth, point spread function (PSF) size, and extinction, on galaxy selection and clustering in photometric, multi-epoch, wide-area surveys. We take the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) as an example. Variations in galaxy selection due to systematics are found to cause density fluctuations of up to 10% over a small fraction of the area for most galaxy redshift slices, and as much as 50% in some extreme cases of faint high-redshift samples. This results in correlations of galaxies against survey systematics of order $\sim$1% when averaged over the survey area. We present an empirical method for mitigating these systematic correlations in measurements of angular correlation functions using weighted random points. The weights of these random catalogues are estimated from the observed galaxy overdensities by mapping them to survey parameters, allowing for non-linear dependencies of density on systematics. Applied to CFHTLenS, we find that the method reduces spurious correlations in the data by a factor of two for most galaxy samples and by as much as an order of magnitude for others. Such a treatment is particularly important for an unbiased estimation of very small correlation signals, e.g. from weak gravitational lensing magnification bias. We impose a criterion for using a galaxy sample in a magnification measurement: the majority of the systematic correlations must show improvement and must be less than 10% of the expected magnification signal when combined in the galaxy cross-correlation. After correction, the galaxy samples in CFHTLenS satisfy this criterion for $z_{\rm phot}<0.9$ and will be used in a future analysis of magnification.
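The weighted-random idea described above can be sketched in a few lines: bin the observed galaxy overdensity against the local value of a systematic (depth, PSF size, extinction, ...) and assign each random point the mean overdensity of its bin, so the weighted randoms absorb the systematic density fluctuations. This is a hypothetical one-systematic, piecewise-constant sketch, not the CFHTLenS pipeline; the function name and binning choices are illustrative.

```python
import numpy as np

def systematic_weights(gal_sys, ran_sys, n_bins=20):
    """Weight random points so they trace the galaxy density trend
    against one survey systematic.

    gal_sys : systematic value at each galaxy position
    ran_sys : systematic value at each random point
    """
    edges = np.linspace(ran_sys.min(), ran_sys.max(), n_bins + 1)
    gal_counts, _ = np.histogram(gal_sys, bins=edges)
    ran_counts, _ = np.histogram(ran_sys, bins=edges)
    # mean galaxy overdensity per bin of the systematic,
    # normalised so the survey-average density is unity
    with np.errstate(divide="ignore", invalid="ignore"):
        density = gal_counts / ran_counts
    density /= np.nanmean(density[ran_counts > 0])
    # give each random the overdensity of its bin; weighted randoms
    # then follow the systematic density fluctuations of the galaxies
    idx = np.clip(np.digitize(ran_sys, edges) - 1, 0, n_bins - 1)
    w = density[idx]
    return np.where(np.isfinite(w), w, 0.0)
```

In practice one would iterate over all relevant systematics (or fit a joint, non-linear mapping, as the abstract notes); this sketch shows only the single-map building block.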
Measurements of redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. In the era of precision cosmology, accurate covariance estimates are an essential step for the validation of galaxy clustering models of redshift-space two-point statistics. For cases where only a limited set of simulations is available, assessing the data covariance is not possible or only leads to a noisy estimate. Moreover, relying on simulated realisations of the survey data makes tests of the cosmology dependence of the covariance expensive. With these two points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering measurements and validates it against synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance of these anisotropic clustering measurements for galaxy samples with a trivial geometry, assuming a Gaussian approximation of the clustering likelihood. Explicit formulas are presented for both Fourier-space and configuration-space covariance matrices. To validate our model, we create synthetic HOD galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find excellent agreement between the model predictions and the measurements on the synthetic catalogues.
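For a sample with trivial geometry, the simplest ingredient of such a model is the diagonal Gaussian covariance of the power spectrum monopole: the per-mode variance is $2\,[P(k)+1/\bar{n}]^2$, divided by the number of independent modes in each $k$-shell, $N_k = V\,4\pi k^2\Delta k/(2\pi)^3$. The sketch below is only this textbook building block under those assumptions (the paper's multipole and wedge formulas generalize it); all names are illustrative.

```python
import numpy as np

def gaussian_pk_covariance(k, pk, nbar, volume, dk):
    """Diagonal Gaussian covariance of the power spectrum monopole.

    k      : shell centres [h/Mpc]
    pk     : model power spectrum at k [(Mpc/h)^3]
    nbar   : galaxy number density [(h/Mpc)^3] (shot noise = 1/nbar)
    volume : survey/box volume [(Mpc/h)^3]
    dk     : shell width [h/Mpc]
    """
    # number of independent Fourier modes per shell
    n_modes = volume * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi) ** 3
    # per-mode variance 2*(P + shot noise)^2, averaged over the shell
    var = 2.0 * (pk + 1.0 / nbar) ** 2 / n_modes
    return np.diag(var)
```

Off-diagonal terms vanish here because distinct shells share no modes in the Gaussian, trivial-geometry limit; non-linear and window-function contributions would populate them.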
The discrepancy between the amplitudes of matter fluctuations inferred from
Sunyaev-Zel'dovich (SZ) cluster number counts, the primary temperature, and the
polarization anisotropies of the cosmic microwave background (CMB) measured by
the Planck satellite can be reconciled if the local universe is embedded in an
under-dense region, as shown by Lee (2014). Here, using a simple void model
assuming an open Friedmann-Robertson-Walker geometry and a Markov Chain Monte
Carlo technique, we investigate how deep the local under-dense region needs to
be to resolve this discrepancy. Such a local void, if it exists, predicts a
local Hubble parameter value that differs from the global Hubble constant. We
derive the posterior distribution of the local Hubble parameter from a joint
fitting of the Planck CMB data and SZ cluster number counts assuming the simple
void model. We show that the predicted local Hubble parameter value of $H_{\rm
loc}=70.1\pm0.34~{\rm km\,s^{-1}Mpc^{-1}}$ is in better agreement with direct
local Hubble parameter measurements, indicating that the local void model
provides a consistent solution to the cluster number counts and Hubble
parameter discrepancies.
We describe the observing strategy, data reduction tools and early results of
a supernova (SN) search project, named SUDARE, conducted with the ESO VST
telescope aimed at measuring the rate of the different types of SNe in the
redshift range 0.2<z<0.8. The search was performed in two of the best-studied
extragalactic fields, CDFS and COSMOS, for which a wealth of ancillary data are
available in the literature or public archives.
(abridged)
We obtained a final sample of 117 SNe, most of which are SNe Ia (57%), with the
remaining core-collapse events divided into 44% type II, 22% type IIn and 34%
type Ib/c. In order to link the transients to their hosts, we built a catalog
of ~1.3x10^5 galaxies in the redshift range 0<z<1 with a limiting magnitude
K_AB=23.5 mag. We measured the SN rate per unit volume for SNe Ia and
core-collapse SNe in different redshift bins. The values are consistent with
other measurements from the literature. The dispersion of the rate
measurements for SNe Ia is comparable with the scatter of the theoretical
tracks for single-degenerate (SD) and double-degenerate (DD) binary system
models; therefore, the data do not allow us to discriminate between the two
progenitor scenarios. However, we note that among the three tested models, SD
and two flavours of DD, with either a steep (DDC) or a wide (DDW) delay-time
distribution, the SD model gives a better fit across the whole redshift range,
whereas the DDC better matches the steep rise up to redshift ~1.2. The DDW
appears less favoured. Contrary to recent claims, the core-collapse SN rate is
fully consistent with the prediction based on recent estimates of the star
formation history and a standard progenitor mass range.
Next-generation galaxy surveys demand the development of massive ensembles of galaxy mocks to model the observables and their covariances, which is computationally prohibitive using $N$-body simulations. COLA is a novel method designed to make this feasible by following an approximate dynamics, with speed-ups of up to 3 orders of magnitude compared to an exact $N$-body code. In this paper we investigate the optimization of the code parameters as a compromise between computational cost and the accuracy recovered in observables such as two-point clustering and halo abundance. We benchmark those observables against a state-of-the-art $N$-body run, the MICE Grand Challenge simulation (MICE-GC). We find that using 40 time steps linearly spaced from $z_i \sim 20$, and a force mesh three times finer than the particle grid, yields a matter power spectrum within $1\%$ for $k \lesssim 1\,h {\rm Mpc}^{-1}$ and a halo mass function within $5\%$ of those in the $N$-body run. In turn, the halo bias is accurate to within $2\%$ for $k \lesssim 0.7\,h {\rm Mpc}^{-1}$ whereas, in redshift space, the halo monopole and quadrupole are within $4\%$ for $k \lesssim 0.4\,h {\rm Mpc}^{-1}$. These results hold over a broad redshift range ($0 < z < 1$) and for all halo mass bins investigated ($M > 10^{12.5} \, h^{-1} \, {\rm M_{\odot}}$). To bring the clustering accuracy to the one-percent level we study various methods that re-calibrate halo masses and/or velocities. We thus propose an optimized choice of COLA code parameters as a powerful tool to optimally exploit future galaxy surveys.
It is an intriguing possibility that dark matter (DM) could have flavor quantum numbers like the quarks. We propose and investigate a class of UV-complete models of this kind, in which the dark matter is in a scalar triplet of an SU(3) flavor symmetry, and interacts with quarks via a colored flavor-singlet fermionic mediator. Such mediators could be discovered at the LHC if their masses are $\sim 1$ TeV. We constrain the DM-mediator couplings using relic abundance, direct detection, and flavor-changing neutral-current considerations. We find that, for reasonable values of its couplings, scalar flavored DM can contribute significantly to the real and imaginary parts of the $B_s$-$\bar B_s$ mixing amplitude. We further assess the potential for such models to explain the galactic center GeV gamma-ray excess.
Using a sample of ~100 nearby line-emitting galaxy nuclei, we have built the currently definitive atlas of spectroscopic measurements of H_alpha and neighboring emission lines at subarcsecond scales. We employ these data in a quantitative comparison of the nebular emission in Hubble Space Telescope (HST) and ground-based apertures, which offer an order-of-magnitude difference in contrast, and provide new statistical constraints on the degree to which Transition Objects and low-ionization nuclear emission-line regions (LINERs) are powered by an accreting black hole at <10 pc. We show that while the small-aperture observations clearly resolve the nebular emission, the aperture dependence in the line ratios is generally weak, and this can be explained by gradients in the density of the line-emitting gas: the higher densities in the more nuclear regions potentially flatten the excitation gradients, suppressing the forbidden emission. The Transition Objects show a threefold increase in the incidence of broad H_alpha emission in the high-resolution data, as well as the strongest density gradients, supporting the composite model for these systems as accreting sources surrounded by star-forming activity. The narrow-line LINERs appear to be the weaker counterparts of the Type 1 LINERs, where the low accretion rates cause the disappearance of the broad-line component. The enhanced sensitivity of the HST observations reveals a 30% increase in the incidence of accretion-powered systems at z~0. A comparison of the strength of the broad-line emission detected at different epochs implies potential broad-line variability on a decade-long timescale, with at least a factor of three in amplitude.
We present estimates for the size and the logarithmic slope of the disk temperature profile of the lensed quasar Q 2237+0305 independent of the component velocities. These estimates are based on 6 epochs of multi-wavelength narrow band images from the Nordic Optical Telescope. For each pair of lensed images and for each photometric band, we determine the microlensing amplitude and chromaticity using pre-existing mid-IR photometry to define the baseline for no microlensing magnification. A statistical comparison of the combined microlensing data (6 epochs $\times$ 5 narrow bands $\times$ 6 image pairs) with simulations based on microlensing magnification maps gives Bayesian estimates for the half-light radius of $R_{1/2}=8.3^{+11.8}_{-4.8}\sqrt{ \langle M \rangle/0.3\, M_\odot}$ light-days, and $p=0.7\pm0.3$ for the logarithmic temperature profile $T\propto R^{ -1/p}$ exponent. This size estimate is in good agreement with most recent studies. Other works based on the study of single microlensing events predict smaller sizes, but could be statistically biased by focussing on high magnification events.
We use cosmological simulations to identify dark matter subhalo host candidates of the Fornax dwarf spheroidal galaxy using the stellar kinematic properties of Fornax. We consider cold dark matter (CDM), warm dark matter (WDM), and decaying dark matter (DDM) simulations for our models of structure formation. The subhalo candidates in CDM typically have smaller mass and higher concentrations at z = 0 than the corresponding candidates in WDM and DDM. We examine the formation histories of the ~ 100 Fornax candidate subhalos identified in CDM simulations and, using approximate luminosity-mass relationships for subhalos, we find two of these subhalos that are consistent with both the Fornax luminosity and kinematics. These two subhalos have a peak mass over ten times larger than their z = 0 mass. We suggest that in CDM the dark matter halo hosting Fornax must have been severely stripped of mass and that it had an infall time into the Milky Way of ~ 9 Gyr ago. In WDM, we find that candidate subhalos consistent with the properties of Fornax have a similar infall time and a similar degree of mass loss, while in DDM we find a later infall time of ~ 3 - 4 Gyr ago and significantly less mass loss. We discuss these results in the context of the Fornax star formation history, and show that these predicted subhalo infall times can be linked to different star formation quenching mechanisms. This emphasizes the links between the properties of the dark matter and the mechanisms that drive galaxy evolution.
We present mid-infrared (MIR) luminosity functions (LFs) of local star-forming (SF) galaxies in the AKARI NEP-Wide Survey field. In order to derive more accurate luminosity functions, we used a spectroscopic sample only. Based on the NEP-Wide point source catalogue, which contains a large number of infrared (IR) sources distributed over the wide (5.4 sq. deg.) field, we incorporated spectroscopic redshifts for about 1790 selected targets obtained by optical follow-up surveys with MMT/Hectospec and WIYN/Hydra. The continuous AKARI wavelength coverage from 2 to 24 micron, as well as photometric data from the optical u band to the NIR H band, together with the spectroscopic redshifts for our sample galaxies, enables us to derive accurate spectral energy distributions (SEDs) in the mid-infrared. We carried out an SED-fitting analysis and employed the 1/Vmax method to derive the MIR (8, 12, and 15 micron rest-frame) luminosity functions. We fit our 8 micron LFs with a double power law, with power-law indices alpha = 1.53 and beta = 2.85 on either side of the break luminosity. We made extensive comparisons with various MIR LFs from the literature. Our results for local galaxies in the NEP region are generally consistent with other works for different fields over wide luminosity ranges. The comparisons with the results from the NEP-Deep data, as well as with other LFs, imply luminosity evolution from higher redshifts towards the present epoch.
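The 1/Vmax estimator used above is simple to state: each galaxy contributes $1/V_{\rm max}$ to its luminosity bin, where $V_{\rm max}$ is the maximum comoving volume within which the galaxy would still pass the survey flux limit. Below is a generic sketch (not the authors' code) that takes precomputed $V_{\rm max}$ values as input; the Poisson error per bin is the quadrature sum of the weights.

```python
import numpy as np

def vmax_luminosity_function(log_lum, v_max, bin_edges):
    """1/Vmax luminosity function estimator.

    log_lum   : log10 luminosity of each galaxy
    v_max     : maximum comoving volume [Mpc^3] for each galaxy
    bin_edges : edges of the log-luminosity bins

    Returns (phi, err): number density per Mpc^3 per dex and its
    Poisson uncertainty in each bin.
    """
    dlogl = np.diff(bin_edges)
    phi = np.zeros(len(bin_edges) - 1)
    err = np.zeros_like(phi)
    idx = np.digitize(log_lum, bin_edges) - 1
    for i in range(len(phi)):
        # each galaxy in bin i contributes 1/V_max
        w = 1.0 / v_max[idx == i]
        phi[i] = w.sum() / dlogl[i]
        err[i] = np.sqrt((w**2).sum()) / dlogl[i]
    return phi, err
```

Computing $V_{\rm max}$ itself requires the survey flux limit and a cosmology (e.g. via a luminosity-distance inversion), which is kept outside this sketch.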
In the context of a Higgs model involving gauge and Yukawa interactions with spontaneous gauge symmetry breaking, we consider $\lambda \phi^4$ inflation with a non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. In order to avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead not only to non-trivial relations amongst the particle mass spectrum of the model, but also to correlations between the inflationary predictions and the mass spectrum. For a concrete discussion, we investigate the minimal $B-L$ extension of the Standard Model, identifying the $B-L$ Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the $B-L$ gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. In other words, if the $B-L$ gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio will provide constraints on the inflationary predictions.
This year marks the hundredth anniversary of Einstein's 1915 landmark paper "Die Feldgleichungen der Gravitation" in which the field equations of general relativity were correctly formulated for the first time, thus rendering general relativity a complete theory. Over the subsequent hundred years physicists and astronomers have struggled with uncovering the consequences and applications of these equations. This contribution, which was written as an introduction to six chapters dealing with the connection between general relativity and cosmology that will appear in the two-volume book "One Hundred Years of General Relativity: From Genesis and Empirical Foundations to Gravitational Waves, Cosmology and Quantum Gravity," endeavors to provide a historical overview of the connection between general relativity and cosmology, two areas whose development has been closely intertwined.
We study the inflation scenarios, in the framework of superstring theory, where the inflaton is an axion producing the adiabatic curvature perturbations while there exists another light axion producing the isocurvature perturbations. We discuss how the non-trivial couplings among string axions can generically arise, and calculate the consequent cross-correlations between the adiabatic and isocurvature modes through concrete examples. Based on the Planck analysis on the generally correlated isocurvature perturbations, we show that there is a preference for the existence of the correlated isocurvature modes for the axion monodromy inflation while the natural inflation disfavors such isocurvature modes.
Gravity theories beyond General Relativity typically predict dipolar gravitational emission by compact-star binaries. This emission is sourced by "sensitivity" parameters depending on the stellar compactness. We introduce a general formalism to calculate these parameters, and show that in shift-symmetric Horndeski theories stellar sensitivities and dipolar radiation vanish, provided that the binary's dynamics is perturbative (i.e. the post-Newtonian formalism is applicable) and cosmological-expansion effects can be neglected. This allows the orbital decay observed in binary pulsars to be reproduced.
We investigate the role of the H_2^+ channel in H_2 molecule formation during the collapse of primordial gas clouds immersed in strong radiation fields, which are assumed to have the shape of a diluted black-body spectrum with temperature T_rad. Since the photodissociation rate of H_2^+ depends on its level population, we take full account of the vibrationally-resolved H_2^+ kinetics. We find that in clouds under soft but intense radiation fields with spectral temperature T_rad < 7000 K, the H_2^+ channel is the dominant H_2 formation process. On the other hand, for harder spectra with T_rad > 7000 K, the H^- channel overtakes the H_2^+ channel in the production of molecular hydrogen. We calculate the critical radiation intensity needed for supermassive star formation by direct collapse and examine its dependence on the H_2^+ level population. Under the assumption of local thermodynamic equilibrium (LTE) level populations, the critical intensity is underestimated by a factor of a few for soft spectra with T_rad < 7000 K. For harder spectra, the value of the critical intensity is not affected by the level population of H_2^+. This result justifies previous estimates of the critical intensity assuming LTE populations, since radiation sources such as young and/or metal-poor galaxies are predicted to have rather hard spectra.
Polarimetric surveys of the microwave sky at large angular scales are crucial in testing cosmic inflation, as inflation predicts a divergence-free $B$-mode angular power spectrum that extends to the largest scales on the sky. A promising technique for realizing such large surveys is rapid polarization modulation, which mitigates variations in the atmosphere, coupling to the environment, and drifts in instrumental response. Variable-delay polarization modulators (VPMs) change the state of polarization by introducing a controlled, adjustable delay between orthogonal linear polarizations, resulting in transformations between linear and circular polarization states. VPMs are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, we consider and address the effect of emission from a 300 K VPM on system performance. Though instrument alignment can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. We consider VPM emission in the presence of system misalignments and temperature variation, and use simulations of time-ordered data (TOD) to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guide experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls described here, front-end VPM modulation can be very powerful. None of the systematic errors studied fundamentally limits the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of $r=0.01$.
We provide a model-independent argument indicating that for a black hole of entropy N the non-thermal deviations from Hawking radiation, per emission time, are of order 1/N, as opposed to exp(-N). This fact abolishes the standard a priori basis for the information paradox.
Imaging and spectroscopy at (sub-)millimeter wavelengths are key frontiers in astronomy and cosmology. Large area spectral surveys with moderate spectral resolution (R=50-200) will be used to characterize large scale structure and star formation through intensity mapping surveys in emission lines such as the CO rotational transitions. Such surveys will also be used to study the SZ effect, and will detect the emission lines and continuum spectrum of individual objects. WSPEC is an instrument proposed to target these science goals. It is a channelizing spectrometer realized in rectangular waveguide, fabricated using conventional high-precision metal machining. Each spectrometer is coupled to free space with a machined feed horn, and the devices are tiled into a 2D array to fill the focal plane of the telescope. The detectors will be aluminum Lumped-Element Kinetic Inductance Detectors (LEKIDs). To target the CO lines and SZ effect, we will have bands at 135-175 GHz and 190-250 GHz, each Nyquist-sampled at R~200 resolution. Here we discuss the instrument concept and design, and successful initial testing of a WR10 (i.e. 90 GHz) prototype spectrometer. We recently tested a WR5 (180 GHz) prototype to verify that the concept works at higher frequencies, and also designed a resonant backshort structure that may further increase the optical efficiency. We are making progress towards integrating a spectrometer with a LEKID array and deploying a prototype device to a telescope for first light.
We revisit the effect of peculiar velocities on low-redshift type Ia supernovae. Velocities introduce an additional guaranteed source of correlations between supernova magnitudes that should be considered in all analyses of nearby supernova samples but has largely been neglected in the past. Applying a likelihood analysis to the latest compilation of nearby supernovae, we find no evidence for the presence of these correlations, although, given the significant noise, the data is also consistent with the correlations predicted for the standard LCDM model. We then consider the dipolar component of the velocity correlations - the frequently studied "bulk velocity" - and explicitly demonstrate that including the velocity correlations in the data covariance matrix is crucial for drawing correct and unambiguous conclusions about the bulk flow. In particular, current supernova data is consistent with no excess bulk flow on top of what is expected in LCDM and effectively captured by the covariance. We further clarify the nature of the apparent bulk flow that is inferred when the velocity covariance is ignored. We show that a significant fraction of this quantity is expected to be noise bias due to uncertainties in supernova magnitudes and not any physical peculiar motion.
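The point about including velocity correlations in the covariance can be made concrete with a standard Gaussian likelihood: the peculiar-velocity covariance simply adds to the diagonal magnitude-error term, and dropping it is what produces the noise-biased apparent bulk flow. This is a generic sketch with illustrative names, not the paper's analysis code.

```python
import numpy as np

def sn_loglike(residuals, sigma_mag, cov_vel):
    """Gaussian log-likelihood of SN magnitude residuals.

    residuals : Hubble-diagram magnitude residuals
    sigma_mag : per-SN magnitude uncertainties
    cov_vel   : peculiar-velocity covariance matrix (model-dependent);
                passing zeros reproduces the common -- and here
                inadequate -- diagonal-only analysis
    """
    cov = cov_vel + np.diag(sigma_mag**2)
    r = np.asarray(residuals)
    sign, logdet = np.linalg.slogdet(cov)
    chi2 = r @ np.linalg.solve(cov, r)
    return -0.5 * (chi2 + logdet + len(r) * np.log(2.0 * np.pi))
```

Comparing the likelihood with and without `cov_vel` is then a direct test of whether the data demand correlated velocities beyond the diagonal noise.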
Measurements of large-scale B-mode polarization in the cosmic microwave background (CMB) are a fundamental goal of current and future CMB experiments. However, because of their much higher instrumental sensitivity, future CMB experiments will be more sensitive to any imperfect modelling of the Galactic foreground polarization in the estimation of the primordial B-mode signal. We compare the sensitivity to B-modes of different concepts of CMB satellite missions (LiteBIRD, COrE, COrE+, PRISM, EPIC, PIXIE) in the presence of Galactic foregrounds that are either correctly or incorrectly modelled, and quantify the impact of imperfect foreground modelling in the component separation process on the tensor-to-scalar parameter. Using Bayesian parametric fitting and Gibbs sampling, we perform the separation of the CMB and the Galactic foreground B-mode polarization. The resulting CMB B-mode power spectrum is used to compute the likelihood distribution of the tensor-to-scalar ratio. We focus the analysis on the very large angular scales (l<12) that can be probed only by CMB space missions, where primordial CMB B-modes dominate over spurious B-modes induced by gravitational lensing while Galactic foregrounds still show the most significant polarization intensity. We find that fitting a single modified blackbody component for the thermal dust, when the real sky contains two dust components, may strongly bias the estimation of the tensor-to-scalar ratio by more than 5{\sigma}, at least for the most sensitive experiments. Neglecting a positive curvature of the synchrotron spectral index in the parametric fitting model may bias the estimate of the tensor-to-scalar ratio by 1{\sigma} to 2{\sigma}. For sensitive CMB experiments, omitting a 1% polarized spinning dust component from the foreground modelling can induce a non-negligible bias in the reconstructed tensor-to-scalar ratio.
Under standard assumptions, including stationary and serially uncorrelated Gaussian gravitational-wave stochastic background signal and noise distributions, as well as homogeneous detector sensitivities, the standard cross-correlation detection statistic is known to be optimal in the sense of minimizing the probability of a false dismissal at a fixed probability of a false alarm. The focus of this paper is to analyze the comparative efficiency of this statistic versus a simple alternative statistic, obtained by cross-correlating the \textit{squared} measurements, in situations that deviate from such standard assumptions. We find that differences in detector sensitivities have a large impact on the comparative efficiency of the cross-correlation detection statistic, which is dominated by the alternative statistic when these differences reach one order of magnitude. This effect holds even when both the signal and noise distributions are Gaussian. While the presence of non-Gaussian signals has no material impact for reasonable parameter values, the relative inefficiency of the cross-correlation statistic is less prominent for fat-tailed noise distributions but is magnified when the noise distributions have skewness parameters of opposite signs. Our results suggest that introducing an alternative detection statistic can lead to noticeable sensitivity gains when noise distributions are possibly non-Gaussian and/or when detector sensitivities exhibit substantial differences, a situation that is expected to hold in joint detections from Advanced LIGO and Advanced Virgo, in particular in the early phases of development of the detectors, or in joint detections from Advanced LIGO and Einstein Telescope.
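The two statistics under comparison are easy to write down for discretely sampled detector outputs x and y. In this sketch the normalizations are illustrative, and the subtraction of the product of second moments in the squared statistic is one simple convention for centring it; for jointly Gaussian zero-mean streams with cross-covariance c, its expectation is 2c^2 by Isserlis' theorem.

```python
import numpy as np

def cc_statistic(x, y):
    """Standard cross-correlation detection statistic
    (sample mean of the product of the two detector outputs)."""
    return np.mean(x * y)

def squared_cc_statistic(x, y):
    """Alternative statistic: cross-correlate the *squared*
    measurements, removing the product of the second moments so the
    statistic vanishes in expectation for independent streams."""
    return np.mean(x**2 * y**2) - np.mean(x**2) * np.mean(y**2)
```

A common stochastic signal buried in independent noise makes both statistics positive in expectation; the paper's question is which one loses less power when the standard assumptions (equal sensitivities, Gaussianity) fail.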
We study the intermediate evolution model and show that, compared with a recent study of power-law evolution, the intermediate evolution is a better description of the low-redshift regime, as supported by observations of type Ia supernovae and BAO. We also find that recent data suggest that the intermediate evolution fits this redshift range as well as the $\Lambda$CDM model.
We revisit the simplest model for dark matter. In this context the dark matter candidate is a real scalar field which interacts with the Standard Model particles through the Higgs portal. We discuss the relic density constraints as well as the predictions for direct and indirect detection. The final state radiation processes are investigated in order to understand the visibility of the gamma lines from dark matter annihilation. We find two regions where one could observe the gamma lines at gamma-ray telescopes. We point out that the region where the dark matter mass is between 100 and 300 GeV can be tested in the near future at direct and indirect detection experiments.
Newtonian cosmological perturbation equations valid to full nonlinear order are well known in the literature. Assuming the absence of the transverse-tracefree part of the metric, we present the general relativistic counterpart valid to full nonlinear order. The relativistic equations are presented without taking the slicing (temporal gauge) condition. The equations do have the proper Newtonian and first post-Newtonian limits. We also present the relativistic pressure correction terms in the Newtonian hydrodynamic equations.
A recent Chandra observation of the nearby galaxy cluster Abell 585 has led to the discovery of an extended X-ray jet associated with the high-redshift background quasar B3 0727+409, a luminous radio source at redshift z=2.5. This is one of only a few examples of high-redshift X-ray jets known to date. It has a clear extension of about 10-12", corresponding to a projected length of 80-100 kpc, with a possible hot spot as far as 35" from the quasar. The archival high-resolution VLA maps surprisingly reveal no extended jet emission, except for one knot about 1.4" away from the quasar. The high X-ray to radio luminosity ratio of this source appears consistent with the $\propto (1+z)^{4}$ amplification expected from the inverse Compton radiative model. This serendipitous discovery may signal the existence of an entire population of similar systems with bright X-ray and faint radio jets at high redshift, a selection bias which must be accounted for when drawing any conclusions about the redshift evolution of jet properties, and indeed about the cosmological evolution of supermassive black holes and active galactic nuclei in general.
In this paper we present a model for dark energy in minimal supergravity with a flat K\"ahler metric and a power-law superpotential. These choices of $K$ and $W$ can lead to spontaneous supersymmetry breaking, with the minimum of the potential at $V(\varphi_0)=0$. We assume that the massive gravitino can decay into a scalar field with the same potential as before, but expanded around $\Phi\equiv\varphi-\varphi_0=0$. This expanded potential $V(\Phi)$ leads to an accelerated expansion of the universe, with a density parameter $\Omega_\Phi=0.7$ today.
We use measurements from the Planck satellite mission and galaxy redshift surveys from the last decade to test three of the basic assumptions of the standard model of cosmology, $\Lambda$CDM: the spatial curvature of the universe, the nature of dark energy, and the laws of gravity on large scales. We obtain improved constraints on several scenarios that violate one or more of these assumptions. We measure $w_0=-0.94\pm0.17$ (18\% measurement) and $1+w_a=1.16\pm0.36$ (31\% measurement) for models with a time-dependent equation of state, which is an improvement over the current best constraints \citep{Aubourg2014}. In the context of modified gravity, we consider popular scalar-tensor models as well as a parametrization of the growth factor. In the case of one-parameter $f(R)$ gravity models with a $\Lambda$CDM background, we constrain $B_0 < 1.36 \times 10^{-5}$ (1$\sigma$ C.L.), which improves on the current best \citep{XU2015} by a factor of 4. We provide the very first constraints on the coupling parameters of general scalar-tensor theory and a stringent constraint on the only free coupling parameter of Chameleon models. We also derive constraints on extended Chameleon models, improving on the current best constraint on the coupling \citep{Hojjati2011} by a factor of 6. We also measure $\gamma = 0.612 \pm 0.072$ (11.7\% measurement) for the growth-index parametrization, which is an improvement over the current best measurement of $\gamma = 0.699\pm0.110$ (16\%) \citep{Samushia14}. We improve all the current constraints by combining results from various galaxy redshift surveys in a coherent way, which includes a careful treatment of the scale-dependence introduced by modified gravity.
This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations, and the matter density fields within a Gaussian statistics approximation. The second step makes a detailed analysis of the three-dimensional large-scale structure, assuming a fixed bias model and a fixed cosmology, and allows for the reconstruction of both the final density field and the initial conditions at z=1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.
The Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey (SDSS-IV/eBOSS) will observe approximately 270,000 emission-line galaxies (ELGs) to measure the Baryon Acoustic Oscillation (BAO) standard ruler at redshift 0.9. To test different ELG selection algorithms, based on data from several imaging surveys, 9,000 spectra were observed with the SDSS spectrograph as a pilot survey. First, we provide a detailed description of each target selection algorithm tested. Then, using visual inspection and redshift quality flags, we find that the automated spectroscopic redshifts assigned by the pipeline meet the quality requirements for a robust BAO measurement. We also show the correlations between sky emission, the signal-to-noise ratio in the emission lines, and the redshift error. As a result, we provide robust redshift distributions for the different target selection schemes tested. Finally, we infer two optimal target selection algorithms to be applied to DECam photometry that fulfill the eBOSS survey efficiency requirements.
One of the most powerful techniques to study the dark sector of the Universe is weak gravitational lensing. In practice, to infer the reduced shear, weak lensing measures galaxy shapes, which are the consequence of both the intrinsic ellipticity of the sources and the integrated gravitational lensing effect along the line of sight. Hence, a very large number of galaxies is required in order to average over their individual properties and isolate the weak lensing cosmic shear signal. If this `shape noise' can be reduced, significant advances in the power of weak lensing surveys can be expected. This paper describes a general method for extracting the probability distributions of parameters from catalogues of data using Voronoi cells, which has several applications and has synergies with Bayesian hierarchical modelling approaches. This allows us to construct a probability distribution for the variance of the intrinsic ellipticity as a function of galaxy properties using only photometric data, allowing a reduction of shape noise. As a proof of concept, the method is applied to the CFHTLenS survey data. We use this approach to investigate trends of galaxy properties in the data and apply this to the case of weak lensing power spectra.
We calculate the one-point probability density functions (PDFs) and the power spectra of the thermal and kinetic Sunyaev-Zeldovich (tSZ and kSZ) effects and the mean Compton Y parameter using the Magneticum Pathfinder simulations, state-of-the-art cosmological hydrodynamical simulations of a large cosmological volume of (896 Mpc/h)^3. These simulations follow in detail the thermal and chemical evolution of the intracluster medium as well as the evolution of super-massive black holes and their associated feedback processes. We construct full-sky maps of tSZ and kSZ from the light-cones out to z=0.17, and one realization of an 8.8x8.8 degree wide, deep light-cone out to z=5.2. The local universe at z<0.027 is simulated by a constrained realisation. The tail of the one-point PDF of tSZ from the deep light-cone follows a power-law shape with an index of -3.2. Once convolved with the effective beam of Planck, it agrees with the PDF measured by Planck. The predicted tSZ power spectrum agrees with that of the Planck data at all multipoles up to l~1000, once the calculations are scaled to the Planck 2015 cosmological parameters with \Omega_m=0.308 and \sigma_8=0.8149. Consistent with the results in the literature, however, we continue to find that the tSZ power spectrum at l=3000 is significantly larger than that estimated from the high-resolution ground-based data. The simulation predicts a mean fluctuating Compton Y value of <Y>=1.18x10^{-6} for \Omega_m=0.272 and \sigma_8=0.809. Nearly half (~5x10^{-7}) of the signal comes from halos below a virial mass of 10^{13} M_\odot/h. Scaling this to the Planck 2015 parameters, we find <Y>=1.57x10^{-6}. The PDF and the power spectrum of kSZ from our simulation agree broadly with the previous work.
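The quoted rescaling of <Y> between the simulated cosmology and the Planck 2015 parameters implies an effective power-law sensitivity to \Omega_m and \sigma_8. The sketch below inverts the two quoted values to infer the effective \Omega_m exponent; the scaling form <Y> ~ sigma_8^a * Omega_m^b and the assumed sigma_8 exponent a=3 are illustrative choices, not taken from the paper:

```python
# Order-of-magnitude check of the quoted <Y> rescaling between the simulated
# cosmology (Omega_m=0.272, sigma_8=0.809) and Planck 2015 (0.308, 0.8149).
# Assumed (not from the paper): <Y> ~ sigma_8^a * Omega_m^b with a = 3.
import math

Y_sim, Y_planck = 1.18e-6, 1.57e-6
ratio = Y_planck / Y_sim          # implied rescaling factor, ~1.33
s8 = 0.8149 / 0.809               # sigma_8 ratio between the two cosmologies
om = 0.308 / 0.272                # Omega_m ratio between the two cosmologies

a = 3.0                           # assumed sigma_8 exponent (illustrative)
b = (math.log(ratio) - a * math.log(s8)) / math.log(om)
print(round(ratio, 2), round(b, 1))   # effective Omega_m exponent ~2
```

Since \sigma_8 barely changes between the two parameter sets, almost all of the ~33 per cent shift in <Y> is driven by the higher \Omega_m, whatever the exact exponents.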
Gamma-ray and microwave observations of the Galactic Center and surrounding areas indicate the presence of anomalous emission whose origin remains ambiguous. The possibility of dark matter (DM) annihilation explaining both signals, through prompt emission at gamma rays and secondary emission at microwave frequencies from interactions of high-energy electrons produced in annihilation with the Galactic magnetic fields, has attracted much interest in recent years. We investigate the DM interpretation of the Galactic Center gamma-ray excess by searching for the associated synchrotron emission in the WMAP-Planck data. Considering various magnetic field and cosmic-ray propagation models, we predict the synchrotron emission due to DM annihilation in our Galaxy, and compare it with the WMAP-Planck data at 23-70 GHz. In addition to standard microwave foregrounds, we separately model the microwave counterpart to the Fermi Bubbles and the signal due to DM, and use component separation techniques to extract the signal associated with each template from the total emission. We confirm the presence of the Haze at the level of 7% of the total sky intensity at 23 GHz in our chosen region of interest, with a harder spectrum, $I \sim \nu^{-0.8}$, than the synchrotron from regular cosmic-ray electrons. The data do not show a strong preference towards fitting the Haze by either the Bubbles or DM emission only. Inclusion of both components provides a better fit, with a DM contribution to the Haze emission of 20% at 23 GHz; however, due to significant uncertainties in foreground modeling, we do not consider this a clear detection of a DM signal. We set robust upper limits on the annihilation cross section by ignoring foregrounds, and also report best-fit DM annihilation parameters obtained from a complete template analysis. We conclude that the WMAP-Planck data are consistent with a DM interpretation of the gamma-ray excess.
We comment on the paper "Dark Matter collisions with the Human Body" by K.~Freese and C.~Savage (Phys.\ Lett.\ B {\bf 717}, 25 (2012) [arXiv:1204.1339]) and describe a dark matter model to which the results of that paper do not apply. Within this mirror dark matter model, potentially hazardous objects, mirror micrometeorites, can exist, potentially leading to diseases triggered by multiple mutations, such as cancer.
Upcoming or future deep galaxy samples with wide sky coverage can provide an independent measurement of the kinematic dipole - our motion relative to the rest frame defined by the large-scale structure. Such a measurement would present an important test of the standard cosmological model, which predicts that the galaxy-based measurement should agree precisely with the existing measurements made using the CMB. However, the statistical precision required to measure the kinematic dipole typically makes the measurement susceptible to bias from local-structure-induced dipole contamination. In order to minimize the latter, a sufficiently deep survey is required. We forecast both the statistical error and the systematic bias in kinematic dipole measurements. We find that a survey covering $\sim 75\%$ of the sky in both hemispheres and having $\sim 30$ million galaxies can detect the kinematic dipole at $5\sigma$, while its median redshift should be at least $z_{med} \sim 0.75$ for negligible bias from the local structure.
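The quoted galaxy count can be roughly motivated with a shot-noise-only estimate of the kinematic dipole signal-to-noise, based on the classic Ellis & Baldwin (1984) amplitude. The velocity, flux-count slope x, and spectral index alpha below are typical assumed values, not taken from the paper, and the idealized estimate ignores partial-sky coupling and the local-structure dipole that drive the depth requirement:

```python
# Idealized shot-noise estimate of the kinematic dipole detection significance.
# Assumed (not from the paper): v = 369 km/s (CMB dipole speed), flux-count
# slope x = 1, spectral index alpha = 0.75.
import math

c = 299792.458            # speed of light, km/s
v = 369.0                 # solar velocity w.r.t. the CMB, km/s
x, alpha = 1.0, 0.75      # typical survey values

# Ellis & Baldwin kinematic dipole amplitude in the galaxy number counts
A = (2.0 + x * (1.0 + alpha)) * v / c      # ~4.6e-3

# Shot-noise error on one dipole component for N galaxies is ~sqrt(3/N)
N = 3.0e7                                  # ~30 million galaxies
sigma = math.sqrt(3.0 / N)

snr = A / sigma
print(round(snr, 1))
```

The pure shot-noise significance comfortably exceeds $5\sigma$, which illustrates why the forecast is limited by the local-structure contamination (hence the median-redshift requirement) rather than by statistics alone.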
The connection between dark matter halos and galactic baryons is often neither well constrained nor well resolved in cosmological hydrodynamical simulations. Thus, Halo Occupation Distribution (HOD) models that assign galaxies to halos based on halo mass are frequently used to interpret clustering observations, even though it is well known that the assembly history of dark matter halos is related to their clustering. In this paper we use high-resolution hydrodynamical cosmological simulations to compare the halo and stellar mass growth of galaxies in a large-scale overdensity to those in a large-scale underdensity (on scales of about 20 Mpc). The simulation reproduces assembly bias: halos have earlier formation times in overdense environments than in underdense regions. We find that the stellar mass to halo mass ratio is larger in overdense regions for central galaxies residing in halos with masses between 10$^{11}$-10$^{12.9}$ M$_{\odot}$. When we force the local density (within 2 Mpc) at z=0 to be the same for galaxies in the large-scale over- and underdensities, we find the same results. We posit that this difference can be explained by a combination of earlier formation times, more interactions with neighbors at early times, and more filaments feeding galaxies in overdense regions. This result calls into question the standard practice of assigning stellar mass to halos based only on their mass, rather than also considering their larger environment.
We present results from a subset of simulations from the "Evolution and Assembly of GaLaxies and their Environments" (EAGLE) suite in which the formulation of the hydrodynamics scheme is varied. We compare simulations that use the same subgrid models without re-calibration of the parameters but employing the standard GADGET flavour of smoothed particle hydrodynamics (SPH) instead of the more recent state-of-the-art ANARCHY formulation of SPH that was used in the fiducial EAGLE runs. We find that the properties of most galaxies, including their masses and sizes, are not significantly affected by the details of the hydrodynamics solver. However, the star formation rates of the most massive objects are affected by the lack of phase mixing due to spurious surface tension in the simulation using standard SPH. This affects the efficiency with which AGN activity can quench star formation in these galaxies and it also leads to differences in the intragroup medium that affect the X-ray emission from these objects. The differences that can be attributed to the hydrodynamics solver are, however, likely to be less important at lower resolution. We also find that the use of a time step limiter is important for achieving the feedback efficiency required to match observations of the low-mass end of the galaxy stellar mass function.
Based on the relationship between thermodynamics and gravity, and with the aid of Verlinde's formalism, we propose an alternative interpretation of the dynamical evolution of the Friedmann-Robertson-Walker Universe, which takes into account the entropy and temperature intrinsic to the horizon of the universe, due to the information holographically stored there, through the non-Gaussian statistical theories proposed by Tsallis and Kaniadakis. We use the most recent data on type Ia supernovae, baryon acoustic oscillations, and the Hubble expansion rate function to constrain the free parameters of the $\Lambda$CDM and $w$CDM models modified by the non-Gaussian statistics. We evaluate the age problem and note that such modifications solve it at the 1$\sigma$ confidence level. We also analyze the effects on the linear growth of matter density perturbations.
The pseudoscalar resonance or "A-funnel" in the Minimal Supersymmetric Standard Model~(MSSM) is a widely studied framework for explaining dark matter that can yield interesting indirect detection and collider signals. The well-known Galactic Center excess (GCE) at GeV energies in the gamma-ray spectrum, consistent with annihilation of a $\lesssim 40$ GeV dark matter particle, has more recently been shown to be compatible with significantly heavier masses following reanalysis of the background. In this paper, we explore the LHC and direct detection implications of interpreting the GCE in this extended mass window within the MSSM A-funnel framework. We find that compatibility with the relic density, signal strength, collider constraints, and Higgs data can be simultaneously achieved with appropriate parameter choices. The compatible regions give very sharp predictions of 200-600 GeV CP-odd/even Higgs bosons at low tan$\beta$ at the LHC and spin-independent cross sections $\approx 10^{-11}$ pb at direct detection experiments. Regardless of consistency with the GCE, this study serves as a useful template of the strong correlations between the indirect, direct, and LHC signatures of the MSSM A-funnel region.
Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant $\Lambda$ is much smaller than the Planck density and in fact accumulates at $\Lambda=0$. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain $\Lambda$ that is small in Planck units in a toy model, but to explain why $\Lambda$ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.
Combining the covariant coalescence model with a blast-wave-like analytical parametrization of the (anti-)nucleon phase-space freezeout configuration, we explore light (anti-)nucleus production in central Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV. Using the nucleon freezeout configuration (denoted by FO1) determined from the measured spectra of protons (p), deuterons (d) and $^{3}$He, we find the predicted yield of $^{4}$He is significantly smaller than the experimental data. We show this disagreement can be removed by using a nucleon freezeout configuration (denoted by FO2) in which the nucleons are assumed to freeze out earlier than in FO1, to effectively account for the large binding energy of $^{4}$He. Assuming the binding energy effect also exists for the production of $^5\text{Li}$, $^5\overline{\text{Li}}$, $^6\text{Li}$ and $^6\overline{\text{Li}}$, given that their binding energies are similar to that of $^{4}$He, we find the yields of these heavier (anti-)nuclei can be enhanced by about an order of magnitude, implying that although the stable (anti-)$^6$Li nucleus is unlikely to be observed, the unstable (anti-)$^5$Li nucleus could be produced in observable abundance in Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV, where it may be identified through the p-$^4\text{He}$ ($\overline{\text{p}}$-$^4\overline{\text{He}}$) invariant mass spectrum. Future experimental measurements of (anti-)$^5\text{Li}$ would be very useful for understanding the production mechanism of heavier antimatter.
Inspired by the recent diboson excess observed at the LHC and possible interpretation within a TeV-scale Left-Right symmetric framework, we explore its implications for low-energy experiments searching for lepton number and flavor violation. Assuming a simple Type-II seesaw mechanism for neutrino masses, we show that for the right-handed (RH) gauge boson mass and coupling values required to explain the LHC anomalies, the RH contribution to the lepton number violating process of neutrinoless double beta decay ($0\nu\beta\beta$) is already constrained by current experiments for relatively low-mass (MeV-GeV) RH neutrinos. The future ton-scale $0\nu\beta\beta$ experiments could probe most of the remaining parameter space, irrespective of the neutrino mass hierarchy and uncertainties in the oscillation parameters and nuclear matrix elements. On the other hand, the RH contribution to the lepton flavor violating process of $\mu\to e\gamma$ is constrained for relatively heavier (TeV) RH neutrinos, thus providing a complementary probe of the model. Finally, a measurement of the absolute light neutrino mass scale from future precision cosmology could make this scenario completely testable.
The B-mode Foreground Experiment (BFORE) is a proposed NASA balloon project designed to make optimal use of the sub-orbital platform by concentrating on three dust foreground bands (270, 350, and 600 GHz) that complement ground-based cosmic microwave background (CMB) programs. BFORE will survey ~1/4 of the sky with 1.7 - 3.7 arcminute resolution, enabling precise characterization of the Galactic dust that now limits constraints on inflation from CMB B-mode polarization measurements. In addition, BFORE's combination of frequency coverage, large survey area, and angular resolution enables science far beyond the critical goal of measuring foregrounds. BFORE will constrain the velocities of thousands of galaxy clusters, provide a new window on the cosmic infrared background, and probe magnetic fields in the interstellar medium. We review the BFORE science case, timeline, and instrument design, which is based on a compact off-axis telescope coupled to >10,000 superconducting detectors.