We propose a unified framework that reconciles the stunning success of MOND on galactic scales with the triumph of the $\Lambda$CDM model on cosmological scales. This is achieved through the physics of superfluidity. Dark matter consists of self-interacting axion-like particles that thermalize and condense to form a superfluid in galaxies, with a critical temperature of order mK. The superfluid phonons mediate a MOND acceleration on baryonic matter. Our framework naturally distinguishes between galaxies (where MOND is successful) and galaxy clusters (where MOND is not): dark matter has a higher temperature in clusters, and hence is in a mixture of superfluid and normal phases. The rich and well-studied physics of superfluidity leads to a number of striking observational signatures.
We study spherical collapse in the Parametrized Post-Friedmannian (PPF) scheme. Using a general form of the PPF parameter related to the Poisson equation, we derive the equations to be solved, which include a non-trivial fifth force arising from the convolution of the modified-gravity term in $k$-space. To study a concrete model, we use the parametrization proposed by Bertschinger and Zukin. We solve the spherical-collapse equations assuming a Gaussian density profile and show that there is no shell crossing before the turnaround point. We show that the fifth force does not satisfy Birkhoff's theorem and introduces different behaviors for the density threshold $\delta_{c}$, which in this case depends on the size and shape of the initial density profile; one therefore expects different statistics for the collapsed objects in the universe.
How do peculiar velocities affect observed voids? To answer this question we use the VIDE toolkit to identify voids in mock galaxy populations embedded within an N-body simulation, both with and without peculiar velocities included. We compare the resulting void populations to assess the impact on void properties. We find that void abundances and spherically averaged radial density profiles are mildly affected by peculiar velocities. However, peculiar velocities can distort the shapes of a particular subset of voids by up to 10%, depending on the void size and density contrast, which can lead to increased variance in the Alcock-Paczy\'nski test. We offer guidelines for performing optimal cuts on the void catalogue to reduce this variance by removing the most severely affected voids while preserving the unaffected ones. In addition, since this shape distortion is largely limited to the line of sight, we show that the void radii are only affected at the $\sim$ 10% level and the macrocenter positions at the $\sim$ 20% level (even before performing cuts), meaning that cosmological probes based on the Integrated Sachs-Wolfe effect and gravitational lensing are not severely impacted by peculiar velocities.
We argue that the lack of power exhibited by cosmic microwave background (CMB) anisotropies at large angular scales might be linked to the onset of inflation. We highlight observational features and theoretical hints that support this view, and present a preliminary estimate of the physical scale that would underlie the phenomenon.
In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue, contrary to recent claims, that it is not clear one can either dispense with notions of typicality altogether or simply presume typicality when comparing the resulting probability distributions with observations. We show, in a concrete top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate into errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.
We present a detailed analysis of an astrophysical mechanism that generates cosmological magnetic fields during the Epoch of Reionization. It is based on the photoionization of the Intergalactic Medium by the first sources formed in the Universe. First the induction equation is derived, then the characteristic length and time scales of the mechanism are identified, and finally numerical applications are carried out for first stars, primordial galaxies and distant powerful quasars. In these simple examples, the strength of the generated magnetic fields ranges from the order of $10^{-23}$ G on scales of hundreds of kiloparsecs to $10^{-19}$ G on scales of hundreds of parsecs in the neutral Intergalactic Medium between the Str\"omgren spheres of the sources. Thus this mechanism contributes to the premagnetization of the whole Universe before large-scale structures are in place. It operates with any ionizing source, at any time during the Epoch of Reionization. Finally, the generated fields possess a characteristic spatial configuration which may help discriminate these seeds from those produced by other mechanisms.
By examining the locations of the central black holes in two elliptical galaxies, M32 and M87, we derive constraints on the violation of the strong equivalence principle for purely gravitational objects, i.e. black holes, of less than eight percent ($|\eta_N|<0.08$) from M32. The constraints from M87 are substantially weaker but could improve dramatically with better astrometry.
We have observed two massive early-type galaxies with Keck/LRIS and measured radial gradients in the strengths of stellar absorption features from 4000-5500 \AA$\,$ and 8000-10,000 \AA. We present spatially resolved measurements of the dwarf-sensitive spectral indices NaI (8190 \AA) and Wing-Ford FeH (9915 \AA), as well as indices for species of H, C$_2$, CN, Mg, Ca, TiO, and Fe. Our measurements show a metallicity gradient in both objects, and Mg/Fe consistent with uniform $\alpha$-enhancement, matching widely observed trends for massive early-type galaxies. The NaI index and the CN$_1$ index at 4160 \AA$\,$ exhibit significantly steeper gradients, with a break at $r \sim 0.1 r_{\rm eff}$ ($r \sim 300$ pc). Inside this radius NaI and CN$_1$ increase sharply toward the galaxy center, relative to other indices. We interpret this trend as a rapid central rise in [Na/Fe] and [N/Fe]. In contrast, the FeH index exhibits a marginal decrease toward the galaxy center, relative to Fe. Our investigation is among the first to track FeH as a function of radius, and to demonstrate discrepant behavior between NaI and FeH. We suggest that a shallow gradient in FeH and steep, broken NaI profile reflect unique abundance patterns rather than a gradient in the stellar initial mass function.
The loop quantization of the Schwarzschild interior region, as described by a homogeneous anisotropic Kantowski-Sachs model, is re-examined. As several studies of different (inequivalent) loop quantizations have shown, to date there exists no fully satisfactory quantum theory for this model. This fact poses challenges to the validity of some scenarios for addressing the black hole information problem. Here we put forward a novel viewpoint to construct the quantum theory that builds on some of the models available in the literature. The final picture is a quantum theory that is both independent of any auxiliary structure and possesses a correct low-curvature limit. It represents a subtle but non-trivial modification of the original prescription given by Ashtekar and Bojowald. It is shown that the quantum gravitational constraint is well defined past the singularity and that its effective dynamics possesses a bounce into an expanding regime. The classical singularity is avoided, and a semiclassical spacetime satisfying the vacuum Einstein equations is recovered on the "other side" of the bounce. We argue that this metric represents the interior region of a white-hole spacetime, but one for which the corresponding "white-hole mass" differs from the original black hole mass. Furthermore, we find that the value of the white-hole mass is proportional to the third power of the starting black hole mass. We discuss possible implications of this phenomenon.
We derive novel limits on the masses of the light and heavy Majorana neutrinos by requiring successful leptogenesis in seesaw models of minimal flavour violation (MFV). Taking properly into account radiative flavour effects and avoiding the limitations due to a no-go theorem on leptonic asymmetries, we find that the mass of the lightest of the observable neutrinos must be smaller than $\sim 0.05$ eV, whilst the Majorana scale of lepton number violation should be higher than $\sim 10^{12}$ GeV. The latter lower bound enables one to probe the existence of possible new scales of MFV, up to energies of $\sim 100$ TeV, in low-energy experiments, such as $\mu \to e\gamma$ and $\mu \to e$ conversion in nuclei. Possible realizations of MFV leptogenesis in Grand Unified Theories are briefly discussed.
Aims: We explore the cosmological implications of two types of baryon acoustic oscillation (BAO) data, extracted using the spherically averaged one-dimensional galaxy clustering (GC) statistics (hereafter BAO1) and the anisotropic two-dimensional GC statistics (hereafter BAO2), respectively.
Methods: First, making use of the BAO1 and BAO2 data, as well as the SNLS3 type Ia supernova sample and the Planck distance priors, we constrain the parameter spaces of the $\Lambda$CDM, $w$CDM, and Chevallier-Polarski-Linder (CPL) models. We then discuss the impacts of the different BAO data on parameter estimation, the equation of state $w$, the figure of merit and the deceleration-acceleration transition redshift. Finally, we use various dark energy diagnostics, including the Hubble diagram $H(z)$, the deceleration diagram $q(z)$, the statefinder hierarchy $\{S^{(1)}_3, S^{(1)}_4\}$, and the composite null diagnostics (CND) $\{S^{(1)}_3, \epsilon(z)\}$ and $\{S^{(1)}_4, \epsilon(z)\}$, to distinguish the differences between the results given by the different BAO data.
Results: We find that, for all the models, the BAO2 data always give a smaller fractional matter density $\Omega_{m0}$, a larger fractional curvature density $\Omega_{k0}$, and a larger Hubble constant $h$; for the $w$CDM and CPL models, the BAO2 data always give a slightly smaller $w$. In addition, the BAO1 data always yield a cosmological result closer to the $\Lambda$CDM model, while the BAO2 data give a cosmological constraint with slightly better accuracy. Moreover, we find that the $H(z)$ and $q(z)$ diagrams have difficulty distinguishing the differences between the different BAO data; in contrast, the statefinder hierarchy $\{S^{(1)}_3, S^{(1)}_4\}$ and the CNDs $\{S^{(1)}_3, \epsilon(z)\}$ and $\{S^{(1)}_4, \epsilon(z)\}$ are powerful tools able to distinguish the impacts of the different BAO data.
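As a minimal illustration of the deceleration diagram $q(z)$ and the deceleration-acceleration transition redshift mentioned above, here is a hedged Python sketch for flat $\Lambda$CDM; it is not code from the paper, and the fiducial $\Omega_{m0}=0.3$ is purely an illustrative assumption.

```python
# Sketch (not the paper's code): deceleration diagram q(z) for flat LCDM.
# Fiducial Omega_m0 = 0.3 is an illustrative assumption.

def E2(z, om0=0.3):
    """Dimensionless Hubble rate squared, E^2(z) = H^2/H0^2, flat LCDM."""
    return om0 * (1 + z)**3 + (1 - om0)

def q(z, om0=0.3):
    """Deceleration parameter q(z) = -1 - Hdot/H^2 for flat LCDM."""
    return (0.5 * om0 * (1 + z)**3 - (1 - om0)) / E2(z, om0)

def transition_redshift(om0=0.3):
    """Analytic deceleration-acceleration transition, q(z_t) = 0."""
    return (2 * (1 - om0) / om0)**(1 / 3) - 1
```

For $\Omega_{m0}=0.3$ this gives $z_t \approx 0.67$; shifts of exactly this kind of derived quantity are what distinguish the BAO1 and BAO2 constraints.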
We explore the generation of large-scale magnetic fields in so-called moduli inflation. The hypercharge electromagnetic fields couple not only to a scalar field but also to a pseudoscalar one, so that the conformal invariance of the hypercharge electromagnetic fields can be broken. We explicitly analyze the strength of the magnetic fields on the Hubble horizon scale at the present time, the local non-Gaussianity of the curvature perturbations originating from the massive gauge fields, and the tensor-to-scalar ratio of the density perturbations. As a consequence, we find that the local non-Gaussianity and the tensor-to-scalar ratio are compatible with the recent Planck results.
As one of the probes of the universe, strong gravitational lensing systems allow us to compare different cosmological models and constrain vital cosmological parameters. This can be done using the dynamical and geometric properties of strong lensing systems, for instance the time delay $\Delta\tau$ between images, the velocity dispersion $\sigma$ of the lensing galaxies, and the combination of these two effects, $\Delta\tau/\sigma^2$. In this paper, in order to carry out one-on-one comparisons between the $\Lambda$CDM universe and the $R_h=ct$ universe, we use a sample containing 36 strong lensing systems with measured velocity dispersions from the SLACS and LSD surveys. Concerning the time-delay effect, 12 two-image lensing systems with measured $\Delta\tau$ are also used. In addition, Monte Carlo (MC) simulations are used to compare the efficiency of the three methods mentioned above. From the simulations, we estimate the number of lenses required to rule out one model at the $99.7\%$ confidence level. Compared with constraints from $\Delta\tau$ and the velocity dispersion $\sigma$ alone, we find that using $\Delta\tau/\sigma^2$ improves the discrimination between cosmological models. Although independence tests of these methods reveal a correlation between $\Delta\tau/\sigma^2$ and $\sigma$, $\Delta\tau/\sigma^2$ can be considered an improvement over $\sigma$ alone if more data become available.
A very general cosmological consideration suggests that, along with galactic dark matter halos, much smaller dark matter structures may exist. These structures are usually called 'clumps', and their masses extend down to $10^{-6} M_\odot$ or even lower. The clumps should provide the main contribution to the dark matter annihilation signal, provided that they have survived until the present time. Recent observations favor a cored profile for low-mass astrophysical halos. We consider cored clumps and show that they are significantly less robust than the standard NFW ones. In contrast to the standard scenario, the cored clumps should have been completely destroyed inside $\sim 20$ kpc from the Milky Way center. The dwarf spheroidals should not contain any dark matter clumps either. On the other hand, even under the most pessimistic assumption about the clump structure, the clumps should have survived in the Milky Way at distances exceeding $50$ kpc from the center, as well as in low-density cosmic structures. There they significantly boost the dark matter annihilation signal.
We present a model of spontaneous (or dynamical) C and CP violation in which it is possible to generate domains of matter and antimatter separated by cosmologically large distances. This C(CP) violation existed only in the early universe and later disappeared, leaving as its only trace the generated baryonic and/or antibaryonic domains. Hence the domain-wall problem does not arise in this model. These features are achieved through a postulated form of interaction between the inflaton and a new scalar field, realizing short-lived C(CP) violation.
Intrinsic alignment (IA) of source galaxies is one of the major astrophysical systematics for ongoing and future weak lensing surveys. This paper presents the first forecasts of the impact of IA on cosmic shear measurements for current and future surveys (DES, Euclid, LSST, WFIRST) using simulated likelihood analyses and realistic covariances that include higher-order moments of the density field in the computation. We consider a range of possible IA scenarios and test mitigation schemes, which parameterize IA by the fraction of red galaxies, normalization, luminosity and redshift dependence of the IA signal (for a subset we consider joint IA and photo-z uncertainties). Compared to previous studies we find smaller biases in time-dependent dark energy models if IA is ignored in the analysis; the amplitude and significance of these biases vary as a function of survey properties (depth, statistical uncertainties), luminosity function, and IA scenario: Due to its small statistical errors and relatively shallow observing strategy Euclid is significantly impacted by IA. LSST and WFIRST benefit from their increased survey depth, while the larger statistical errors for DES decrease IA's relative impact on cosmological parameters. The proposed IA mitigation scheme removes parameter biases due to IA for DES, LSST, and WFIRST even if the shape of the IA power spectrum is only poorly known; successful IA mitigation for Euclid requires more prior information. We explore several alternative IA mitigation strategies for Euclid; in the absence of alignment of blue galaxies we recommend the exclusion of red (IA contaminated) galaxies in cosmic shear analyses. We find that even a reduction of 20% in the number density of galaxies only leads to a 4-10% loss in cosmological constraining power.
The Bipolar Spherical Harmonics (BipoSH) form a natural basis to study the CMB two point correlation function in a non-statistically isotropic (non-SI) universe. The coefficients of expansion in this basis are a generalisation of the well known CMB angular power spectrum and contain complete information of the statistical properties of a non-SI but Gaussian random CMB sky. We use these coefficients to describe the weak lensing of CMB photons in a non-SI universe. Finally we show that the results reduce to the standard weak lensing results in the isotropic limit.
In this paper we generalize the kinetic mixing idea to time-reparametrization-invariant theories, namely relativistic point particles and cosmology, in order to obtain new insights into dark matter and dark energy. In the first example, two relativistic particles interact through an appropriately chosen coupling term. It is shown that the system can be diagonalized by means of a non-local field redefinition and that, as a result of this procedure, the mass of one of the particles gets rescaled. In the second case, inspired by the previous example, two cosmological models (each with its own scale factor) are made to interact in a similar fashion. The equations of motion are solved numerically in different scenarios (dust, radiation or a cosmological constant coupled to each sector of the system). When a cosmological constant term is present, kinetic mixing rescales it to a lower value, which may be more amenable to observations.
Recent measurements of PeV energy neutrinos at IceCube and a 3.5 keV X-ray line in the spectra of several galaxies are both tantalizing signatures of new physics. This paper shows that one or both of these observations can be explained within an extended supersymmetric neutrino sector. Obtaining light active neutrino masses as well as phenomenologically interesting (keV-GeV) sterile neutrino masses without any unnaturally small parameters hints at a new symmetry in the neutrino sector that is broken at the PeV scale, presumably tied to supersymmetry breaking. The same symmetry and structure can sufficiently stabilize an additional PeV particle, produce its abundance through the freeze-in mechanism, and lead to decays that can give the energetic neutrinos observed by IceCube. The lightest sterile neutrino, if at 7 keV, is a non-resonantly produced fraction of dark matter, and can account for the 3.5 keV X-ray line. The two signals could therefore be the first probes of an extended supersymmetric neutrino sector.
We describe the design, operation, and first results of a photometric calibration project, called DICE (Direct Illumination Calibration Experiment), aiming at achieving precise instrumental calibration of optical telescopes. The heart of DICE is an illumination device composed of 24 narrow-spectrum, high-intensity, light-emitting diodes (LED) chosen to cover the ultraviolet-to-near-infrared spectral range. It implements a point-like source placed at a finite distance from the telescope entrance pupil, yielding a flat-field illumination that covers the entire field of view of the imager. The purpose of this system is to perform a lightweight routine monitoring of the imager passbands with a precision better than 5 per-mil on the relative passband normalisations and about 3{\AA} on the filter cutoff positions. The light source is calibrated on a spectrophotometric bench. As our fundamental metrology standard, we use a photodiode calibrated at NIST. The radiant intensity of each beam is mapped, and spectra are measured for each LED. All measurements are conducted at temperatures ranging from 0{\deg}C to 25{\deg}C in order to study the temperature dependence of the system. The photometric and spectroscopic measurements are combined into a model that predicts the spectral intensity of the source as a function of temperature. We find that the calibration beams are stable at the $10^{-4}$ level, after taking the slight temperature dependence of the LED emission properties into account. We show that the spectral intensity of the source can be characterised with a precision of 3{\AA} in wavelength. In flux, we reach an accuracy of about 0.2-0.5% depending on how we understand the off-diagonal terms of the error budget affecting the calibration of the NIST photodiode. With a routine 60-min calibration program, the apparatus is able to constrain the passbands at the targeted precision levels.
We study the sensitivity of multi-ton-scale time projection chambers using a liquid xenon target, e.g., the proposed DARWIN instrument, to spin-independent and spin-dependent WIMP-nucleon scattering interactions. Taking into account realistic backgrounds from the detector itself as well as from neutrinos, we examine the impact of exposure, energy threshold, background rejection efficiency and energy resolution on the dark matter sensitivity. With an exposure of 200 t$\times$yr and assuming detector parameters which have already been demonstrated experimentally, spin-independent cross sections as low as $2.5 \times 10^{-49}$ cm$^2$ can be probed for WIMP masses around 40 GeV/$c^2$. Additional improvements in terms of background rejection and exposure will further increase the sensitivity, while the ultimate WIMP science reach will be limited by neutrinos scattering coherently off the xenon nuclei.
The long-standing problem of the asymmetry between matter and antimatter in the Universe is analysed in the context of modified theories of gravity. In particular we study two models of $f(R)$ gravity that, with an appropriate choice of the free parameters, introduce a small perturbation to the scale factor of the Universe predicted by general relativity (GR) in the radiation-dominated (RD) phase, i.e., $a(t)\sim t^{1/2}$. This small perturbation generates a Ricci scalar different from zero, i.e., $R\neq 0$, which reproduces the correct magnitude for the asymmetry factor $\eta$ computed in the framework of gravitational baryogenesis and gravitational leptogenesis. The appropriate choice of the free parameters is discussed in order to obtain results consistent with experimental data. Furthermore, we obtain the form of the potential $V$ for the scalar-tensor theory conformally equivalent to the $f(R)$ theory that reproduces the right asymmetry factor.
We investigate a calculation method for solving the Mukhanov-Sasaki equation in slow-roll $k$-inflation, based on the uniform approximation in conjunction with an expansion scheme for the slow-roll parameters with respect to the number of $e$-folds about the so-called turning point. Earlier work on this method has obtained sensible results for the power spectra, among other quantities, up to second order with respect to the Hubble and sound flow parameters, when compared to other semi-analytical approaches (e.g., Green's function and WKB methods). However, a closer inspection suggests that this may not hold when higher-order parts of the power spectra are considered; residual logarithmic divergences may appear that would make the predictions problematic. With this possibility in mind, we map out up to what order in the mentioned parameters several physical quantities can be calculated before hitting a logarithmically divergent result. It turns out that the power spectra are limited to second order, the tensor-to-scalar ratio to third order, while the spectral indices and runnings can be calculated to any order.
We show that observation of the time-dependent effect of microlensing of relativistically broadened emission lines (such as, e.g., the Fe Kalpha line in X-rays) in strongly lensed quasars could provide data on the celestial mechanics of circular orbits in the direct vicinity of the horizon of supermassive black holes. This information can be extracted from observation of the evolution of the red / blue edge of the magnified line just before and just after the period of crossing of the innermost stable circular orbit by the microlensing caustic. The functional form of this evolution is insensitive to numerous astrophysical parameters of the accreting black hole and of the microlensing caustics network system (as opposed to the evolution of the full line spectrum). Measurement of the temporal evolution of the red / blue edge could provide a precision measurement of the radial dependence of the gravitational redshift and of the velocity of the circular orbits, down to the innermost stable circular orbit. These measurements could be used to discriminate between General Relativity and alternative models of relativistic gravity in which the dynamics of photons and massive bodies orbiting the gravitating centre is different from that of the geodesics in the Schwarzschild or Kerr space-times.
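For reference, the General Relativity baseline against which such a test would discriminate is fixed by circular geodesics of the Schwarzschild metric. A minimal, hedged sketch in geometric units ($G=c=1$; standard textbook formulas, not code from the paper):

```python
import math

# Sketch (geometric units G = c = 1): circular Schwarzschild geodesics.
# Standard GR textbook results, included purely for illustration.

def orbital_velocity(r, M=1.0):
    """Locally measured circular-orbit velocity, v = sqrt(M/r) / sqrt(1 - 2M/r)."""
    return math.sqrt(M / r) / math.sqrt(1 - 2 * M / r)

def redshift_factor(r, M=1.0):
    """Photon redshift factor g = nu_obs / nu_em for a face-on observer,
    combining gravitational redshift and transverse Doppler: g = sqrt(1 - 3M/r)."""
    return math.sqrt(1 - 3 * M / r)

R_ISCO = 6.0  # innermost stable circular orbit, r = 6M (Schwarzschild)
```

At the ISCO these give $v = 0.5c$ and $g = 1/\sqrt{2} \approx 0.71$; it is the radial run of such quantities that the red / blue edge evolution would measure.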
In this work, we investigate whether gravitational microlensing can magnify the polarization signal of a stellar spot and make it observable. A stellar spot on the source star of a microlensing event produces a polarization signal through two channels: the Zeeman effect, and the breaking of the circular symmetry of the source surface brightness due to the spot's temperature contrast. We first explore the characteristics of the perturbations in polarimetric microlensing during the caustic crossing of a binary lens, as follows: (a) cooler spots on Galactic bulge sources make smaller contributions to the total flux, although they have stronger magnetic fields; (b) the maximum deviation in the polarimetry curve due to the spot occurs when the spot is located near the source edge and is first entering the caustic, whereas the maximum photometric deviation occurs for spots located at the source center; (c) there is a (partial) degeneracy in inferring the spot's size, its temperature contrast and its magnetic induction from the deviations in the light or polarimetry curves; (d) if the time at which the photometric deviation due to the spot vanishes (between the positive and negative deviations) can be inferred from microlensing light curves, we can determine the magnification factor of the spot, characterizing the spot properties except for its temperature contrast. Stellar spots alter the polarization degree and strongly change its orientation, which gives some information about the spot position. Although photometric observations are more efficient at detecting stellar spots than polarimetric ones, polarimetric observations can constrain the magnetic field of the source spots.
We present a brief overview of a new generation of high-precision laboratory and astrophysical measurements to search for ultralight (sub-eV) axion, axion-like pseudoscalar and scalar dark matter, which form either a coherent condensate or topological defects (solitons). In these new detection methods, the sought effects are linear in the interaction constant between dark matter and ordinary matter, which is in stark contrast to traditional searches for dark matter, where the sought effects are quadratic or higher order in the underlying interaction constants (which are extremely small).
We present an analysis of the effects of beam deconvolution on noise properties in CMB measurements. The analysis is built around the artDeco beam deconvolver code. We derive a low-resolution noise covariance matrix that describes the residual noise in deconvolution products, both in harmonic and pixel space. The matrix models the residual correlated noise that remains in time-ordered data after destriping, and the effect of deconvolution on it. To validate the results, we generate noise simulations that mimic the data from the Planck LFI instrument. A $\chi^2$ test for the full 70 GHz covariance in the multipole range $\ell=0-50$ yields a mean reduced $\chi^2$ of 1.0037. We compare two destriping options, full and independent destriping, when deconvolving subsets of available data. Full destriping leaves substantially less residual noise, but leaves the data sets intercorrelated. We also derive a white noise covariance matrix that provides an approximation of the full noise at high multipoles, and study the properties of high-resolution noise in pixel space through simulations.
The disformal transformation of metric $g_{\mu \nu} \to \Omega^2 (\phi)g_{\mu \nu}+\Gamma(\phi,X) \partial_{\mu}\phi \partial_{\nu}\phi$, where $\phi$ is a scalar field with the kinetic energy $X= \partial_{\mu}\phi \partial^{\mu}\phi/2$, preserves the Lagrangian structure of Gleyzes-Langlois-Piazza-Vernizzi (GLPV) theories (which is the minimum extension of Horndeski theories). In the presence of matter, this transformation gives rise to a kinetic-type coupling between the scalar field $\phi$ and matter. We consider the Einstein frame in which the second-order action of tensor perturbations on the isotropic cosmological background is of the same form as that in General Relativity and study the role of couplings at the levels of both background and linear perturbations. We show that the effective gravitational potential felt by matter perturbations in the Einstein frame can be conveniently expressed in terms of the sum of a General Relativistic contribution and couplings induced by the modification of gravity. For the theories in which the transformed action belongs to a class of Horndeski theories, there is no anisotropic stress between two gravitational potentials in the Einstein frame due to a gravitational de-mixing. We propose a concrete dark energy model encompassing Brans-Dicke theories as well as theories with the tensor propagation speed $c_{\rm t}$ different from 1. We clarify the correspondence between physical quantities in the Jordan/Einstein frames and study the evolution of gravitational potentials and matter perturbations from the matter-dominated epoch to today in both analytic and numerical approaches.
We show that very general scalar-tensor theories of gravity (including, e.g., Horndeski models) are generically invariant under disformal transformations. However, there is a special subset, arising when the transformation is not invertible, that yields new equations of motion which generalize the so-called "mimetic" dark matter theory recently introduced by Chamseddine and Mukhanov. These new equations of motion can also be derived from an action containing an additional Lagrange multiplier field. The general mimetic scalar-tensor theory has the same number of derivatives in the equations of motion as the original scalar-tensor theory. As an application, we show that the simplest mimetic scalar-tensor model is able to mimic the cosmological background of a flat FLRW model with an irrotational barotropic perfect fluid with any constant equation of state.
We solve the field equations of modified gravity for an $f(R)$ model in the metric formalism. We then obtain the fixed points of the dynamical system in a phase-space analysis of $f(R)$ models, both with and without the effects of radiation. The stability of these points is studied by perturbing about them and applying the stability conditions to the eigenvalues of the matrix obtained from the linearized first-order differential equations. These fixed points are then used to describe the dynamics of the different phases of the universe. Certain linear and quadratic forms of $f(R)$ are determined from geometrical and physical considerations, and the dynamics of the scale factor is found for those forms. Further, we determine the Hubble parameter $H(t)$ and Ricci scalar $R$ for the radiation-, matter- and acceleration-dominated phases of the universe, whose time ordering may explain an arrow of time throughout the cosmic evolution.
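The linearized stability analysis described above (reading stability off the eigenvalues of the Jacobian at each fixed point) can be illustrated generically. The sketch below uses an assumed toy system for demonstration, not the paper's $f(R)$ equations:

```python
import numpy as np

# Generic sketch of fixed-point stability for an autonomous system x' = f(x):
# a fixed point is linearly stable when all Jacobian eigenvalues have
# negative real part. The test systems are illustrative toys.

def jacobian(f, x0, eps=1e-6):
    """Numerical Jacobian of f at x0 via central differences."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return J

def is_stable(f, x0):
    """True if every eigenvalue of the Jacobian at x0 has Re(lambda) < 0."""
    return np.all(np.linalg.eigvals(jacobian(f, x0)).real < 0)
```

For example, the origin of $\dot{x} = -x$, $\dot{y} = -2y$ is stable, while a saddle such as $\dot{x} = x$, $\dot{y} = -y$ is not; the same criterion applied to the linearized $f(R)$ phase-space equations selects the attractor phases of the cosmic evolution.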
This article deals with the study of the Bianchi type-I universe in the context of f(R,T) gravity. Einstein's field equations in f(R,T) gravity are solved in the presence of a cosmological constant $\Lambda$ and a quadratic equation of state. We discuss two classes of f(R,T) gravity, i.e. f(R,T) = R + 2f(T) and f(R,T) = f_1(R) + f_2(T). A set of models based on a plausible relation is taken into consideration, and we study some physical and kinematical properties of the models.
Links to: arXiv, form interface, find, astro-ph, recent, 1506, contact, help (Access key information)
In this work we analyse the properties of cosmic voids in standard and coupled dark energy cosmologies. Using large numerical simulations, we investigate the effects produced by the dark energy coupling on three statistics: the filling factor, the size distribution and the stacked profiles of cosmic voids. We find that the bias of the tracers of the density field used to identify the voids strongly influences the properties of the void catalogues, and, consequently, the possibility of using the identified voids as a probe to distinguish coupled dark energy models from the standard $\Lambda $CDM cosmology. In fact, on one hand coupled dark energy models are characterised by an excess of large voids in the cold dark matter distribution as compared to the reference standard cosmology, due to their higher normalisation of linear perturbations at low redshifts. Specifically, these models present an excess of large voids with $R_{\rm eff} > 20$, $15$, $12 \, h^{-1}$ Mpc at $z = 0$, $0.55$, $1$, respectively. On the other hand, we do not find any significant difference in the properties of the voids detected in the distribution of collapsed dark matter halos. These results imply that the tracer bias has a significant impact on the possibility of using cosmic void catalogues to probe cosmology.
We explore the structures of protoclusters and their relationship with high redshift clusters using the Millennium Simulation combined with a semi-analytic model. We find that protoclusters are very extended, with 90 per cent of their mass spread across $\sim35\,h^{-1}{\rm Mpc}$ comoving at $z=2$ ($\sim30\, \rm{arcmin}$). The `main halo', which can manifest as a high redshift cluster or group, is only a minor feature of the protocluster, containing less than 20 per cent of all protocluster galaxies at $z=2$. Furthermore, many protoclusters do not contain a main halo that is massive enough to be identified as a high redshift cluster. Protoclusters exist in a range of evolutionary states at high redshift, independent of the mass they will evolve to at $z=0$. We show that the evolutionary state of a protocluster can be approximated by the mass ratio of the first and second most massive haloes within the protocluster, and the $z=0$ mass of a protocluster can be estimated to within 0.2 dex accuracy if both the mass of the main halo and the evolutionary state are known. We also investigate the biases introduced by only observing star-forming protocluster members within small fields. The star formation rate required for line-emitting galaxies to be detected is typically high, which leads to the artificial loss of low mass galaxies from the protocluster sample. This effect is stronger for observations of the centre of the protocluster, where the quenched galaxy fraction is higher. This loss of low mass galaxies, relative to the field, distorts the size of the galaxy overdensity, which in turn can contribute to errors in predicting the $z=0$ evolved mass.
We present a study of the cosmological Ly$\alpha$ emission signal at $z > 4$. Our goal is to predict the power spectrum of the spatial fluctuations that could be observed by an intensity mapping survey. The model uses the latest data from the HST legacy fields and the abundance matching technique to associate UV emission and dust properties with the halos, computing the emission from the interstellar medium (ISM) of galaxies and the intergalactic medium (IGM), including the effects of reionization, self-consistently. The Ly$\alpha$ intensity from the diffuse IGM emission is 1.3 (2.0) times more intense than the ISM emission at $z = 4(7)$; both components are fair tracers of the star-forming galaxy distribution. However the power spectrum is dominated by ISM emission on small scales ($k > 0.01\, h\,{\rm Mpc}^{-1}$) with shot noise being significant only above $k = 1\, h\,{\rm Mpc}^{-1}$. At very large scales ($k < 0.01\, h\,{\rm Mpc}^{-1}$) diffuse IGM emission becomes important. The comoving Ly$\alpha$ luminosity density from IGM and galaxies, $\dot \rho_{{\rm Ly}\alpha}^{\rm IGM} = 8.73(6.51) \times 10^{40} {\rm erg}\,{\rm s}^{-1}{\rm Mpc}^{-3}$ and $\dot \rho_{{\rm Ly}\alpha}^{\rm ISM} = 6.62(3.21) \times 10^{40} {\rm erg}\,{\rm s}^{-1}{\rm Mpc}^{-3}$ at $z = 4(7)$, is consistent with recent SDSS determinations. We predict a power $k^3 P^{{\rm Ly}\alpha}(k, z)/2\pi^2 = 9.76\times 10^{-4}(2.09\times 10^{-5})\,{\rm nW}^2{\rm m}^{-4}{\rm sr}^{-2}$ at $z = 4(7)$ for $k = 0.1\, h\, {\rm Mpc}^{-1}$.
Big bang nucleosynthesis in a modified gravity model of $f(R)\propto R^n$ is investigated. The only free parameter of the model is a power-law index $n$. We find cosmological solutions in a parameter region of $1< n \leq (4+\sqrt{6})/5$. We calculate abundances of $^4$He, D, $^3$He, $^7$Li, and $^6$Li during big bang nucleosynthesis. We compare the results with the latest observational data. It is then found that the power-law index is constrained to be $(n-1)=(-0.86\pm 1.19)\times 10^{-4}$ (95 % C.L.) mainly from observations of deuterium abundance as well as $^4$He abundance.
We present an absorption-line survey of optically thick gas clouds -- Lyman Limit Systems (LLSs) -- observed at high dispersion with spectrometers on the Keck and Magellan telescopes. We measure column densities of neutral hydrogen NHI and associated metal-line transitions for 157 LLSs at z=1.76-4.39, restricted to 10^17.3 < NHI < 10^20.3. An empirical analysis of ionic ratios indicates an increasing ionization state of the gas with decreasing NHI and that the majority of LLSs are highly ionized, confirming previous expectations. The Si^+/H^0 ratio spans nearly four orders of magnitude, implying a large dispersion in the gas metallicity. Fewer than 5% of these LLSs have no positive detection of a metal transition; by z~3, nearly all gas that is dense enough to exhibit a very high Lyman limit opacity has previously been polluted by heavy elements. We add new measurements to the small subset of LLSs (~5-10) that may have super-solar abundances. High Si^+/Fe^+ ratios suggest an alpha-enhanced medium, whereas the Si^+/C^+ ratios do not exhibit the super-solar enhancement inferred previously for the Lya forest.
We investigate the consequences of general curved trajectories in multi-field inflation. After setting up a completely general formalism using the mass basis, which naturally accommodates the notion of light and heavy modes, we study in detail the simple case of two successive turns in a two-field system. We find that the power spectrum of the curvature perturbation receives corrections exhibiting oscillatory features, sinusoidal in the logarithm of the comoving wavenumber and without slow-roll suppression. We show that this is due to the resonance of the heavy modes inside and outside the mass horizon.
A method is presented for Bayesian model selection without explicitly computing evidences, by using a combined likelihood and introducing an integer model selection parameter $n$ so that Bayes factors, or more generally posterior odds ratios, may be read off directly from the posterior of $n$. If the total number of models under consideration is specified a priori, the full joint parameter space $(\theta, n)$ of the models is of fixed dimensionality and can be explored using standard MCMC or nested sampling methods, without the need for reversible jump MCMC techniques. The posterior on $n$ is then obtained by straightforward marginalisation. We demonstrate the efficacy of our approach by application to several toy models. We then apply it to constraining the dark energy equation-of-state using a free-form reconstruction technique. We show that $\Lambda$CDM is significantly favoured over all extensions, including the simple $w(z){=}{\rm constant}$ model.
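The idea of reading posterior odds directly off the posterior of an integer model index can be sketched with a toy Metropolis-Hastings run. Everything below (the data, the two nested toy models, the Gaussian priors, the proposal widths) is invented for illustration and is not the paper's setup; note that parameters unused by the current model are still sampled under their priors, which is what makes the marginal posterior on $n$ come out right.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data drawn from the constant model y = 1 (hypothetical example).
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1
y = 1.0 + rng.normal(0.0, sigma, x.size)

def log_post(theta, n):
    """Combined posterior: the integer n selects which model the likelihood uses.
    Unused parameters are still sampled under their prior, so marginalising
    over theta leaves the correct posterior odds on n."""
    lp = -0.5 * np.sum(theta**2)                           # N(0,1) priors on (a, b)
    mu = theta[0] if n == 0 else theta[0] + theta[1] * x   # n=0: constant, n=1: linear
    return lp - 0.5 * np.sum((y - mu)**2) / sigma**2

# Standard Metropolis-Hastings over the joint space (theta, n).
theta, n = np.array([1.0, 0.0]), 0
lp = log_post(theta, n)
chain_n = []
for _ in range(30000):
    theta_prop = theta + rng.normal(0.0, 0.05, 2)
    n_prop = int(rng.integers(0, 2))                       # propose a model index uniformly
    lp_prop = log_post(theta_prop, n_prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, n, lp = theta_prop, n_prop, lp_prop
    chain_n.append(n)

chain_n = np.array(chain_n[5000:])                         # discard burn-in
odds = (chain_n == 0).mean() / max((chain_n == 1).mean(), 1e-4)
print(f"posterior odds P(n=0)/P(n=1) ~= {odds:.1f}")       # simpler model favoured
```

Since the data come from the constant model, the Occam penalty carried by the extra parameter should make the odds come out well above unity, mirroring the paper's preference for the simpler $\Lambda$CDM-like option.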
SNIa and CMB datasets have shown evidence both for an evolving Newton's "constant" and for a coupling of a scalar field to matter. These observations motivate the consideration of scalar-matter coupling in the Jordan frame within scalar-tensor gravity. So far, the majority of work on scalar-matter coupling has been performed in the Einstein frame in the framework of minimally coupled scalar fields. In this work, we generalize the original scalar-tensor theories of gravity by introducing a direct coupling of the scalar to matter in the Jordan frame. The combined consideration of both the evolving Newton's constant and the scalar-matter coupling, using recent observational datasets, shows features different from previous works. The analysis shows a vivid signature of the scalar-matter coupling, and the variation rate of Newton's constant is found to be rather greater than that determined in previous works.
The simplest standard ray tracing scheme employing the Born and Limber approximations and neglecting lens-lens coupling is used for computing the convergence along individual rays in mock N-body data based on Szekeres swiss cheese and onion models. The results are compared with the exact convergence computed using the exact Szekeres metric combined with the Sachs formalism. A comparison is also made with an extension of the simple ray tracing scheme which includes the Doppler convergence. The exact convergence is reproduced very precisely as the sum of the gravitational and Doppler convergences along rays in LTB swiss cheese and single void models. This is not the case when the swiss cheese models are based on non-symmetric Szekeres models. For such models, there is a significant deviation between the exact and ray traced paths and hence also the corresponding convergences. There is also a clear deviation between the exact and ray tracing results obtained when studying both non-symmetric and spherically symmetric Szekeres onion models.
This paper presents a model for the dark halos of galaxy clusters in the framework of Weyl geometric scalar-tensor theory with a MOND-like approximation in the weak field static limit. The basics of this approach are introduced in the first part of the paper; then a three-component halo model is derived (without presupposing prior knowledge of Weyl geometric gravity). The cluster halo is constituted by the scalar field energy and the phantom energy of the gravitational structure, and is thus transparent rather than "dark". It is completely determined by the baryonic mass distribution of hot gas and stars. The model is tested against recent observational data for 19 clusters. The total mass of Coma and 15 other clusters is correctly predicted on the basis of data on baryonic mass within the 1 sigma error intervals; one cluster lies within 2 sigma, and two more within 3 sigma.
In this paper we consider the issue of paradigm evaluation by applying Bayes' theorem along the following nested chain of progressively more complex structures: i) parameter estimation (within a model), ii) model selection and comparison (within a paradigm), iii) paradigm evaluation. In such a chain the Bayesian evidence works both as the posterior's normalization at a given level and as the likelihood function at the next level up. Whilst raising no objections to the standard application of the procedure at the two lowest levels, we argue that it should receive an essential modification when evaluating paradigms, in view of the issue of falsifiability. By considering toy models we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor. We argue that the evidence for a paradigm should not only be high for a given dataset, but exceptional with respect to what it would have been, had the data been different. We propose a measure of falsifiability (which we term predictivity), and a prior to be incorporated into the Bayesian framework, suitably penalising unfalsifiability. We apply this measure to inflation seen as a whole, and to a scenario where a specific inflationary model is hypothetically deemed as the only one viable as a result of information alien to cosmology (e.g. Solar System gravity experiments, or particle physics input). We conclude that cosmic inflation is currently difficult to falsify and thus to be construed as a scientific theory, but that this could change were external/additional information to cosmology to select one of its many models. We also compare this state of affairs to bimetric varying speed of light cosmology.
We generalize the ERA method of PSF correction to more realistic situations. The method re-smears the observed galaxy image (the galaxy image smeared by the PSF) and the PSF image with an appropriate function, called the Re-Smearing Function (RSF), to make new images which have the same ellipticity as the lensed (pre-PSF) galaxy image. It has been shown that, for simple PSF shapes, the method avoids a systematic error arising from an approximation used in the usual moment-based PSF corrections such as KSB. By adopting an idealized PSF, we generalize the ERA method to arbitrary PSFs. This is confirmed with simulated complex PSF shapes. We also consider the effect of pixel noise and find that it causes a systematic overestimation.
We study tachyon inflation within the $N$--formalism, which takes a prescription for the small Hubble flow slow--roll parameter $\epsilon_1$ as a function of the large number of $e$-folds $N$. This leads to a classification of models through their behaviour at large $N$. In addition to the perturbative $N$ class, we introduce the polynomial and exponential classes for the $\epsilon_1$ parameter. With this formalism we reconstruct a large number of potentials used previously in the literature for tachyon field inflation. We also obtain new families of potentials from the polynomial class. We characterize the realizations of tachyon inflation by computing the usual cosmological observables at first and second order in the Hubble flow slow--roll parameters. This allows us to look at observable differences between tachyon and canonical scalar field inflation. The analysis of observables in light of the Planck 2015 data shows the viability of some of these models, mostly for certain realizations of the polynomial and exponential classes.
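At first order in the Hubble flow parameters (where the tachyon and canonical expressions coincide), the observables for the perturbative class $\epsilon_1 = \beta/N$ can be evaluated directly. A minimal sketch, with the values of $\beta$ and $N$ chosen arbitrarily for illustration:

```python
# First-order slow-roll observables for the perturbative class eps1 = beta/N,
# where N counts the e-folds before the end of inflation.
def observables(beta, N):
    eps1 = beta / N                  # first Hubble flow parameter
    eps2 = 1.0 / N                   # eps2 = d ln(eps1)/dN_e for eps1 = beta/N
    n_s = 1.0 - 2.0 * eps1 - eps2    # scalar spectral index (first order)
    r = 16.0 * eps1                  # tensor-to-scalar ratio (first order)
    return n_s, r

# beta = 1, N = 60 are illustrative choices, not fits to Planck data.
n_s, r = observables(beta=1.0, N=60.0)
print(n_s, r)
```

At second order, and for the tachyon field specifically, the expressions pick up model-dependent corrections, so this sketch only captures the leading behaviour.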
The B-mode polarization of the cosmic microwave background on large scales has been considered as a probe of gravitational waves from cosmic inflation. Ongoing and future experiments will, however, suffer from contamination due to B-modes of non-primordial origin, one of which is the lensing-induced B-mode polarization. Subtraction of the lensing B-modes, usually referred to as delensing, will be required for further improvement of the detection sensitivity to the gravitational waves. In such experiments, knowledge of the statistical properties of the B-modes after delensing is indispensable to likelihood analysis, particularly because the lensing B-modes are known to be non-Gaussian. In this paper, we study the non-Gaussian structure of the delensed B-modes on large scales, comparing them with those of the lensing B-modes. In particular, we investigate the power spectrum correlation matrix and the probability distribution function (PDF) of the power spectrum amplitude. Assuming an experiment in which the quadratic delensing is an almost optimal method, we find that delensing reduces correlations of the lensing B-mode power spectra between different multipoles, and that the PDF of the power spectrum amplitude is well described as a normal distribution function with a variance larger than that in the case of a Gaussian field. These features are well captured by an analytic model based on the 4th order Edgeworth expansion. As a consequence of the non-Gaussianity, the constraint on the tensor-to-scalar ratio after delensing is degraded by approximately a few percent, depending on the multipole range included in the analysis.
The paucity of observed supermassive black hole binaries (SMBHBs) may imply that the gravitational wave background (GWB) from this population is anisotropic, rendering existing analyses sub-optimal. We present the first constraints on the angular distribution of a nanohertz stochastic GWB from circular, inspiral-driven SMBHBs using the $2015$ European Pulsar Timing Array data [Desvignes et al. (in prep.)]. Our analysis of the GWB in the $\sim 2 - 90$ nHz band shows consistency with isotropy, with the strain amplitude in $l>0$ spherical harmonic multipoles $\lesssim 40\%$ of the monopole value. We expect that these more general techniques will become standard tools to probe the angular distribution of source populations.
Infrared (IR) luminosity is fundamental to understanding the cosmic star formation history and AGN evolution, since their most intense stages are often obscured by dust. The Japanese infrared satellite AKARI provided unique data sets to probe both, at low and high redshift. AKARI performed an all-sky survey in 6 IR bands (9, 18, 65, 90, 140, and 160$\mu$m) with 3-10 times better sensitivity than IRAS, covering the crucial far-IR wavelengths across the peak of the dust emission. Combined with a better spatial resolution, AKARI can measure the total infrared luminosity ($L_{TIR}$) of individual galaxies much more precisely, and thus the total infrared luminosity density of the local Universe. In the AKARI NEP deep field, we construct restframe 8$\mu$m, 12$\mu$m, and total infrared (TIR) luminosity functions (LFs) at 0.15$<z<$2.2 using 4128 infrared sources. The continuous filter coverage in the mid-IR wavelengths (2.4, 3.2, 4.1, 7, 9, 11, 15, 18, and 24$\mu$m) by the AKARI satellite allows us to estimate restframe 8$\mu$m and 12$\mu$m luminosities without using a large extrapolation based on an SED fit, which was the largest uncertainty in previous work. By combining these two results, we reveal the dust-hidden cosmic star formation history and AGN evolution from $z$=0 to $z$=2.2, all probed by the AKARI satellite. The next-generation space infrared telescope, SPICA, will revolutionize our view of the infrared Universe with the superb sensitivity of its cooled 3m space telescope. We conclude with our survey proposal and future prospects for SPICA.
We present infrared galaxy luminosity functions (LFs) in the AKARI North Ecliptic Pole (NEP) deep field using recently-obtained, wider CFHT optical/near-IR images. AKARI has obtained deep images in the mid-infrared (IR), covering 0.6 deg$^2$ of the NEP deep field. However, our previous work was limited to the central area of 0.25 deg$^2$ due to the lack of optical coverage of the full AKARI NEP survey. To rectify the situation, we recently obtained CFHT optical and near-IR images over the entire AKARI NEP deep field. These new CFHT images are used to derive accurate photometric redshifts, allowing us to fully exploit the whole AKARI NEP deep field. AKARI's deep, continuous filter coverage in the mid-IR wavelengths (2.4, 3.2, 4.1, 7, 9, 11, 15, 18, and 24$\mu$m) exists nowhere else, due to filter gaps of other space telescopes. It allows us to estimate restframe 8$\mu$m and 12$\mu$m luminosities without using a large extrapolation based on spectral energy distribution (SED) fitting, which was the largest uncertainty in previous studies. Total infrared luminosity (TIR) is also obtained more reliably due to the superior filter coverage. The resulting restframe 8$\mu$m, 12$\mu$m, and TIR LFs at $0.15<z<2.2$ are consistent with previous works, but with reduced uncertainties, especially at the high luminosity-end, thanks to the wide field coverage. In terms of cosmic infrared luminosity density ($\Omega_{\mathrm{IR}}$), we found that the $\Omega_{\mathrm{IR}}$ evolves as $\propto (1+z)^{4.2\pm 0.4}$.
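The quoted evolution of the cosmic infrared luminosity density is easy to turn into numbers. A one-function sketch, taking $\Omega_{\mathrm{IR}} \propto (1+z)^{4.2\pm 0.4}$ from the abstract (the choice of $z=1$ is arbitrary):

```python
def omega_ir_ratio(z, alpha=4.2):
    """Ratio Omega_IR(z) / Omega_IR(0) for the power-law evolution (1+z)^alpha."""
    return (1.0 + z) ** alpha

# Central value and the quoted 1-sigma range alpha = 4.2 +/- 0.4, at z = 1:
print(omega_ir_ratio(1.0))                                  # central value
print(omega_ir_ratio(1.0, 3.8), omega_ir_ratio(1.0, 4.6))   # lower / upper
```

So under this power law the infrared luminosity density at $z=1$ is roughly a factor of 20 higher than today, with the $\pm 0.4$ uncertainty on the index spanning roughly a factor of 14 to 24.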
Primordial black holes are studied in the Bose-Einstein condensate description of space-time. The question of baryon-number conservation is investigated, with emphasis on the possible formation of bound states of the system's remaining captured baryons. This leads to distinct predictions both for the formation time, which for naively natural assumptions is shown to lie between $10^{-12}\,{\rm s}$ and $10^{12}\,{\rm s}$ after the Big Bang, and for the remnant's mass, yielding approximately $3 \cdot 10^{23}\,{\rm kg}$ in the same scheme. The consequences for astrophysically formed black holes are also considered.
We present a series of hydrodynamic simulations of isolated galaxies with stellar mass of $10^{9} \, \rm{M}_{\odot}$. The models use a resolution of $750 \, \rm{M}_{\odot}$ per particle and include a treatment for the full non-equilibrium chemical evolution of ions and molecules (157 species in total), along with gas cooling rates computed self-consistently using the non-equilibrium abundances. We compare these to simulations evolved using cooling rates calculated assuming chemical (including ionisation) equilibrium, and we consider a wide range of metallicities and UV radiation fields, including a local prescription for self-shielding by gas and dust. We find higher star formation rates and stronger outflows at higher metallicity and for weaker radiation fields, as gas can more easily cool to a cold (few hundred Kelvin) star forming phase under such conditions. Contrary to variations in the metallicity and the radiation field, non-equilibrium chemistry generally has no strong effect on the total star formation rates or outflow properties. However, it is important for modelling molecular outflows. For example, the mass of H$_{2}$ outflowing with velocities $> 50 \, \rm{km} \, \rm{s}^{-1}$ is enhanced by a factor $\sim 20$ in non-equilibrium. We also compute the observable line emission from CII and CO. Both are stronger at higher metallicity, while CII and CO emission are higher for stronger and weaker radiation fields respectively. We find that CII is generally unaffected by non-equilibrium chemistry. However, emission from CO varies by a factor of $\sim 2 - 4$. This has implications for the mean $X_{\rm{CO}}$ conversion factor between CO emission and H$_{2}$ column density, which we find is lowered by up to a factor $\sim 2.3$ in non-equilibrium, and for the fraction of CO-dark molecular gas.
In the early seventies, Alan Sandage defined cosmology as the search for two numbers: the Hubble parameter ${{H}_{0}}$ and the deceleration parameter ${{q}_{0}}$. The first of these two basic cosmological parameters describes the linear part of the time dependence of the scale factor. Treating the Universe as a dynamical system, it is natural to assume that it is non-linear: indeed, linearity is nothing more than an approximation, while non-linearity represents the generic case. It is evident that future models of the Universe must take into account different aspects of its evolution. Since the scale factor is the only dynamical variable, the quantities which determine its time dependence must be essentially present in all aspects of the Universe's evolution. Basic characteristics of the cosmological evolution, both static and dynamical, can be expressed in terms of the parameters ${{H}_{0}}$ and ${{q}_{0}}$. These very parameters (together with higher time derivatives of the scale factor) enable us to construct a model-independent kinematics of the cosmological expansion.

The time dependence of the scale factor reflects the main events in the history of the Universe. Moreover, it is the deceleration parameter that dictates the expansion rate of the Hubble sphere and determines the dynamics of the observable galaxy number: depending on the sign of the deceleration parameter, this number either grows (in the case of decelerated expansion), or we are going to stay absolutely alone in the cosmos (if the expansion is accelerated).

The intended purpose of the report is reflected in its title --- "Cosmology in terms of the deceleration parameter". We would like to show that practically any aspect of the cosmological evolution is tightly bound to the deceleration parameter. This is the second part of the report; for the first part, see this http URL
The XMASS project is designed for multiple physics goals using highly-purified liquid xenon scintillator in an ultra-low radioactivity environment. As the first stage of the project, a detector with 835 kg of liquid xenon was constructed and is being operated. In this paper, we present results from our commissioning data, the current status of the experiment, and the next step of the project.
In the absence of CMB precision measurements, a Taylor expansion has often been invoked to parametrize the Hubble flow function during inflation. The standard "horizon flow" procedure implicitly relies on this assumption. However, the recent Planck results indicate a strong preference for plateau inflation, which suggests the use of Pad\'e approximants instead. We propose a novel method that provides analytic solutions of the flow equations for a given parametrization of the Hubble function. This method is illustrated in the Taylor and Pad\'e cases, for low order expansions. We then present the results of a full numerical treatment scanning larger order expansions, and compare these parametrizations in terms of convergence, prior dependence, predictivity and compatibility with the data. Finally, we highlight the implications for potential reconstruction methods.
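The contrast between Taylor and Padé parametrizations can be illustrated on a toy plateau-like function; this is a sketch of the approximants themselves, not of the paper's flow-equation method, and the test function $f(x)=1/(1+x^2)$ is an arbitrary stand-in for a plateau:

```python
def pade_02(c):
    """[0/2] Pade approximant p0 / (1 + q1 x + q2 x^2) built by matching the
    Taylor coefficients c = [c0, c1, c2] (requires c0 != 0)."""
    p0 = c[0]
    q1 = -c[1] / c[0]
    q2 = -(c[2] + c[1] * q1) / c[0]
    return lambda x: p0 / (1.0 + q1 * x + q2 * x**2)

# Plateau-like test function f(x) = 1/(1+x^2), Taylor series 1 - x^2 + ...
c = [1.0, 0.0, -1.0]
f_pade = pade_02(c)
taylor = lambda x: c[0] + c[1] * x + c[2] * x**2

# Away from the expansion point the truncated Taylor series blows up,
# while the Pade approximant stays bounded (here it is even exact).
x = 2.0
print(1.0 / (1.0 + x**2), taylor(x), f_pade(x))
```

The same mechanism is what makes Padé approximants a natural choice for plateau inflation: a low-order rational function can stay flat at large field values where any truncated polynomial must run away.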
Cosmic voids may be very useful in testing fundamental aspects of cosmology. Yet observationally voids can only be seen as regions with a deficit of bright galaxies. To study how biased galaxies trace matter underdensities and how the properties of voids depend on those of the tracer galaxy population, we use a $\Lambda$CDM N-body simulation populated with mock galaxies based on the halo occupation distribution (HOD) model. We identify voids in these mocks using the ZOBOV void finder and measure their abundances, sizes, tracer densities, and dark matter content. To separate the effects of bias from those of sampling density, we do the same for voids traced by randomly down-sampled subsets of the dark matter particles in the simulation. We find that galaxy bias reduces the total number of voids by $\sim50\%$ and can dramatically change their size distribution. The matter content of voids in biased and unbiased tracers also differs. Using simulations to accurately estimate the cosmological constraints that can be obtained from voids therefore requires the use of realistic mock galaxy catalogues. We discuss aspects of the dark matter content of voids that can be deduced from properties of the tracer distribution, such as the void size and the minimum tracer number density. In particular we consider the compensation of the total mass deficit in voids and find that the distinction between over- and under-compensated voids is not a function of void size alone, as has previously been suggested. However, we find a simple linear relationship between the average density of tracers in the void and the total mass compensation on much larger scales. The existence of this linear relationship holds independent of the bias and sampling density of the tracers. This provides a universal tool to classify void environments and will be important for the use of voids in observational cosmology.
Recently, we generalized the Bekenstein-Hawking entropy formula for black holes embedded in expanding Friedmann universes. In this letter, we begin the study of this new formula, obtaining the first law of thermodynamics for dynamical apparent horizons. In this regard we derive a generalized expression for the internal energy $U$, together with a distinction between the dynamical temperature $T_D$ of apparent horizons and the related one following from thermodynamic formulas. Remarkably, when the expression for $U$ is applied to the apparent horizon of the universe, we find that this internal energy is a constant of motion. Our calculations thus show that the total energy of our spatially flat universe, including the gravitational contribution, when calculated at the apparent horizon, is a universal constant that can be set to zero from simple dimensional considerations. This strongly supports the holographic principle.
We combined the spectroscopic information from the 3D-HST survey with the PEP/Herschel data to characterize the H\alpha dust attenuation properties of a sample of 79 normal star-forming galaxies at $0.7\leq z\leq1.5$ in the GOODS-S field. The sample was selected in the far-IR, at \lambda=100 and/or 160 \mu m, and only includes galaxies with a secure H\alpha detection (S/N>3). From the low resolution 3D-HST spectra we measured z and F(H\alpha) for the whole sample, rescaling the observed flux by a constant factor of 1.2 to remove the contamination by [NII]. The stellar masses, infrared and UV luminosities were derived from the SEDs by fitting multi-band data from GALEX near-UV to SPIRE500 \mu m. We derived the continuum extinction Estar(B-V) from both the IRX ratio and the UV-slope, and found an excellent agreement between them. Galaxies in the sample have 2.6x10^9$\leq$M*$\leq$3.5x10^11 Msun, intense infrared luminosity (L_IR>1.2x10^10 Lsun), a high level of dust obscuration (0.1$\leq$Estar(B-V)$\leq$1.1) and strong H\alpha emission (typical observed fluxes Fobs(H\alpha)$\geq$4.1x10^-17 erg/s/cm2). The nebular extinction was estimated by comparing the observed SFR_H\alpha and the SFR_UV. We obtained f=Estar(B-V)/Eneb(B-V)=0.93$\pm$0.06, i.e. higher than the value measured in the local Universe. This result could be partially due to the adopted selection criteria, picking up the most obscured but also the H\alpha brightest sources. The derived dust correction produces a good agreement between H\alpha and IR+UV SFRs for objects with SFR$\gtrsim$20 Msun/yr and M*$\gtrsim$5x10^10 Msun, while sources with lower SFR and M* seem to require a smaller f-factor (i.e. a higher H\alpha extinction correction). Our results then imply that for our sample the nebular and optical-UV extinctions are comparable, and suggest that the f-factor is a function of both M* and SFR, in agreement with previous studies.
In this paper, a modified Eddington-inspired-Born-Infeld (EiBI) theory with a pure trace term $g_{\mu\nu}R$ added to the determinantal action is analysed from a cosmological point of view. It corresponds to the most general action constructed from a rank-two tensor that contains up to first-order terms in curvature. This term can equally be seen as a conformal factor multiplying the metric $g_{\mu\nu}$. This very interesting type of amendment has not been considered within the Palatini formalism, despite the large number of works on Born-Infeld-inspired theories of gravity. This model can provide smooth bouncing solutions which were not allowed in the EiBI model for the same EiBI coupling. Most interestingly, for a radiation-filled universe there are some regions of the parameter space that can naturally lead to a de Sitter inflationary stage without the need for any exotic matter field. Finally, in this model we discover a new type of cosmic "quasi-sudden" singularity, where the cosmic time derivative of the Hubble rate becomes very large but remains finite at a finite cosmic time.
The Peccei-Quinn mechanism suffers from the problem of the isocurvature perturbations. The isocurvature perturbations are suppressed if the Peccei-Quinn breaking scale is large during inflation. The oscillation of the Peccei-Quinn breaking field after inflation, however, leads to the formation of domain walls due to the parametric resonance effect. In this paper, we discuss the evolution of the Peccei-Quinn breaking field after inflation in detail, and propose a model where the parametric resonance is ineffective and hence domain walls are not formed. We also discuss consistency of our model with supersymmetric theory.
In four-particle scattering processes with transfer of mass, unlike in mergers, where mass can only increase, the mass of the most massive galaxies may be reduced. We consider an elementary model describing such a process. In this way, we aim to explain the observed phenomenon of downsizing, in which the growth of the characteristic mass of the heaviest galaxies over cosmological time is replaced by its reduction.
We measure the location and energetics of a SIV BALQSO outflow. This outflow has a velocity of 10,800 km s$^{-1}$ and a kinetic luminosity of $10^{45.7}$ erg s$^{-1}$, which is 5.2% of the Eddington luminosity of the quasar. From collisional excitation models of the observed SIV$/$SIV* absorption troughs, we measure a hydrogen number density of $n_\mathrm{\scriptscriptstyle H}=10^{4.3}$ cm$^{-3}$, which allows us to determine that the outflow is located 110 pc from the quasar. Since SIV is formed in the same ionization phase as CIV, our results can be generalized to the ubiquitous CIV BALs. Our accumulated distance measurements suggest that observed BAL outflows are located much farther from the central source than is generally assumed (0.01-0.1 pc).
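As a quick, purely illustrative consistency check on the quoted numbers: a kinetic luminosity of $10^{45.7}$ erg s$^{-1}$ equal to 5.2% of the Eddington luminosity implies $L_{\rm Edd}\approx10^{47}$ erg s$^{-1}$:

```python
import math

L_kin = 10**45.7          # kinetic luminosity, erg/s (from the abstract)
fraction = 0.052          # 5.2% of the Eddington luminosity (from the abstract)
L_edd = L_kin / fraction  # implied Eddington luminosity of the quasar
print(f"log10(L_Edd / erg s^-1) = {math.log10(L_edd):.2f}")  # -> 46.98
```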
We discuss how a cyclic model for the flat universe can be constructively derived from Loop Quantum Gravity. This model has a lower bounce, at small values of the scale factor, which shares many similarities with that of Loop Quantum Cosmology. We find that quantum gravity corrections can also be relevant at energy densities much smaller than the Planckian one, and that they can induce an upper bounce at large values of the scale factor.
We discuss initial conditions for the recently proposed Imperfect Dark Matter (Modified Dust). We show that they are adiabatic under fairly moderate assumptions about the cosmological evolution of the Universe at the relevant times.
We compare DNS calculations of homogeneous isotropic turbulence with the statistical properties of intra-cluster turbulence from the Matryoshka Run (Miniati 2014) and find remarkable similarities between their inertial ranges. This allows us to use the time-dependent statistical properties of intra-cluster turbulence to evaluate dynamo action in the intra-cluster medium, based on earlier results from numerically resolved nonlinear magneto-hydrodynamic turbulent dynamo simulations (Beresnyak 2012). We argue that this approach is necessary (a) to properly normalize dynamo action to the available intra-cluster turbulent energy and (b) to overcome the limitations of the low Reynolds numbers affecting current numerical models of the intra-cluster medium. We find that while the properties of the intra-cluster magnetic field are largely insensitive to the value and origin of the seed field, the resulting values for the Alfv\'en speed and the outer scale of the magnetic field are consistent with current observational estimates, essentially confirming the idea that the magnetic field in today's galaxy clusters is a record of their past turbulent activity.
We estimate the incidence of multiply-imaged AGNs among the optical counterparts of X-ray selected point-like sources in the XXL field. We also derive the expected statistical properties of this sample, such as the redshift distribution of the lensed sources and of the deflectors that lead to the formation of multiple images, modelling the deflectors with both spherical (SIS) and ellipsoidal (SIE) singular isothermal mass distributions. We further assume that the XXL survey sample has the same overall properties as the smaller XMM-COSMOS sample restricted to the same flux limits, taking into account the detection probability of the XXL survey. Among the X-ray sources with a flux in the [0.5-2] keV band larger than 3.0x10$^{-15}$ erg cm$^{-2}$ s$^{-1}$ and with optical counterparts brighter than an r-band magnitude of 25, we expect ~20 multiply-imaged sources. Out of these, ~16 should be detected if the search is made among the seeing-limited images of the X-ray AGN optical counterparts, and only one of them should be composed of more than two lensed images. Finally, we study the impact of the cosmological model on the expected fraction of lensed sources.
The influence of dark matter particle decay on the baryon-to-photon ratio has been studied for different cosmological epochs. We consider different parameter values for the dark matter particles, such as their mass, lifetime, and relative fraction. It is shown that the present-day value of the dark matter density $\Omega_{\rm CDM}=0.26$ is sufficient to produce a variation of the baryon-to-photon ratio of up to $\Delta \eta / \eta \sim 0.01$-$1$ for decays of particles with masses of 10 GeV to 1 TeV. However, such processes can also be accompanied by the emergence of an excess gamma-ray flux. The observational data on the diffuse gamma-ray background are used to place constraints on dark matter decay models and on the maximum possible variation of the baryon-to-photon ratio, $\Delta\eta/\eta\lesssim10^{-5}$. Detection of such a variation of the baryon density in future cosmological experiments could serve as a powerful means of studying the properties of dark matter particles.
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solutions to general classification problems, such as star/galaxy separation. We illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at https://github.com/IftachSadeh/ANNZ
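One of the methods listed, k-nearest neighbours, is simple enough to sketch. The following is a purely illustrative toy estimator on synthetic data, not ANNz2's actual API or implementation:

```python
import numpy as np

def knn_photoz(train_colors, train_z, query_colors, k=5):
    """Estimate photometric redshifts as the mean spectroscopic
    redshift of the k nearest neighbours in colour space."""
    train_colors = np.asarray(train_colors, dtype=float)
    train_z = np.asarray(train_z, dtype=float)
    query_colors = np.atleast_2d(query_colors).astype(float)
    estimates = []
    for q in query_colors:
        d2 = np.sum((train_colors - q) ** 2, axis=1)  # squared distances
        nearest = np.argsort(d2)[:k]                   # k closest objects
        estimates.append(train_z[nearest].mean())
    return np.array(estimates)

# Synthetic training set: one colour that correlates with redshift.
rng = np.random.default_rng(0)
z_train = rng.uniform(0.0, 1.0, 500)
colors_train = np.column_stack([z_train + rng.normal(0.0, 0.05, 500)])
z_est = knn_photoz(colors_train, z_train, [[0.5]])
```

A production code like ANNz2 additionally combines many such learners, optimizes their weighting dynamically, and produces full redshift PDFs rather than point estimates.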
In this presentation we summarize our previous results concerning the evolution of primordial magnetic fields, with and without helicity, during the expansion of the Universe. We address different magnetogenesis scenarios: inflationary magnetogenesis and magnetogenesis at the electroweak and QCD phase transitions. A high Reynolds number in the early Universe ensures strong coupling between the magnetic field and fluid motions. After generation, the subsequent dynamics of the magnetic field is governed by decaying hydromagnetic turbulence. We argue that primordial magnetic fields can be considered as seeds for the observed magnetic fields in galaxies and clusters. The magnetic field strength bounds obtained in our analysis are consistent with the upper and lower limits on extragalactic magnetic fields.
In this paper we calculate the potential sensitivity of the CUORE detector to axions produced in the Sun through the Primakoff process and detected by coherent Bragg conversion via the inverse Primakoff process. The conversion rate is calculated using density functional theory for the electron density and realistic expectations for the energy resolution and background of CUORE. Monte Carlo calculations for $5~\mathrm{y}\times741~\mathrm{kg}=3705~\mathrm{kg\,y}$ of exposure are analyzed using the time correlation of individual events with the theoretical time-dependent counting rate, and lead to an expected limit on the axion-photon coupling of $g_{a\gamma\gamma}<3.83\times 10^{-10}~\mathrm{GeV}^{-1}$ for axion masses less than several eV.
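The quoted exposure is a simple product, easy to verify (illustrative arithmetic only):

```python
exposure_kg_y = 5 * 741   # 5 years x 741 kg of detector mass
print(exposure_kg_y)      # -> 3705 (kg y)
```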
According to the cosmological principle, the large-scale structure of the Universe is homogeneous and isotropic. The observable Universe, however, shows complex structures even on very large scales. The recent discoveries of structures significantly exceeding the transition scale of 370 Mpc pose a challenge to the cosmological principle.
We report here the discovery of the largest regular formation in the observable Universe: a ring with a diameter of 1720 Mpc, displayed by 9 gamma-ray bursts (GRBs), exceeding by a factor of five the transition scale to the homogeneous and isotropic distribution. The ring has a major diameter of $43^\circ$ and a minor diameter of $30^\circ$ at a distance of 2770 Mpc in the 0.78<z<0.86 redshift range, with a probability of $2\times 10^{-6}$ of being the result of a random fluctuation in the GRB count rate.
Evidence suggests that this feature is the projection of a shell onto the plane of the sky. Voids and string-like formations are common features of the large-scale structure. However, these structures have maximum sizes of 150 Mpc, an order of magnitude smaller than the observed GRB ring diameter. Evidence in support of the shell interpretation requires that the temporal information of the transient GRBs be included in the analysis.
This ring-shaped feature is large enough to contradict the cosmological principle. The physical mechanism responsible for it is unknown.
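As a rough back-of-the-envelope check of my own (not from the paper), assuming the quoted angular diameters subtend chords at the quoted distance, the mean of the two chord lengths comes out close to the quoted 1720 Mpc diameter:

```python
import math

distance_mpc = 2770.0           # distance to the ring (from the abstract)
major_deg, minor_deg = 43.0, 30.0  # angular diameters (from the abstract)

def chord(angle_deg, d):
    """Chord length subtended by an angle at distance d."""
    return 2.0 * d * math.sin(math.radians(angle_deg) / 2.0)

mean_diameter = 0.5 * (chord(major_deg, distance_mpc) +
                       chord(minor_deg, distance_mpc))
print(f"{mean_diameter:.0f} Mpc")  # ~1730 Mpc, close to the quoted 1720 Mpc
```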
The "Wide Area VISTA Extra-galactic Survey" (WAVES) is a 4MOST Consortium Design Reference Survey which will use the VISTA/4MOST facility to spectroscopically survey ~2 million galaxies to $r_{\rm AB} < 22$ mag. WAVES consists of two interlocking galaxy surveys ("WAVES-Deep" and "WAVES-Wide"), providing the next two steps beyond the highly successful 1M-galaxy Sloan Digital Sky Survey and the 250k-galaxy Galaxy And Mass Assembly survey. WAVES will enable an unprecedented study of the distribution and evolution of mass, energy, and structures, extending from 1-kpc dwarf galaxies in the local void to the morphologies of 200-Mpc filaments at $z\sim1$. A key aim of both surveys will be to compare comprehensive empirical observations of the spatial properties of galaxies, groups, and filaments against state-of-the-art numerical simulations, in order to distinguish between various Dark Matter models.
We study the predictions for structure formation in an induced gravity dark energy model with a quartic potential. By developing a dedicated Einstein-Boltzmann code, we study self-consistently the dynamics of homogeneous cosmology and of linear perturbations without using any parametrization. By evolving linear perturbations with initial conditions in the radiation era, we accurately recover the quasi-static analytic approximation in the matter dominated era. We use Planck 2013 data and a compilation of baryonic acoustic oscillation (BAO) data to constrain the coupling $\gamma$ to the Ricci curvature and the other cosmological parameters. By connecting the gravitational constant in the Einstein equation to the one measured in a Cavendish-like experiment, we find $\gamma < 0.0012$ at 95% CL with Planck 2013 and BAO data. This is the tightest cosmological constraint on $\gamma$ and on the corresponding derived post-Newtonian parameters. Because of a degeneracy between $\gamma$ and the Hubble constant $H_0$, we show how larger values for $\gamma$ are allowed, but not preferred at a significant statistical level, when local measurements of $H_0$ are combined in the analysis with Planck 2013 data.
We use a sample of 37 of the densest clusters and protoclusters across $1.3 \le z \le 3.2$ from the Clusters Around Radio-Loud AGN (CARLA) survey to study the formation of massive cluster galaxies. We use optical $i'$-band and infrared 3.6$\mu$m and 4.5$\mu$m images to statistically select sources within these protoclusters and measure their median observed colours, $\langle i'-[3.6] \rangle$. We find that the abundance of massive galaxies within the protoclusters increases with decreasing redshift, suggesting these objects may form an evolutionary sequence, with the lower redshift clusters in the sample having similar properties to the descendants of the high redshift protoclusters. We find that the protocluster galaxies have an approximately unevolving observed-frame $i'-[3.6]$ colour across the examined redshift range. We compare the evolution of the $\langle i'-[3.6] \rangle$ colour of massive cluster galaxies with simplistic galaxy formation models. Taking the full cluster population into account, we show that the formation of stars within the majority of massive cluster galaxies occurs over at least 2 Gyr and peaks at $z \sim 2$-3. From the median $i'-[3.6]$ colours we cannot determine the star formation histories of individual galaxies, but their star formation must have been rapidly terminated to produce the observed red colours. Finally, we show that massive galaxies at $z>2$ must have assembled within 0.5 Gyr of forming a significant fraction of their stars. This means that few massive galaxies in $z>2$ protoclusters could have formed via dry mergers.
Type Ia supernovae (SNe Ia) are powerful cosmological "standardizable candles" and the most precise distance indicators. However, a limiting factor in their use for precision cosmology is our ability to correct for the dust extinction toward them. SN 2014J in the starburst galaxy M82, the closest detected SN Ia in three decades, provides unparalleled opportunities to study the dust extinction toward an SN Ia. In order to derive the extinction as a function of wavelength, we model the color excesses toward SN 2014J, which are observationally derived over a wide wavelength range, in terms of dust models consisting of a mixture of silicate and graphite. The resulting extinction laws rise steeply toward the far ultraviolet, even more steeply than that of the Small Magellanic Cloud (SMC). We infer a visual extinction of $A_V \approx 1.9~\rm mag$, a reddening of $E(B-V)\approx1.1~ \rm mag$, and a total-to-selective extinction ratio of $R_V \approx 1.7$, consistent with values previously derived from photometric, spectroscopic, and polarimetric observations. The size distributions of the dust in the interstellar medium toward SN 2014J are skewed toward substantially smaller grains than those of the Milky Way and the SMC.
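The three quoted quantities obey the standard relation $R_V = A_V / E(B-V)$; a quick illustrative check with the numbers from the abstract:

```python
A_V = 1.9            # visual extinction, mag (from the abstract)
E_BV = 1.1           # reddening E(B-V), mag (from the abstract)
R_V = A_V / E_BV     # total-to-selective extinction ratio
print(f"R_V = {R_V:.1f}")  # -> R_V = 1.7
```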
We propose the idea of linking the dark matter issue (considered a major problem of contemporary research in physics) with two other open theoretical questions: one, almost a century old, concerning the existence of an unavoidable ether in general relativity consistent with Mach's principle, and one more recent, concerning the properties of the quantum vacuum of the quantum field theory of strong interactions, Quantum Chromodynamics (QCD). According to this idea, on the one hand, dark matter and dark energy, which according to the current standard model of cosmology represent about 95% of the content of the universe, can be considered as two distinct forms of Mach's ether; on the other hand, dark matter, as a perfect fluid emerging from the QCD vacuum, could be modeled as a Bose-Einstein condensate.
In this paper, we have considered a spatially flat FRW universe filled with pressureless matter and dark energy. We have considered a phenomenological parametrization of the deceleration parameter $q(z)$ and from this we have reconstructed the equation of state for dark energy $\omega_{\phi}(z)$. Using the combination of datasets (SN Ia + Hubble + BAO/CMB), we have constrained the transition redshift $z_t$ (at which the universe switches from a decelerating to an accelerating phase) and have found the best fit value of $z_t$. We have also found that the reconstructed results of $q(z)$ and $\omega_{\phi}(z)$ are in good agreement with the recent observations. The potential term for the present toy model is found to be functionally similar to a Higgs potential.
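For reference, in a flat universe containing only pressureless matter and dark energy, the standard reconstruction relation between the deceleration parameter and the dark-energy equation of state reads (this is the textbook relation, not necessarily the exact parametrization adopted in the paper):

$$q(z) = \frac{1}{2}\left[1 + 3\,\omega_{\phi}(z)\,\Omega_{\phi}(z)\right] \quad\Longrightarrow\quad \omega_{\phi}(z) = \frac{2q(z)-1}{3\,\Omega_{\phi}(z)},$$

where $\Omega_{\phi}(z)$ is the fractional dark-energy density. In particular, $q(z_t)=0$ at the transition redshift, so the dark-energy equation of state there satisfies $\omega_{\phi}(z_t) = -1/[3\,\Omega_{\phi}(z_t)]$.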
Disformal theories of gravity are scalar-tensor theories where the scalar couples derivatively to matter via the Jordan frame metric. These models have recently attracted interest in the cosmological context since they admit accelerating solutions. We derive the solution for a static isolated mass in generic disformal gravity theories and transform it into the parameterised post-Newtonian form. This allows us to investigate constraints placed on such theories by local tests of gravity. The tightest constraints come from preferred-frame effects due to the motion of the Solar System with respect to the evolving cosmological background field. The constraints we obtain improve upon the previous solar system constraints by two orders of magnitude, and constrain the scale of the disformal coupling for generic models to $\mathcal{M} \gtrsim 100$ eV. These constraints render all disformal effects irrelevant for cosmology.
We initiate a study of cosmological implications of sphaleron-mediated CP-violation arising from the electroweak vacuum angle under the reasonable assumption that the semiclassical suppression is lifted at finite temperature. In this article, we explore the implications for existing scenarios of baryogenesis. Many compelling models of baryogenesis rely on electroweak sphalerons to relax a $(B+L)$ charge asymmetry. Depending on the sign of the CP-violating parameter, it is shown that the erasure of positive $(B+L)$ will proceed more or less quickly than the relaxation of negative $(B+L)$. This is a higher order effect in the kinetic equation for baryon number, which we derive here through order $n_{B+L}^2$. Its impact on known baryogenesis models therefore seems minor, since phenomenologically $n_{B+L}$ is much smaller than the entropy density. However, there remains an intriguing unexplored possibility that baryogenesis could be achieved with the vacuum angle alone providing the required CP-violation.
The GAMA survey has now completed its spectroscopic campaign of over 250,000 galaxies ($r<19.8$ mag), and will shortly complete the assimilation of the complementary panchromatic imaging data from GALEX, VST, VISTA, WISE, and Herschel. In the coming years the GAMA fields will be observed by the Australian Square Kilometer Array Pathfinder, allowing a complete study of the stellar, dust, and gas mass constituents of galaxies within the low-z Universe ($z<0.3$). The science directive is to study the distribution of mass, energy, and structure on kpc-Mpc scales over a 3-billion-year timeline. This is being pursued both as an empirical study in its own right and as a benchmark resource against which the outputs from numerical simulations can be compared. GAMA has three particularly compelling aspects which set it apart: completeness, selection, and panchromatic coverage. The very high redshift completeness ($\sim 98$\%) allows for extremely complete and robust pair and group catalogues; the simple selection ($r<19.8$ mag) minimises the selection bias and simplifies its management; and the panchromatic coverage, 0.2$\mu$m - 1m, enables studies of the complete energy distributions for individual galaxies, well-defined sub-samples, and population ensembles (either directly or via stacking techniques). For further details and data releases see: this http URL