Iterative removal of redshift-space distortions from galaxy clustering

2020 ◽  
Vol 497 (3) ◽  
pp. 3451-3471
Author(s):  
Yuchan Wang ◽  
Baojiu Li ◽  
Marius Cautun

ABSTRACT Observations of galaxy clustering are made in redshift space, which results in distortions to the underlying isotropic distribution of galaxies. These redshift-space distortions (RSDs) not only degrade important features of the matter density field, such as the baryonic acoustic oscillation (BAO) peaks, but also pose challenges for the theoretical modelling of observational probes. Here, we introduce an iterative non-linear reconstruction algorithm to remove RSD effects from galaxy clustering measurements, and assess its performance by using mock galaxy catalogues. The new method is found to be able to recover the real-space galaxy correlation function with an accuracy of $\sim 1$ per cent, and to accurately restore the quadrupole to zero, on scales $s\gtrsim 20\,h^{-1}\, {\rm Mpc}$. It also leads to an improvement in the reconstruction of the initial density field, which could help to accurately locate the BAO peaks. An ‘internal calibration’ scheme is proposed to determine the values of cosmological parameters as part of the reconstruction process, and possibilities to break parameter degeneracies are discussed. RSD reconstruction offers a potential way to simultaneously extract the cosmological parameters, initial density field, real-space galaxy positions, and large-scale peculiar velocity field (of the real Universe), making it an alternative to standard perturbative approaches in galaxy clustering analysis that bypasses the need for RSD modelling.
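The iterative idea can be illustrated with a deliberately simplified 1-D toy (all numbers and the single-mode mock below are hypothetical assumptions, not the paper's setup, which works on the full 3-D field): observed redshift-space positions are repeatedly corrected by a Zel'dovich line-of-sight displacement estimated from the current real-space guess.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, ncell = 100.0, 50000, 256        # box [Mpc/h], galaxies, grid cells
f, A, k1 = 0.5, 2.0, 2*np.pi/100.0     # growth rate, displacement amplitude, mode

def wrap_diff(a, b):
    """Signed separation on a periodic box."""
    return (a - b + L/2) % L - L/2

def los_displacement(pos):
    """Zel'dovich line-of-sight displacement estimated from the gridded density."""
    counts, _ = np.histogram(pos, bins=ncell, range=(0.0, L))
    delta = counts / counts.mean() - 1.0
    k = 2*np.pi*np.fft.fftfreq(ncell, d=L/ncell)
    dk = np.fft.fft(delta) * np.exp(-0.5*(k*5.0)**2)   # 5 Mpc/h Gaussian smoothing
    psik = np.zeros_like(dk)
    psik[k != 0] = 1j*dk[k != 0]/k[k != 0]             # 1-D: psi_k = i delta_k / k
    psi = np.fft.ifft(psik).real
    idx = np.clip((pos*ncell/L).astype(int), 0, ncell-1)
    return psi[idx]

# mock catalogue: uniform points displaced by a single sine mode (Zel'dovich flow)
q = rng.uniform(0, L, N)
x = (q + A*np.sin(k1*q)) % L            # real-space positions
s = (x + f*A*np.sin(k1*q)) % L          # redshift-space positions (v ~ f * Psi)

r = s.copy()
for _ in range(3):                       # iteratively strip the RSD shift
    r = (s - f*los_displacement(r)) % L

rms_initial = np.sqrt(np.mean(wrap_diff(s, x)**2))
rms_final = np.sqrt(np.mean(wrap_diff(r, x)**2))
```

In this toy, each pass shrinks the residual displacement by roughly a factor of the growth rate, so a few iterations suffice; the recovered positions `r` end up much closer to the real-space `x` than the redshift-space `s`.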

2020 ◽  
Vol 497 (3) ◽  
pp. 2699-2714
Author(s):  
Xiao Fang (方啸) ◽  
Tim Eifler ◽  
Elisabeth Krause

ABSTRACT Accurate covariance matrices for two-point functions are critical for inferring cosmological parameters in likelihood analyses of large-scale structure surveys. Among various approaches to obtaining the covariance, analytic computation is much faster and less noisy than estimation from data or simulations. However, the transform of covariances from Fourier space to real space involves integrals containing two Bessel functions, which are numerically slow and easily affected by numerical uncertainties. Inaccurate covariances may lead to significant errors in the inference of the cosmological parameters. In this paper, we introduce a 2D-FFTLog algorithm for efficient, accurate, and numerically stable computation of non-Gaussian real-space covariances for both 3D and projected statistics. The 2D-FFTLog algorithm is easily extended to perform real-space bin-averaging. We apply the algorithm to the covariances for galaxy clustering and weak lensing for a Dark Energy Survey Year 3-like and a Rubin Observatory’s Legacy Survey of Space and Time Year 1-like survey, and demonstrate that for both surveys, our algorithm can produce numerically stable angular bin-averaged covariances with the flat-sky approximation, which are sufficiently accurate for inferring cosmological parameters. The code CosmoCov for computing the real-space covariances with or without the flat-sky approximation is released along with this paper.
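For reference, the slow brute-force transform that a 2D-FFTLog approach is designed to replace can be sketched as direct quadrature over a product of spherical Bessel functions. The power spectrum, volume, and bins below are toy assumptions, and only the Gaussian (diagonal-in-k) part of the covariance is kept.

```python
import numpy as np

def j0(x):
    """Spherical Bessel function of order zero, j0(x) = sin(x)/x."""
    return np.sinc(x/np.pi)

def pk(k):
    """Toy power spectrum (arbitrary shape and units, for illustration only)."""
    return 1e4 * k / (1.0 + (k/0.02)**2)**2

V = 1e9                            # assumed survey volume [(Mpc/h)^3]
r = np.linspace(20.0, 120.0, 6)    # separation bins [Mpc/h]
k = np.linspace(1e-4, 1.0, 4000)   # integration grid [h/Mpc]
dk = k[1] - k[0]

# Cov[xi(r1), xi(r2)] = (2/V) * Int dk k^2/(2 pi^2) P(k)^2 j0(k r1) j0(k r2)
J = j0(np.outer(r, k))                        # shape (n_r, n_k)
w = (k**2/(2*np.pi**2)) * pk(k)**2            # positive integrand weights
cov = (2.0/V) * ((J * w) @ J.T) * dk          # brute-force double-Bessel sum
```

Because the weights are positive, the result is a Gram matrix: symmetric with a positive diagonal. The cost is O(n_r^2 n_k) per covariance block, which is what motivates the FFT-based log-space evaluation.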


2020 ◽  
Vol 498 (1) ◽  
pp. 128-143 ◽  
Author(s):  
Faizan G Mohammad ◽  
Will J Percival ◽  
Hee-Jong Seo ◽  
Michael J Chapman ◽  
D Bianchi ◽  
...  

ABSTRACT The completed extended Baryon Oscillation Spectroscopic Survey (eBOSS) catalogues contain redshifts of 344 080 quasars at 0.8 < z < 2.2, 174 816 luminous red galaxies between 0.6 < z < 1.0, and 173 736 emission-line galaxies over 0.6 < z < 1.1 in order to constrain the expansion history of the Universe and the growth rate of structure through clustering measurements. Mechanical limitations of the fibre-fed spectrograph on the Sloan telescope prevent two fibres being placed closer than 62 arcsec in a single pass of the instrument. These ‘fibre collisions’ strongly correlate with the intrinsic clustering of targets and can bias measurements of the two-point correlation function, resulting in a systematic error on the inferred values of the cosmological parameters. We combine the new techniques of pairwise-inverse probability and angular upweighting (PIP+ANG) to correct the clustering measurements for the effect of fibre collisions. Using mock catalogues, we show that our corrections provide unbiased measurements, within data precision, of both the projected correlation function $w_p(r_p)$ and the redshift-space correlation function multipoles $\xi_\ell(s)$ ($\ell = 0, 2, 4$) down to $0.1\, h^{-1}{\rm Mpc}$, regardless of the tracer type. We apply the corrections to the eBOSS DR16 catalogues. We find that, on scales $s\gtrsim 20\, h^{-1}{\rm Mpc}$ for $\xi_\ell$, as used to make baryon acoustic oscillation and large-scale redshift-space distortion measurements, approximate methods such as nearest-neighbour upweighting are sufficiently accurate given the statistical errors of the data. Using the PIP method, for the first time for a spectroscopic program of the Sloan Digital Sky Survey, we are able to successfully access the one-halo term in the clustering measurements down to $\sim 0.1\, h^{-1}{\rm Mpc}$ scales.
Our results will therefore allow studies that use the small-scale clustering to strengthen the constraints on both cosmological parameters and the halo occupation distribution models.
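The pairwise-inverse-probability idea can be sketched with bitmasks: each target carries one bit per alternative fibre-assignment realization, and a pair is upweighted by the inverse of the fraction of realizations in which both members received fibres. The masks below are hypothetical four-target examples, not survey data.

```python
NREAL = 8  # number of alternative fibre-assignment realizations (toy value)

# hypothetical per-target bitmasks: bit i set => target got a fibre in realization i
masks = {
    "a": 0b11111111,   # observed in every realization
    "b": 0b11110000,
    "c": 0b10101010,
    "d": 0b11001100,
}

def pip_weight(mi, mj, nreal=NREAL):
    """PIP weight for a pair: nreal / (# realizations observing both targets)."""
    both = bin(mi & mj).count("1")
    if both == 0:
        raise ValueError("pair never co-observed; PIP weight undefined")
    return nreal / both

w_ab = pip_weight(masks["a"], masks["b"])   # co-observed in 4 of 8 realizations
```

Summing these weights over observed pairs, rather than counting each pair once, is what removes the fibre-collision bias on average; the angular upweighting (ANG) step then corrects the residual pairs that are never co-observed in any realization.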


2019 ◽  
Vol 15 (12) ◽  
pp. 6859-6864
Author(s):  
Jiayong Zhang ◽  
Yongqiang Cheng ◽  
Wenchang Lu ◽  
Emil Briggs ◽  
Anibal J. Ramirez-Cuesta ◽  
...  

2014 ◽  
Vol 4 (1-2) ◽  
pp. 42-45 ◽  
Author(s):  
A. Tugay

Filaments are clearly visible in galaxy distributions, but they are difficult to detect with computer algorithms. Most filament-detection methods can be applied only to numerical simulations of large-scale structure, so new, simple, and effective methods for detecting real filaments need to be developed. In this work, the method of a smoothed galaxy density field was applied to SDSS data on galaxy positions. Five concentric radial layers of 100 Mpc each are appropriate for filament detection. Two methods were tested on the first layer, and one further method is proposed.
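The smoothed-density-field step can be sketched as follows, assuming a 2-D mock catalogue and FFT-based Gaussian smoothing with periodic boundaries (a toy geometry standing in for the SDSS radial layers):

```python
import numpy as np

rng = np.random.default_rng(7)
ngrid, box = 128, 500.0                      # cells per side, layer size [Mpc]
xy = rng.uniform(0, box, size=(20000, 2))    # mock 2-D galaxy positions

# grid the galaxies into counts-in-cells and form the overdensity field
counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                              bins=ngrid, range=[[0, box], [0, box]])
delta = counts / counts.mean() - 1.0

# Gaussian smoothing in Fourier space with radius R = 10 Mpc
R = 10.0
k = 2*np.pi*np.fft.fftfreq(ngrid, d=box/ngrid)
k2 = k[:, None]**2 + k[None, :]**2
smoothed = np.fft.ifft2(np.fft.fft2(delta)*np.exp(-0.5*k2*R**2)).real
```

Filament candidates would then be identified as connected ridges of the smoothed overdensity above some threshold; the smoothing preserves the mean of the field while suppressing shot noise.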


2020 ◽  
Vol 495 (2) ◽  
pp. 1613-1640 ◽  
Author(s):  
Mehdi Rezaie ◽  
Hee-Jong Seo ◽  
Ashley J Ross ◽  
Razvan C Bunescu

ABSTRACT Robust measurements of cosmological parameters from galaxy surveys rely on our understanding of systematic effects that impact the observed galaxy density field. In this paper, we present, validate, and implement the idea of adopting the systematics mitigation method of artificial neural networks for modelling the relationship between the target galaxy density field and various observational realities including but not limited to Galactic extinction, seeing, and stellar density. Our method by construction allows a wide class of models and alleviates overtraining by performing k-fold cross-validation and dimensionality reduction via backward feature elimination. By permuting the choice of the training, validation, and test sets, we construct a selection mask for the entire footprint. We apply our method on the extended Baryon Oscillation Spectroscopic Survey (eBOSS) Emission Line Galaxies (ELGs) selection from the Dark Energy Camera Legacy Survey (DECaLS) Data Release 7 and show that the spurious large-scale contamination due to imaging systematics can be significantly reduced by up-weighting the observed galaxy density using the selection mask from the neural network and that our method is more effective than the conventional linear and quadratic polynomial functions. We perform extensive analyses on simulated mock data sets with and without systematic effects. Our analyses indicate that our methodology is more robust to overfitting compared to the conventional methods. This method can be utilized in the catalogue generation of future spectroscopic galaxy surveys such as eBOSS and Dark Energy Spectroscopic Instrument (DESI) to better mitigate observational systematics.
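For contrast with the neural-network approach, the conventional linear-template baseline that the paper compares against can be sketched as an ordinary least-squares fit followed by inverse upweighting. The templates and the contamination model below are synthetic assumptions, not survey maps.

```python
import numpy as np

rng = np.random.default_rng(3)
npix = 5000
# hypothetical standardized imaging templates per sky pixel
stellar = rng.normal(size=npix)   # stellar density proxy
extinct = rng.normal(size=npix)   # Galactic extinction proxy
seeing = rng.normal(size=npix)    # seeing proxy

# mock observed galaxy density contaminated linearly by the stellar template
true_density = 1.0 + 0.05*rng.normal(size=npix)
observed = true_density * (1.0 + 0.2*stellar)

# linear baseline: least-squares fit of the density against all templates
X = np.column_stack([np.ones(npix), stellar, extinct, seeing])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
predicted = X @ coef
weights = predicted.mean() / predicted     # upweight systematically depleted pixels
corrected = observed * weights
```

After upweighting, the corrected density should be nearly uncorrelated with the contaminating template. The neural-network version generalizes the linear map `X @ coef` to a flexible non-linear function, with cross-validation and feature elimination guarding against overtraining.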


2020 ◽  
Vol 500 (3) ◽  
pp. 4173-4180
Author(s):  
Stephen Stopyra ◽  
Hiranya V Peiris ◽  
Andrew Pontzen

ABSTRACT Cosmic voids provide a powerful probe of the origin and evolution of structures in the Universe because their dynamics can remain near-linear to the present day. As a result, they have the potential to connect large-scale structure at late times to early Universe physics. Existing ‘watershed’-based algorithms, however, define voids in terms of their morphological properties at low redshift. The degree to which the resulting regions exhibit linear dynamics is consequently uncertain, and there is no direct connection to their evolution from the initial density field. A recent void definition addresses these issues by considering ‘anti-haloes’. This approach consists of inverting the initial conditions of an N-body simulation to swap overdensities and underdensities. After evolving the pair of initial conditions, anti-haloes are defined by the particles within the inverted simulation that are inside haloes in the original (uninverted) simulation. In this work, we quantify the degree of non-linearity of both anti-haloes and watershed voids using the Zel’dovich approximation. We find that non-linearities are introduced by voids with radii less than $5\, \mathrm{Mpc}\, h^{-1}$, and that both anti-haloes and watershed voids can be made into highly linear sets by removing these voids.
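The inversion at the heart of the anti-halo construction can be illustrated at Zel'dovich order, where flipping the sign of the initial density field exactly reverses the displacements, so overdensities and underdensities trade places. This 1-D toy is only the linear limit; the paper's anti-haloes come from fully non-linear N-body runs, and quantifying the departure from this exact antisymmetry is precisely the point.

```python
import numpy as np

rng = np.random.default_rng(5)
n, L = 64, 200.0
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)

# 1-D toy Gaussian initial overdensity field with zero mean
delta = rng.normal(size=n)
delta -= delta.mean()

def zeldovich_psi(field):
    """1-D Zel'dovich displacement: psi_k = i delta_k / k."""
    dk = np.fft.fft(field)
    psik = np.zeros_like(dk)
    psik[k != 0] = 1j*dk[k != 0]/k[k != 0]
    return np.fft.ifft(psik).real

psi = zeldovich_psi(delta)
psi_inverted = zeldovich_psi(-delta)   # inverted initial conditions
```

Because the Zel'dovich map is linear in the density field, `psi_inverted` is exactly `-psi`; in the full N-body evolution this relation only holds approximately, and breaks down most strongly for small voids.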


2018 ◽  
Vol 27 (09) ◽  
pp. 1850102 ◽  
Author(s):  
Antonio Enea Romano

The recent analysis of low-redshift supernovae (SN) has increased the apparent tension between the values of $H_0$ estimated from low-redshift observations and from high-redshift observations such as the cosmic microwave background (CMB) radiation. At the same time, other observations have provided evidence of the existence of local radial inhomogeneities extending in different directions up to a redshift of about [Formula: see text]. About [Formula: see text] of the Cepheids used for SN calibration are directly affected because they are located along the directions of these inhomogeneities. We compute with different methods the effects of these inhomogeneities on the low-redshift luminosity and angular diameter distance using an exact solution of Einstein’s equations, linear perturbation theory, and a low-redshift expansion. We confirm that at low redshift the dominant effect is the nonrelativistic Doppler redshift correction, which is proportional to the volume-averaged density contrast and to the comoving distance from the center. We derive a new simple formula relating the luminosity distance directly to the monopole of the density contrast, which does not involve any metric perturbation. We then use it to develop a new inversion method to reconstruct the monopole of the density field from the deviations of the redshift-uncorrected observed luminosity distance with respect to the $\Lambda$CDM prediction based on cosmological parameters obtained from large-scale observations. The inversion method confirms the existence of inhomogeneities whose effects were not previously taken into account, because the [Formula: see text] [G. Lavaux and M. J. Hudson, Mon. Not. R. Astron. Soc. 416 (2011) 2840] density field maps used to obtain the peculiar velocity [J. Carrick et al., Mon. Not. R. Astron. Soc. 450 (2015) 317] for redshift correction were for [Formula: see text], which is not a sufficiently large scale to detect the presence of inhomogeneities extending up to [Formula: see text]. The inhomogeneity does not affect the high-redshift luminosity distance because the volume-averaged density contrast tends to zero asymptotically, making the value of $H_0$ obtained from CMB observations insensitive to any local structure. The inversion method can provide a unique tool to reconstruct the density field at high redshift, where only SN data are available, and in particular to normalize the density field correctly with respect to the average large-scale density of the Universe.
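The low-redshift Doppler argument can be made concrete with the standard linear-theory relations for a spherical region. The formulas below are schematic assumptions standing in for the paper's exact expressions, but they show the chain of reasoning: an underdensity drives an outflow, the outflow biases the locally inferred expansion rate, and inverting that bias recovers the density monopole.

```python
# Schematic linear-theory relations for a spherical region (assumed forms):
#   outflow velocity at radius r:  v(r)   = -(1/3) * f * H0 * r * delta_bar
#   local Hubble-rate bias:        dH/H   = -(1/3) * f * delta_bar
# so the monopole can be read off as  delta_bar = -3 * (dH/H) / f.
f = 0.5           # linear growth rate (assumed value)
H0 = 70.0         # km/s/Mpc
delta_bar = -0.3  # volume-averaged contrast of a hypothetical local underdensity
r = 100.0         # Mpc

v = -(1.0/3.0)*f*H0*r*delta_bar       # km/s; positive means outflow
dH_over_H = -(1.0/3.0)*f*delta_bar    # fractional bias of the local H0 estimate
delta_recovered = -3.0*dH_over_H/f    # the schematic inversion step
```

A local underdensity (delta_bar < 0) thus yields an outflow and a locally high $H_0$, and the inversion recovers the input monopole exactly in this linear toy.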


2019 ◽  
Vol 621 ◽  
pp. A69 ◽  
Author(s):  
Doogesh Kodi Ramanah ◽  
Guilhem Lavaux ◽  
Jens Jasche ◽  
Benjamin D. Wandelt

We present a large-scale Bayesian inference framework to constrain cosmological parameters using galaxy redshift surveys, via an application of the Alcock-Paczyński (AP) test. Our physical model of the non-linearly evolved density field, as probed by galaxy surveys, employs Lagrangian perturbation theory (LPT) to connect Gaussian initial conditions to the final density field, followed by a coordinate transformation to obtain the redshift space representation for comparison with data. We have implemented a Hamiltonian Monte Carlo sampler to generate realisations of three-dimensional (3D) primordial and present-day matter fluctuations from a non-Gaussian LPT-Poissonian density posterior given a set of observations. This hierarchical approach encodes a novel AP test, extracting several orders of magnitude more information from the cosmic expansion compared to classical approaches, to infer cosmological parameters and jointly reconstruct the underlying 3D dark matter density field. The novelty of this AP test lies in constraining the comoving-redshift transformation to infer the appropriate cosmology which yields isotropic correlations of the galaxy density field, with the underlying assumption relying purely on the geometrical symmetries of the cosmological principle. Such an AP test does not rely explicitly on modelling the full statistics of the field. We verified in depth via simulations that this renders our test robust to model misspecification. This leads to another crucial advantage, namely that the cosmological parameters exhibit extremely weak dependence on the currently unresolved phenomenon of galaxy bias, thereby circumventing a potentially key limitation. This is consequently among the first methods to extract a large fraction of information from statistics other than that of direct density contrast correlations, without being sensitive to the amplitude of density fluctuations. 
We perform several statistical efficiency and consistency tests on a mock galaxy catalogue, using the SDSS-III survey as template, taking into account the survey geometry and selection effects, to validate the Bayesian inference machinery implemented.
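The sampling machinery can be illustrated with a minimal 1-D Hamiltonian Monte Carlo sketch on a standard normal target, standing in for the paper's far higher-dimensional LPT-Poissonian posterior; the step size and trajectory length below are toy choices.

```python
import numpy as np

def hmc_sample(grad_logp, logp, x0, eps, nleap, nsamp, rng):
    """Minimal 1-D Hamiltonian Monte Carlo with a leapfrog integrator."""
    x, samples = x0, []
    for _ in range(nsamp):
        p = rng.normal()                          # resample momentum
        xn, pn = x, p
        pn += 0.5*eps*grad_logp(xn)               # initial half kick
        for i in range(nleap):
            xn += eps*pn                          # drift
            if i < nleap - 1:
                pn += eps*grad_logp(xn)           # full kick
        pn += 0.5*eps*grad_logp(xn)               # final half kick
        # Metropolis correction on H = -log p(x) + p^2/2
        h_old = -logp(x) + 0.5*p*p
        h_new = -logp(xn) + 0.5*pn*pn
        if rng.uniform() < np.exp(h_old - h_new):
            x = xn
        samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(42)
chain = hmc_sample(grad_logp=lambda x: -x, logp=lambda x: -0.5*x*x,
                   x0=0.0, eps=0.2, nleap=10, nsamp=3000, rng=rng)
```

The same structure, with the gradient of the full LPT-Poissonian log-posterior in place of `-x`, is what lets HMC explore millions of density-field parameters jointly with the cosmology.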


2005 ◽  
Vol 04 (03) ◽  
pp. 867-882 ◽  
Author(s):  
TAKUMI HORI ◽  
HIDEAKI TAKAHASHI ◽  
TOMOSHIGE NITTA

We have developed a novel quantum mechanical/molecular mechanical (QM/MM) code based on real-space grids in order to realize high parallel efficiency. The details of the methodology and its parallel implementation have been presented. We have computed the electronic state of the QM subsystem using Kohn–Sham density functional theory, where the one-electron wave functions have been expressed on real-space grids distributed over a cubic cell. We have performed QM/MM simulations of the peptide hydrolysis in human immunodeficiency virus type-1 aspartyl protease in order to examine the reliability of the present QM/MM approach. The activation energy obtained by the present calculations shows good agreement with the experimental results and with that of another QM/MM method. Finally, we have parallelized the whole code and found that the grid approach can afford high parallel efficiency (~80%) in such a large-scale electronic structure calculation. We conclude that the QM/MM approach utilizing real-space grids is adequate and efficient for the study of enzymatic reactions.
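In its simplest form, the real-space-grid idea reduces to discretizing the Hamiltonian with finite differences and diagonalizing it on the grid. A 1-D harmonic-oscillator toy (not the paper's 3-D Kohn–Sham solver) shows the essence; in atomic units with unit frequency the exact ground-state energy is 0.5.

```python
import numpy as np

# 1-D model Hamiltonian on a real-space grid (atomic units):
#   H = -0.5 d^2/dx^2 + 0.5 x^2, exact ground-state energy E0 = 0.5
n, xmax = 401, 10.0
x = np.linspace(-xmax, xmax, n)
h = x[1] - x[0]

# second-order finite-difference Laplacian: the core of a grid-based code
lap = (np.diag(np.full(n-1, 1.0), -1)
       - 2.0*np.eye(n)
       + np.diag(np.full(n-1, 1.0), 1)) / h**2

H = -0.5*lap + np.diag(0.5*x**2)
energies, states = np.linalg.eigh(H)
E0 = energies[0]
```

In a production grid code the dense diagonalization is replaced by iterative eigensolvers, and the grid is distributed over processors, which is where the high parallel efficiency comes from: the Laplacian stencil needs only nearest-neighbour communication.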

