A&A, Volume 599, March 2017
Article Number: A79
Number of pages: 12
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/201629928
Published online: 02 March 2017
Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey
1 Laboratoire AIM, UMR CEA-CNRS-Paris 7, Irfu, Service d’Astrophysique, CEA Saclay, 91191 Gif-sur-Yvette, France
e-mail: austin.peel@cea.fr
2 McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
3 Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
4 Institut d’Astrophysique de Paris, CNRS UMR 7095 & UPMC, 98bis boulevard Arago, 75014 Paris, France
Received: 18 October 2016
Accepted: 15 December 2016
Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w_0^de. In particular, we study how Camelus, a fast stochastic model for predicting peaks, can be applied to such large surveys. The algorithm avoids the need for time-costly N-body simulations, and its stochastic approach provides full PDF information of observables. Considering peaks with a signal-to-noise ratio ≥ 1, we measure the abundance histogram in a mock shear catalogue of approximately 5000 deg2 using a multiscale mass-map filtering technique. We constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation, a robust likelihood-free inference algorithm. Peak statistics yield a tight but significantly biased constraint in the σ8–Ωm plane, as measured by the width ΔΣ8 of the 1σ contour. We find Σ8 = σ8(Ωm/0.27)^α = 0.77^{+0.06}_{−0.05} with α = 0.75 for a flat ΛCDM model. The strong bias indicates the need to better understand and control the model systematics before applying it to a real survey of this size or larger. We perform a calibration of the model and compare results to those from the two-point correlation functions ξ± measured on the same field. We calibrate the ξ± result as well, since its contours are also biased, although not as severely as for peaks. In this case, we find for peaks Σ8 = 0.76^{+0.02}_{−0.03} with α = 0.65, while for the combined ξ+ and ξ− statistics the values are Σ8 = 0.76^{+0.02}_{−0.01} and α = 0.70. We conclude that the constraining power can therefore be comparable between the two weak-lensing observables in large-field surveys. Furthermore, the tilt in the σ8–Ωm degeneracy direction for peaks with respect to that of ξ± suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w_0^de cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and ξ±.
Key words: gravitational lensing: weak / large-scale structure of Universe / cosmological parameters / methods: statistical
© ESO, 2017
1. Introduction
The observed statistical distribution of matter in the Universe serves as a powerful discriminator of cosmological models. Different relative contributions to the Universe’s mass-energy content produce different expansion histories and different amplitudes of matter clustering. Weak gravitational lensing (WL), through its ability to probe the large-scale structure, has become a primary tool to study the total matter distribution in the Universe, as well as the nature of dark matter and dark energy.
The WL signal consists of tiny coherent distortions of galaxy shapes whose light rays have been bent by gravitational fields of dense matter structures lying along the line of sight. In contrast to the dramatic distortions seen in strong-lensing systems, the images of weakly lensed galaxies are sheared and magnified only at the percent level relative to their original shapes. Weak lensing is therefore an inherently statistical probe that, given a sufficient density of background sources, provides a means to map the projected matter distribution across the sky. Furthermore, such mass maps trace the total matter distribution in an unbiased way, as WL is insensitive to the dynamical relationship between dark matter halos and the luminous galaxies that occupy them.
Numerous analyses have now constrained cosmological parameters using WL data from both ground- and space-based galaxy surveys. Recent studies include the Hubble Space Telescope Cosmic Evolution Survey (COSMOS; Schrabback et al. 2010), the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS; Heymans et al. 2012, 2013; Kilbinger et al. 2013), the Sloan Digital Sky Survey (SDSS; Huff et al. 2014), the Dark Energy Survey (DES; Abbott et al. 2016b,a), and the Kilo Degree Survey (KiDS; Kuijken et al. 2015; Hildebrandt et al. 2017). Future surveys like Euclid (Laureijs et al. 2011) and the Large Synoptic Survey Telescope (LSST; LSST Science Collaboration 2009) will afford unprecedented precision in WL measurements with their large survey areas and advanced optics. Cosmic shear studies planned for missions with a deeper but smaller field of view include the Hyper Suprime-Cam of the Subaru Telescope (HSC; Miyazaki et al. 2012) and the Wide-Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST-AFTA; Spergel et al. 2013).
The basic WL measurements are the shear two-point correlation function (2PCF) and its associated power spectrum. These measures capture the second-order statistics, but as the shear distribution is not merely Gaussian, they neglect the non-Gaussian information encoded by the non-linear formation of structure. Studying higher-order statistics of the shear and convergence fields has therefore also become common. Examples of these include three-point correlation functions (Semboloni et al. 2011; Fu et al. 2014), as well as nth order moments of the convergence field and Minkowski functionals (Petri et al. 2013, 2015). The statistics of peaks, which are local maxima in WL maps, provide another way to access the non-Gaussian part of the signal.
Large signal-to-noise ratio (S/N) peaks trace high-mass regions of the Universe and can often be associated with massive galaxy clusters. High peaks therefore provide a direct and robust probe of the halo mass function. However, the origin of a WL peak does not typically admit of a unique interpretation. Low S/N peaks can arise from projection effects of the large-scale structure, or they can simply be spurious noise fluctuations. Studies aimed at detecting massive galaxy clusters with WL thus focus on high peaks with S/N higher than ~4, whereas cosmological studies include the low peaks, since they also contain significant cosmological information (Yang et al. 2013; Lin et al. 2016).
Jain & Van Waerbeke (2000) initiated the study of WL peaks in their own right as a cosmological probe, using simulations to show that peaks can discriminate models with different total matter density parameters. Since then, many other papers have explored the efficacy of peaks for constraining both standard and nonstandard cosmological models (Marian et al. 2009, 2011, 2012, 2013; Dietrich & Hartlap 2010; Maturi et al. 2010, 2011; Pires et al. 2012; Cardone et al. 2013; Lin & Kilbinger 2015a,b; Lin et al. 2016). Regarding recent surveys, peak analyses have also been used to derive constraints on (σ8,Ωm) with CFHTLenS data (Liu et al. 2015a), CFHT Stripe-82 data (Liu et al. 2015b), and DES Science Verification data (Kacprzak et al. 2016). Euclid will observe approximately 15 000 deg2 of the extragalactic sky, an area about two orders of magnitude larger than CFHTLenS and the currently available DES SV data. It is therefore important to study not only the statistical improvements that will be afforded by such a large survey, but also to anticipate potential challenges regarding systematics and biases in our modelling of observations.
Our goal in this paper is to assess the ability of peak counts, modelled by the fast stochastic algorithm Camelus, to constrain cosmological parameters in a large-area survey. Camelus was introduced in Lin & Kilbinger (2015a) and studied further in the context of parameter constraint strategies and mass-map filtering methods in Lin & Kilbinger (2015b) and Lin et al. (2016), respectively. See also Zorrilla Matilla et al. (2016) for a recent comparison of Camelus predictions to N-body simulations over a broad range of cosmologies. The code uses a forward-model approach to generate lensing catalogues in a way that avoids time-costly N-body simulations and only requires a halo mass function and halo profile as input.
In this work, we apply the Camelus model to mock lensing data of area ~5000 deg2. Implementing a wavelet-based mass-map filtering technique, we compute peak histograms of the mock catalogue as a function of S/N and filtering scale. The multiscale approach allows us to build a peak summary statistic that separates out the cosmological information contained at different scales. Using Camelus with approximate Bayesian computation (ABC) for inference, we derive credible contours in σ8–Ωm space and compute the derived parameter Σ8 = σ8(Ωm/0.27)^α for the mock survey. To compare with probes of Gaussian information, we contrast the contours and the uncertainty in Σ8 from peaks with those of the two components ξ+ and ξ− of the shear 2PCF.
The remainder of the paper is organized as follows. In Sect. 2 we present the basic theory of weak gravitational lensing relevant to this work. In Sect. 3 we describe our method, including our multiscale mass-mapping technique, the simulated galaxy catalogue used as observations, the Camelus model, and finally our parameter inferences using ABC. Parameter constraint contours, as well as the comparison between peaks and ξ±, are presented in Sect. 4. We conclude in Sect. 5.
2. Theoretical background
2.1. Weak gravitational lensing
Throughout this work, we assume that the Universe can be described by a weakly perturbed FLRW model with Newtonian potential Φ. Local deviations from the average matter density ρ̄(t) are characterized by the density contrast

δ(t,θ,w) = [ρ(t,θ,w) − ρ̄(t)] / ρ̄(t), (1)

where coordinates (t,θ,w) represent cosmic time, angular position, and comoving radial distance, respectively. Density perturbations are related to Φ via Poisson’s equation for a pressureless ideal fluid,

∇²Φ = 4πG a² ρ̄ δ, (2)

which can be written more conveniently in terms of cosmological parameters as

∇²Φ = (3 H0² Ωm / 2a) δ, (3)

where H0 is the present Hubble parameter, Ωm is the present matter density, and a(t) is the scale factor of the Universe.
The images of distant galaxies undergo distortions as their light propagates through regions of non-uniform potential on their way to us. The original unlensed positions β of light beams undergo a remapping to the positions θ at which we observe them, which we can quantify by the Jacobian matrix A = ∂β/∂θ of the transformation. It can be shown that to linear order in Φ, A is determined by a line-of-sight integral of second-order transverse derivatives of Φ; see Schneider et al. (1998), for example. This motivates the definition of the lensing potential

ψ(θ,w) = (2/c²) ∫₀^w dw′ [fK(w − w′) / (fK(w) fK(w′))] Φ(fK(w′)θ, w′), (4)

and the decomposition of A in terms of the convergence κ(θ) and shear γ(θ) = γ1(θ) + iγ2(θ):

A = ( (1 − κ − γ1, −γ2), (−γ2, 1 − κ + γ1) ). (5)

In the above, fK(w) is the comoving angular diameter distance

fK(w) = K^(−1/2) sin(K^(1/2) w) for K > 0,  fK(w) = w for K = 0,  fK(w) = |K|^(−1/2) sinh(|K|^(1/2) w) for K < 0, (6)

and K is the curvature of space. Then κ and γ are expressible directly as second-order derivatives of the lensing potential,

κ = (1/2)(∂1∂1 + ∂2∂2) ψ, (7)
γ = (1/2)(∂1∂1 − ∂2∂2) ψ + i ∂1∂2 ψ, (8)

where ∂i denotes the partial derivative with respect to θi. In particular, we can now relate κ directly to the density fluctuations as

κ(θ,w) = (3 H0² Ωm / 2c²) ∫₀^w dw′ [fK(w′) fK(w − w′) / fK(w)] δ(fK(w′)θ, w′) / a(w′), (9)

making the interpretation of convergence as the projected mass density more apparent.
The explicit time dependence of Φ in Eqs. (4) and (9) has been suppressed, since the integral over w′ is understood as being performed along our past light cone. Furthermore, we restrict our attention to the weak-lensing regime of small deviations (i.e. where | κ |, | γ | ≪ 1), and the Born approximation permits us to integrate along the unperturbed light path.
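The following is a minimal numerical sketch of Eq. (9) for a flat universe (fK(w) = w), assuming the density contrast is sampled along one line of sight; the function name, units, and toy inputs are illustrative and not taken from the paper.

```python
import numpy as np

# Sketch only: integrate Eq. (9) along one line of sight in a flat universe,
# with delta sampled at comoving distances w and sources at distance w_s.
def convergence_los(delta, w, a, w_s, H0=70.0, Omega_m=0.25, c=299792.458):
    """Convergence for sources at w_s (H0 in km/s/Mpc, distances in Mpc)."""
    prefactor = 1.5 * (H0 / c) ** 2 * Omega_m
    lens_eff = w * (w_s - w) / w_s               # fK(w') fK(w_s - w') / fK(w_s)
    integrand = np.where(w < w_s, lens_eff * delta / a, 0.0)
    return prefactor * np.trapz(integrand, w)

# toy usage: a constant overdensity delta = 0.1 between observer and sources
w = np.linspace(1.0, 3000.0, 500)
a = 1.0 / (1.0 + w / 3000.0)                     # crude stand-in for a(w)
print(convergence_los(np.full_like(w, 0.1), w, a, w_s=3000.0))
```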
Further details of weak-lensing theory and formalism can be found in the reviews of Bartelmann & Schneider (2001), Hoekstra & Jain (2008), and Kilbinger (2015), for example.
2.2. Mass mapping
Mass maps refer to maps of the convergence κ, a quantity that directly represents the matter distribution projected along the line of sight. This is readily seen in the integral of Eq. (9). As they are scalar fields, mass maps are more convenient for many applications compared to the complex shear γ, as in, for example, cross-correlating with other scalar fields like the galaxy distribution or cosmic microwave background (CMB) temperature. In this work, we use mass maps to facilitate the straightforward identification of weak-lensing peaks.
In practice, we cannot directly measure κ or γ from a galaxy survey. What we observe instead are galaxy ellipticities, which are an additive combination of intrinsic ellipticity and the shear due to lensing. Assuming that the source galaxies are not intrinsically aligned, we can therefore average the observed ellipticities over many sources in a small region of the sky to estimate the shear signal γ. This approximation works in the weak-lensing regime, since ϵobs ≈ ϵint + g, where g = γ/ (1−κ) is the reduced shear. The average of many observed ellipticities is thus an accurate measure of g ≈ γ.
The inversion method of Kaiser and Squires (Kaiser & Squires 1993) gives a parameter-free prescription for computing κ from γ via the Fourier transforms of Eqs. (7) and (8). The equations become

κ̂(ℓ) = D̂*(ℓ) γ̂(ℓ) for ℓ ≠ 0, (10)

where κ̂ and γ̂ are the Fourier transforms of κ and γ, ℓ is the Fourier counterpart to angular position θ, the asterisk denotes complex conjugation, and

D̂(ℓ) = (ℓ1² − ℓ2² + 2iℓ1ℓ2) / |ℓ|². (11)

Convergence can therefore be determined directly from measurements of galaxy shapes in the weak-lensing regime, up to an additive constant. It is well known that the Kaiser-Squires inversion creates undesirable artefacts at the boundaries of finite fields. We address this issue by excluding a border of 8 pixels (= 4 arcmin) around each 5 × 5 deg2 convergence map generated throughout our analysis.
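A minimal flat-sky sketch of the Kaiser-Squires inversion of Eqs. (10)-(11) on a periodic grid is given below; the boundary effects discussed above are ignored here, and all names are illustrative.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Return a convergence map from the two shear component maps (sketch)."""
    ny, nx = gamma1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                                  # avoid division by zero at l = 0
    gamma_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    kernel = (l1**2 - l2**2 + 2j * l1 * l2) / l_sq    # D_hat(l) of Eq. (11)
    kappa_hat = np.conj(kernel) * gamma_hat           # Eq. (10)
    kappa_hat[0, 0] = 0.0                             # the mean is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))
```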
2.3. Shear two-point correlation functions
A standard approach to constraining cosmology with weak-lensing analyses is to use N-point correlation functions of the shear field. Primary among these are the two-point functions ξ+(θ) and ξ−(θ). We present a brief overview of the ξ± statistics, since we later compare their parameter constraint results to those of peak counts.
The shear γ at angular position θ can be decomposed into a tangential component γt and cross component γ× with respect to a point at position θ0. If the separation vector θ−θ0 has polar angle ϕ, then

γt = −Re[γ e^(−2iϕ)],   γ× = −Im[γ e^(−2iϕ)]. (12)

The minus sign ensures that γt > 0 for tangential alignment with respect to the position θ0, and γt < 0 for radial alignment.
The shear 2PCF ξ± are defined as combinations of γt and γ× (Miralda-Escude 1991),

ξ±(θ) = ⟨γt γt⟩(θ) ± ⟨γ× γ×⟩(θ), (13)

as a function of angular scale θ. Following Schneider et al. (2002), we can estimate ξ± from pairs of measured galaxy ellipticities as

ξ̂±(θ) = Σ_ij wi wj (ϵt,i ϵt,j ± ϵ×,i ϵ×,j) / Σ_ij wi wj. (14)

Here ϵt and ϵ× are defined analogously to γt and γ× in Eq. (12). Weights are denoted by w, and the indices i,j refer to galaxies at positions θi and θj. The summation is carried out over all galaxy pairs with |θi−θj| lying within an angular bin around θ.
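A brute-force O(N²) sketch of the estimator in Eq. (14) with unit weights follows; in practice a tree code such as Athena (Sect. 4.2) is required, and the flat-sky small-angle treatment of separations and all names here are illustrative assumptions.

```python
import numpy as np

def xi_pm(pos, e1, e2, bins):
    """pos: (N, 2) angular positions; e1, e2: ellipticity components; bins: edges."""
    nb = len(bins) - 1
    xi_p, xi_m, npairs = np.zeros(nb), np.zeros(nb), np.zeros(nb)
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        theta = np.hypot(d[:, 0], d[:, 1])
        phi = np.arctan2(d[:, 1], d[:, 0])
        cos2, sin2 = np.cos(2 * phi), np.sin(2 * phi)
        # tangential/cross components of both galaxies w.r.t. the separation vector (Eq. 12)
        et_i = -(e1[i] * cos2 + e2[i] * sin2)
        ex_i = e1[i] * sin2 - e2[i] * cos2
        et_j = -(e1[i + 1:] * cos2 + e2[i + 1:] * sin2)
        ex_j = e1[i + 1:] * sin2 - e2[i + 1:] * cos2
        idx = np.digitize(theta, bins) - 1
        ok = (idx >= 0) & (idx < nb)
        np.add.at(xi_p, idx[ok], (et_i * et_j + ex_i * ex_j)[ok])
        np.add.at(xi_m, idx[ok], (et_i * et_j - ex_i * ex_j)[ok])
        np.add.at(npairs, idx[ok], 1)
    npairs = np.maximum(npairs, 1)                    # avoid division by zero in empty bins
    return xi_p / npairs, xi_m / npairs
```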
3. Method
Our goal is to study the constraints on cosmological parameters attainable from peak count statistics in a wide-field survey like Euclid. We describe in the following sections the simulated Euclid-like galaxy catalogue we have chosen for analysis, as well as the algorithm we use to simulate stochastic lensing catalogues as a function of cosmological parameters. We then describe the multiscale mass-mapping technique we use to construct peak abundance data vectors for both the mock catalogue and for those simulated by Camelus. Finally, we present the method of approximate Bayesian computation, which we use for parameter inference.
3.1. Mock observations
The Marenostrum Institut de Ciències de l’Espai Grand Challenge simulation (Fosalba et al. 2008, 2015a,b; Crocce et al. 2010, 2015; Carretero et al. 2015; Hoffmann et al. 2015) is a set of large-volume N-body runs carried out at the Barcelona Supercomputing Center using the GADGET-2 N-body code (Springel 2005). It contains approximately 70 billion dark matter particles in a simulation box of comoving side length ~ 3 h-1 Gpc, giving high mass resolution in a large and statistically independent (i.e. non-repeated) volume. Using a technique combining halo occupation distribution (HOD) and halo abundance matching (HAM) to populate dark matter halos, a mock galaxy catalogue was generated with a light cone extending out to z = 1.4. The MICECATv2.0 catalogue1 (MICE hereafter) we have used in this work covers an area of about 5000 deg2, a contiguous octant of the sky lying within 0° ≤ RA ≤ 90° and 0° ≤ Dec ≤ 90°, which is approximately one-third the size of what is expected for Euclid’s wide-field survey. The catalogue contains lensing information for nearly 500 million magnitude-limited galaxies, with completeness depending on redshift and position. In the least complete areas, for example, the catalogue is complete to observation band H ≈ 24 (23) up to z ≈ 0.45 (1.4). In this work, we only make use of the galaxy positions, observed ellipticities, and true redshifts.
The MICE cosmology is a flat ΛCDM model with parameters Ωm = 0.25, ΩΛ = 0.75, Ωb = 0.044, σ8 = 0.8, h = 0.7, and ns = 0.95. We find that its galaxy redshift distribution is well approximated by the three-parameter form of Eq. (15), with best-fit values p = 0.88, q = 1.40, and z0 = 0.78 obtained by chi-square minimization. The normalization factor is determined by numerical integration. Figure 1 shows the agreement between n(z) and the histogram of true redshifts from MICE. We use this functional form as input to the Camelus algorithm in order to generate its simulated catalogues. With real Euclid data, we would instead use the estimated photo-z distribution, as true galaxy redshifts are not known.
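As an illustration of this kind of fit, the sketch below fits a three-parameter redshift distribution to a normalized histogram of catalogue redshifts. The Smail-type form n(z) ∝ z^p exp[−(z/z0)^q] is an assumption standing in for Eq. (15), whose exact expression is not reproduced in this text, and scipy's least-squares fit stands in for the chi-square minimization; the placeholder data are random.

```python
import numpy as np
from scipy.optimize import curve_fit

def n_of_z(z, p, q, z0):
    # assumed Smail-type shape, normalized numerically as described in the text
    shape = z**p * np.exp(-(z / z0) ** q)
    return shape / np.trapz(shape, z)

# z_true: placeholder for the catalogue redshifts
z_true = np.random.default_rng(0).gamma(shape=3.0, scale=0.2, size=100_000)
counts, edges = np.histogram(z_true, bins=60, range=(0.0, 1.4), density=True)
z_mid = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(n_of_z, z_mid, counts, p0=(0.9, 1.4, 0.8))
print("best-fit (p, q, z0):", popt)
```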
Fig. 1. Histogram of the true redshift distribution of the MICE mock catalogue. The solid (blue) line is the best-fit n(z) in the form of Eq. (15) with parameters (p,q,z0) = (0.88,1.40,0.78). The normalization is computed numerically.
We assign intrinsic ellipticities to the MICE galaxies so that the distributions of the two components ϵ1 and ϵ2 match those observed in the COSMOS survey. These closely approximate Gaussian distributions with zero mean and a standard deviation of 0.3. We generate observed ellipticities, which are used for mass mapping, by the relations (Seitz & Schneider 1997)

ϵobs = (ϵint + g) / (1 + g* ϵint) for |g| ≤ 1,   ϵobs = (1 + g ϵint*) / (ϵint* + g*) for |g| > 1, (16)

where ϵobs and ϵint are the observed and intrinsic ellipticities, respectively, and the asterisk represents complex conjugation. Shear γ and convergence κ values that comprise g are provided in the MICE catalogue for each galaxy line of sight. Given that |g| > 1 for only a small fraction of galaxies, the resulting standard deviations of the observed ellipticity components remain at 0.3.
The Camelus algorithm, which we describe in Sect. 3.2, produces shear catalogues of size 25 deg2 in Cartesian space. For consistency, we therefore also extract and analyze 25 deg2 patches from the full MICE catalogue. We aim for as many independent patches as possible to ensure the best statistical constraining power of the peak count information. To do this, we take the straightforward approach of dividing the MICE octant into strips along lines of constant declination (Dec) such that each strip spans at least 5 deg in Dec. We then divide each strip into rectangles along lines of constant right ascension (RA) such that after a gnomonic (flat-sky) projection, we achieve the maximum number of 25 deg2 squares for the strip.
The transformation equations are

x = cos φ sin(λ − λ0) / [sin φ0 sin φ + cos φ0 cos φ cos(λ − λ0)], (17)

and

y = [cos φ0 sin φ − sin φ0 cos φ cos(λ − λ0)] / [sin φ0 sin φ + cos φ0 cos φ cos(λ − λ0)], (18)

where (λ,φ) are the (RA, Dec) coordinates to be projected, (λ0,φ0) is the projection center, and (x,y) are the tangent plane coordinates. The non-conformal gnomonic projection has the useful property of mapping great circles to straight lines, but it introduces shape and distance distortions in the tangent plane radially away from the origin. For the small size of the patches we consider, however, these effects are at the sub-percent level. For example, the distortion error in both the area and the maximum angular separation within the 25 deg2 fields is < 0.3%.
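A short sketch of the projection of Eqs. (17)-(18) is given below; angles are in radians, and the function name and example values are illustrative.

```python
import numpy as np

def gnomonic(lam, phi, lam0, phi0):
    """Gnomonic projection of (RA, Dec) = (lam, phi) about centre (lam0, phi0)."""
    cos_c = np.sin(phi0) * np.sin(phi) + np.cos(phi0) * np.cos(phi) * np.cos(lam - lam0)
    x = np.cos(phi) * np.sin(lam - lam0) / cos_c
    y = (np.cos(phi0) * np.sin(phi) - np.sin(phi0) * np.cos(phi) * np.cos(lam - lam0)) / cos_c
    return x, y

# e.g. a point inside the patch of Fig. 2, centred on (RA, Dec) = (40.0, 57.9) deg
lam, phi = np.radians([44.0, 58.0])
x, y = gnomonic(lam, phi, *np.radians([40.0, 57.9]))
```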
Because of the spherical geometry, the number of usable patches within each strip varies significantly with declination. The strip with its edge at the equator contains 17 patches, and the strip nearest the north pole contains only one.
Figure 2 shows an example of the extraction and projection geometry for a patch centred on (RA,Dec) = (40.0,57.9) deg. In the upper panel, the shaded rectangular region is the area in RA/Dec space before projection. This area is projected about the origin, marked by a red X, into the region enclosed by the dashed line in the lower panel. The inner tan square in the tangent space represents the area we use for mass-mapping for this patch.
This cutting is clearly not optimal in the sense that some areas are ultimately excluded from our analysis, namely those between the dashed and solid lines of the lower panel. All of the galaxies in the shaded RA/Dec rectangle end up inside the dashed region, but only those that fall in the tan square are used. Despite the simple approach, we still achieve 186 × 25 = 4650 deg2 of total effective area for our analysis. This should be sufficient to guide our intuitions about the constraining power of peak statistics from a survey like Euclid.
Fig. 2. Example of our patch extraction and projection scheme for MICE. The upper panel shows the original extent of the patch, which lies within 34.9° ≤ RA ≤ 45.0°, 55.3° ≤ Dec ≤ 60.4°. This area is projected into the region enclosed by the dashed line in the lower panel of the tangent space. The bounded inner square represents the 25 deg2 area for which we generate the multiscale maps and compute peak histograms.
3.2. Modelling peak counts with fast simulations
In this section we briefly describe the Camelus algorithm that we use to generate shear catalogues from dark matter halo simulations. The code has been developed and tested in Lin & Kilbinger (2015a,b) and Lin et al. (2016), and the software is available on GitHub2. We refer to those papers for a more complete description.
The first step is to sample halo masses from a mass function, which is taken to have the form (Jenkins et al. 2001)

f(σ) = 0.315 exp(−|ln σ^(−1) + 0.61|^3.8). (19)

The parameter σ(z,M), which serves as a proxy for mass, is defined as the standard deviation of the linear density field, which has been smoothed with a spherical top-hat filter of radius R such that M = (4π/3) ρ̄ R³. Halos are selected with masses within the range 5 × 10^12 h^(−1) M⊙ ≤ M ≤ 10^17 h^(−1) M⊙.
Next, the sampled halos are distributed randomly in an observation field of 5 × 5 deg2. The radial mass distribution of each halo is taken to be that of a truncated Navarro-Frenk-White (NFW) profile (Navarro et al. 1996, 1997)

ρ(r) = ρs / [(r/rs)^αNFW (1 + r/rs)^(3−αNFW)] Θ(rvir − r), (20)

where ρs is the characteristic overdensity related to the halo’s concentration, rs is the scale radius, rvir is the virial radius, αNFW is the inner slope, and Θ is the Heaviside step function. The scale radius can be written as rs = rvir/c, the ratio of the physical virial radius to the concentration parameter. Following Takada & Jain (2002), we assume a parameterized form of c as a function of redshift and mass,

c(M,z) = c0 (1 + z)^(−1) (M/M⋆)^(−β). (21)

Here, the pivot mass M⋆ is defined such that σ(M⋆) equals the critical overdensity for spherical collapse at z = 0. We fix the parameters as (c0,β) = (9,0.13), as we find these to give good agreement with the MICE data in terms of peak count histograms across different filtering scales. These, as well as other parameter settings adopted in this work, are summarized in Table 1.
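The two ingredients above translate into the short sketch below, using the concentration-mass relation of Eq. (21) with (c0, β) = (9, 0.13) and the truncated profile of Eq. (20); the inner slope defaults to the standard NFW value of 1, and M_star, rho_s, and the other arguments are left symbolic.

```python
import numpy as np

def concentration(M, z, M_star, c0=9.0, beta=0.13):
    """Concentration-mass relation of Eq. (21)."""
    return c0 / (1.0 + z) * (M / M_star) ** (-beta)

def nfw_density(r, rho_s, r_s, r_vir, alpha_nfw=1.0):
    """Truncated NFW profile of Eq. (20); alpha_nfw = 1 is an assumed default."""
    profile = rho_s / ((r / r_s) ** alpha_nfw * (1.0 + r / r_s) ** (3.0 - alpha_nfw))
    return np.where(r <= r_vir, profile, 0.0)        # truncation at the virial radius
```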
Table 1. Parameter settings for Camelus.
The final step is to generate a simulated catalogue of reduced shear values for source galaxies in the field. Sources are distributed randomly according to Eq. (15) and assigned intrinsic ellipticities drawn from a Gaussian distribution of fixed dispersion. Orientations are random, meaning that we neglect intrinsic alignment in this work. The projected lensing quantities are then calculated, which can be done analytically for an NFW profile; see Takada & Jain (2003a,b), for example. Intrinsic source ellipticities are combined with the computed reduced shear g to give the final simulated lensing catalogue. We make S/N maps from this catalogue and compute peak count statistics following the procedure described in Sect. 3.3.
We note here some important assumptions implicit in the Camelus approach to simulating weak-lensing maps. The first is that peak number counts arise primarily due to bound matter, and not from the diffuse matter that makes up, for example, cosmic filaments. The second is that the spatial correlation of dark matter halos does not have a significant impact on peaks. The results of Lin & Kilbinger (2015a) support the reasonableness of these assumptions for relatively small surveys. The present work, however, reveals limitations of Camelus in the context of larger survey areas; see Fig. 4 below and the parameter constraint results in Sect. 4.
Fig. 3. Starlet decomposition of an example noiseless convergence map κ(θ). The images are 150 × 150 pixels of size 0.5 arcmin. The original map is shown along with the wavelet coefficient maps {wj} up to jmax = 4. The final smoothed map c5, i.e., the low-pass filtered version of κ, is not shown. The transform clearly picks out features of κ at successively larger scales as j increases.
3.3. Multiscale wavelet filtering
Weak-lensing maps are dominated by galaxy shape noise and must be filtered to access their signal. Numerous schemes have been employed to de-noise WL maps, the most common of which is filtering with a Gaussian kernel. For the reasons described below, we choose to filter the mass maps in our analysis using the isotropic undecimated wavelet, or starlet, transform (Starck et al. 2007).
The starlet has many properties that make it useful in astrophysical image processing. First, isotropy makes it well-suited to extract features from astrophysical data containing objects that are roughly round, such as stars, galaxies, and clusters. Next, it is a multiscale transform in which the information contained at different scales in an image is separated out simultaneously. The filter functions associated with the starlet transform are localized in real space, that is, they go to zero within a finite radius. The first wavelet function acts as a high-pass filter, while the remaining wavelets act as band-pass filters for their respective scales, since they are localized in Fourier space as well. Finally, the wavelet functions are compensated, meaning they integrate to zero over their domains. This is beneficial, as it was shown by Lin et al. (2016) that in the context of peak-count analyses, compensated filters are better at capturing cosmological information than non-compensated filters like the Gaussian kernel. We note that the wavelet transform of a convergence map at a particular scale is also formally equivalent to aperture-mass filtering by a corresponding compensated filter (Leonard et al. 2012).
Fig. 4. Comparison of peak abundance histograms between Camelus (red circles) and MICE (blue line) for 4650 deg2. Error bars for Camelus represent the standard deviation of 500 realizations for the associated S/N bin. There is good agreement between the mock observations and the Camelus predictions across all wavelet filtering scales, especially for ν lower than ~5. The shaded regions represent bins where the relative difference between MICE and Camelus is larger than 50%. We apply a conservative cut and omit these higher-bias bins in our analysis.
The starlet transform of an N × N image I amounts to successive convolutions of the image with a set of discrete filters corresponding to different resolution scales j = 1,...,jmax. The result is a set of J = jmax + 1 maps {w1,...,wjmax,cJ}, each of size N × N, where the “detail” maps {wj} represent I filtered at a scale of 2^j pixels. The final map cJ represents a smoothed version of I, and the original image is exactly recoverable from the decomposition as I = cJ + Σ_{j=1}^{jmax} wj. Further details, including explicit expressions for the discrete filter bank associated with the starlet, can be found in Starck et al. (2007).
As an illustration, we show the starlet transform of a convergence map of 150 × 150 pixels in Fig. 3. The original image is a noiseless map κ(θ) with pixels of size 0.5 arcmin, and the decomposition is shown for jmax = 4 wavelet scales increasing to the right. Brighter pixels indicate higher values. The final smoothed map c5, which is the low-pass filtered version of κ, is not shown. Progressively lower frequency features are picked out from κ as j increases.
After filtering, the noise level is different at different scales, and these levels are related by the ratio of the filter norms. If we denote the wavelet convolution kernel at scale j by Wj, then the noise at this scale is

σj = σref ||Wj||2 / ||Wref||2, (22)

where ||·||2 denotes the ℓ2 norm, and σref is the known noise level at a particular reference scale.
Instead of using the analytic expression corresponding to the input galaxy ellipticity dispersion, we estimate σref from the data as follows. Let κj denote the map filtered at scale j (analogous to wj above). At the finest wavelet scale, j = 1, the noise is dominant, and we expect the wavelet coefficients to follow the noise distribution. We therefore take σref as the dispersion of κ1, the smallest resolution of the convergence map decomposition.
The S/N at scale j of a given coefficient at location θ can then be written as

νj(θ) = κj(θ) / σj, (23)

where we only consider positive values of κj. For all maps used in our cosmological analysis, we bin the galaxies into pixels of size 0.5 arcmin, resulting in maps that are 600 × 600 pixels. We transform κ with jmax = 5 wavelet scales and identify peaks in these maps as local maxima of ν, where a peak is simply a pixel with a higher ν value than its eight neighbouring pixels. Since our filtering smooths the maps on scales larger than the pixel size, and we also treat the model and data identically, identifying peaks this way should be robust in terms of the final cosmological constraints.
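A minimal sketch of the S/N construction (Eqs. 22-23) and of the eight-neighbour peak selection is shown below; kappa_scales stands for the list of wavelet-filtered maps wj and wavelet_norms for their filter ℓ2 norms, with all names illustrative.

```python
import numpy as np

def snr_maps(kappa_scales, wavelet_norms):
    """Scale the filtered maps by their per-scale noise levels (Eqs. 22-23)."""
    sigma_ref = np.std(kappa_scales[0])               # noise estimated at the finest scale
    sigmas = sigma_ref * np.asarray(wavelet_norms) / wavelet_norms[0]
    return [k / s for k, s in zip(kappa_scales, sigmas)]

def find_peaks(nu):
    """Return S/N values of pixels larger than all eight neighbours."""
    c = nu[1:-1, 1:-1]
    is_peak = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_peak &= c > nu[1 + dy:nu.shape[0] - 1 + dy, 1 + dx:nu.shape[1] - 1 + dx]
    return c[is_peak]
```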
3.4. Choice of data vector
To compare the predictions of Camelus to MICE, and ultimately to constrain parameters, we need a summary statistic that encapsulates the peak information contained at different wavelet scales. For this, we choose peaks of ν ≥ 1 with bin boundaries defined by [1.0,1.2,...,4.8,5.0] ∪ [5.3,5.6,5.9, + ∞). That is, the bin spacing is Δν = 0.2 for ν ∈ [1.0,5.0] and Δν = 0.3 for ν ∈ [5.0,5.9].
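The binning above can be written compactly as follows; peak_snr is a random placeholder for the peak list of one wavelet scale.

```python
import numpy as np

# spacing 0.2 for nu in [1, 5], 0.3 for [5, 5.9], plus a final open-ended bin
edges = np.concatenate([np.linspace(1.0, 5.0, 21), [5.3, 5.6, 5.9, np.inf]])
peak_snr = 1.0 + np.random.default_rng(1).exponential(1.0, size=2000)   # placeholder peaks
hist, _ = np.histogram(peak_snr, bins=edges)
# the full data vector concatenates one such histogram per wavelet scale (see below)
```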
Lin et al. (2016) have shown that keeping the multiscale peak histograms separate, rather than combining them into a single data vector, yields tighter constraints on cosmological parameters. Following this result, we adopt a summary statistic that is the concatenation of peak-count histograms at five consecutive wavelet scales. The scales are arranged in order of decreasing resolution, analogous to the four scales in the example of Fig. 3.
In Fig. 4 we show a comparison of the MICE peak histograms at each scale with the Camelus predictions for the 4650 deg2 field. MICE results are shown as solid blue lines, Camelus as red circles. The data points for both align with the left edges of their associated S/N bins. The Camelus data were obtained by averaging 500 realizations of the code with the MICE cosmology as input, and error bars represent the standard deviations of these runs.
The plots reveal overall good agreement between the mock observations and the Camelus predictions across all scales, especially for ν lower than ~5. The highest S/N bins exhibit the largest bias at each scale, and the shaded regions indicate bins where the relative difference between MICE and the Camelus prediction exceeds 50%. Scales 2 and 5, corresponding to angular filtering scales of 2 and 16 arcmin, respectively, provide the best agreement in the high S/N range. Furthermore, Camelus predicts systematically fewer peaks than MICE at the first two scales, while the reverse is true for scales 3 and 4. This suggests that there could be an optimal filtering scale lying between scales 2 and 3; however, Camelus underpredicts the MICE peaks again at scale 5 over most of the range, making the choice of an optimal scale ambiguous.
There are numerous competing factors that contribute to the difference between Camelus peak predictions and MICE. The first is related to differences in the halo mass functions between the model and the data. Halo sampling by Camelus agrees well with the Jenkins model it is based on (Lin & Kilbinger 2015a). On the other hand, the Jenkins model underpredicts the measured halo abundance of the MICE simulation substantially for M> 1014h-1M⊙ (Crocce et al. 2010, 2015). The proportion of high-mass halos in a given field area is therefore expected to be smaller for Camelus than for MICE. Furthermore, as most high S/N peaks are due to single massive halos (Yang et al. 2011), the peak counts in the high S/N bins should be larger for MICE than for Camelus. We see in Fig. 4 that this is only true for the first two wavelet scales.
Another reason for the differences is the lack of halo clustering in the Camelus model. Lin & Kilbinger (2015a) showed that the effect of randomizing the angular positions of halos is to reduce the peak count number density by between 10% and 50%. The reason is that decorrelating halos leads to less overlap between them on the sky and therefore to a decrease in the number of high peaks. This effect is offset, however, by the replacement of N-body halos by spherical NFW profiles, as well as ignoring the contribution of unbound matter to the lensing signal. A third consideration is that the S/N of a halo’s WL signal will be maximized when the filter size approximately matches that of the structure, which depends on its specific parameters such as mass, concentration, and redshift.
Interpreting the precise nature of the differences between MICE and Camelus in Fig. 4 is therefore difficult, and we leave this question for a follow-up study. In any case, we seek to minimize the effect of the high-bias shaded bins on our cosmological analysis, and so we exclude them from the peak abundance summary statistic. This gives a data vector for our ABC analysis that contains 93 total bins. This also has the effect of ensuring that the included bins all contain at least 10 peaks (horizontal dashed line), which is desirable for statistical purposes.
3.5. Parameter inference with approximate Bayesian computation
Approximate Bayesian computation (ABC) is an approach to constraining model parameters that avoids the evaluation of a likelihood function. With ABC in general, statistics of the observed data set are compared with the corresponding statistics derived from simulations assumed to model the process that generated the observations. For our purpose, the observed data set is the MICE catalogue, and the simulations are generated by Camelus. Parameter space is then probed by accept-reject sampling, giving a fast and accurate estimate of the true parameter posterior distributions.
Lin & Kilbinger (2015b) and Lin et al. (2016) showed that ABC is an efficient and successful parameter inference strategy for WL peak count analyses. The method has also been used recently in several other astrophysical and cosmological applications (Cameron & Pettitt 2012; Weyant et al. 2013; Robin et al. 2014; Killedar et al. 2015; Ishida et al. 2015; Akeret et al. 2015). We constrain the parameter set (Ωm, σ8, w_0^de) in this work, where w_0^de is the equation of state parameter of dark energy; w_0^de = −1 corresponds to a pure cosmological constant.
We implement a population Monte Carlo (PMC) ABC algorithm to iteratively converge on the posterior distribution of parameters. We refer to algorithm 1 and the surrounding text in Lin & Kilbinger (2015b) for further details of PMC ABC as we implement it here. The algorithm proceeds by the following steps; a simplified code sketch is given after the list.
- Draw an initial set of samples (called particles) from the prior distributions of the parameters. We assume flat priors of [0.1,0.9] for Ωm, [0.3,1.6] for σ8, and [−1.8,0] for w_0^de.
- For each particle, generate a data vector from one simulation of Camelus with its parameters set to those of the particle’s location in parameter space.
- Compute the distance, defined as

  Δ(x, xobs) = [(x − xobs)^T C⁻¹ (x − xobs)]^(1/2), (24)

  between the proposed parameter set and the observed MICE data vector. Here, x is the model prediction data vector, xobs is the MICE data vector, and C is the covariance matrix. We note that because of the bias found from modelling (see Fig. 6), we also perform constraints with the calibrated data vector. In this case, we add to the MICE data vector the difference between it and the model prediction computed with the MICE input parameters. See further details in Sect. 4.1.
- Discard the particles whose distance from MICE is larger than a prescribed tolerance ϵ.
- Reduce ϵ and iterate the process until the particle system representing the posterior distribution converges.
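The following is a highly simplified sketch of these steps: an accept-reject population with a shrinking tolerance. The full PMC scheme of Lin & Kilbinger (2015b) additionally weights and perturbs particles with an adaptive kernel; here `simulate` stands in for one Camelus run returning a peak data vector, and all names and the fixed proposal width are illustrative.

```python
import numpy as np

def abc(x_obs, inv_cov, simulate, priors, n_particles=800, n_iter=12):
    rng = np.random.default_rng()
    draw = lambda: np.array([rng.uniform(lo, hi) for lo, hi in priors])
    particles = [draw() for _ in range(n_particles)]
    eps = np.inf
    for _ in range(n_iter):
        accepted, dists = [], []
        while len(accepted) < n_particles:
            theta = particles[rng.integers(n_particles)] + rng.normal(0, 0.02, len(priors))
            diff = simulate(theta) - x_obs
            d = np.sqrt(diff @ inv_cov @ diff)        # distance of Eq. (24)
            if d <= eps:
                accepted.append(theta)
                dists.append(d)
        eps = np.percentile(dists, 75)                # shrink the tolerance each iteration
        particles = accepted
    return np.array(particles)

# flat priors from the list above: Omega_m, sigma_8, w_0^de
priors = [(0.1, 0.9), (0.3, 1.6), (-1.8, 0.0)]
```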
Computing the distance requires the covariance matrix for the peak abundance data vector, which we estimate from 500 Camelus runs. We assume that C does not vary with cosmology and estimate it under the MICE input cosmology. We compute an unbiased estimator of the inverse covariance, Ĉ⁻¹, by applying the correction factor (n−p−2)/(n−1) = 0.81 to the inverse of the sample covariance matrix, where n is the number of realizations, and p the number of bins (Hartlap et al. 2007). Sellentin & Heavens (2016) have shown that although this scaling indeed de-biases C⁻¹, retaining a Gaussian likelihood is not correct. However, our high n value ensures that the Gaussian form is a good approximation to what should properly be a modified multivariate t-distribution.
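A compact sketch of the debiased precision matrix (Hartlap et al. 2007) from the n = 500 simulated data vectors follows; `sims` is an (n, p) placeholder array.

```python
import numpy as np

def debiased_inverse_covariance(sims):
    """Hartlap-corrected inverse of the sample covariance of (n, p) simulations."""
    n, p = sims.shape
    cov = np.cov(sims, rowvar=False)
    return (n - p - 2) / (n - 1) * np.linalg.inv(cov)   # factor = 0.81 for n=500, p=93
```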
The associated correlation coefficients for our 93 S/N bins are shown in Fig. 5. Blocks delineated by dotted lines represent the correlation between the two corresponding wavelet scales. We see that S/N bins are not strongly correlated, and neither are different scales, reflecting the efficient multiscale separation of information by the starlet filter.
Fig. 5. Correlation coefficients for our multiscale peak abundance data vector, computed from 500 Camelus runs with the MICE input cosmology.
4. Results
We present in this section the cosmological parameter constraints attained from peak counts modelled by Camelus with ABC on the MICE field. We compare results with those of the 2PCF statistics ξ±.
4.1. Parameter constraints from peak counts
Fig. 6. Parameter constraints in the σ8–Ωm plane from peaks as modelled by Camelus. The 1σ and 2σ contours represent the 68.3% and 95.4% most probable regions, respectively, after marginalizing over w_0^de.
Fig. 7. Credible parameter contours from the calibrated peak counts data vector for the three 2D combinations of (Ωm, σ8, w_0^de).
The results of an ABC analysis are not point estimates of maximum likelihood, but rather credible regions of parameter space. We show the 1σ and 2σ credible contours in the σ8–Ωm plane after 12 iterations of ABC in Fig. 6. The areas within these contours represent the 68.3% and 95.4% most probable regions, respectively, after marginalizing over w_0^de. With the PMC iteration scheme, ABC converges asymptotically to the true posterior. In practice, the contours stabilize quickly, meaning they do not continue to shrink appreciably after only a few iterations. In this case, we stop after iteration 12, since the size and width of the contours are not significantly different compared to iteration 11.
Kernel density estimation (KDE) was used on the final ABC particle system to derive the contours. The expression for the multivariate KDE posterior is

p̂(π) = (1/N) Σ_{k=1}^{N} WH(π − xk), (25)

where N = 800 is the number of particles, xk is the kth sample, and WH is the multivariate normal kernel with bandwidth H satisfying Silverman’s rule (Silverman 1986).
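A minimal sketch of this KDE step is shown below; scipy's Gaussian kernel with its "silverman" bandwidth rule stands in for the multivariate bandwidth H of Eq. (25), and the particle array is a random placeholder.

```python
import numpy as np
from scipy.stats import gaussian_kde

# placeholder (N, 3) particle system of (Omega_m, sigma_8, w_0^de) samples
particles = np.random.default_rng(2).normal([0.25, 0.8, -1.0], 0.05, size=(800, 3))
# marginalizing over w_0^de amounts to dropping that column before the 2D KDE
kde_2d = gaussian_kde(particles[:, :2].T, bw_method="silverman")
om, s8 = np.meshgrid(np.linspace(0.1, 0.5, 200), np.linspace(0.6, 1.0, 200))
posterior = kde_2d(np.vstack([om.ravel(), s8.ravel()])).reshape(om.shape)
```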
It is well known that WL alone does not constrain σ8 and Ωm well separately, owing to the strong degeneracy between the parameters, and so it should be combined with other cosmological probes with orthogonal contours. What we look at, therefore, is the thickness of the contours transverse to the degeneracy direction. This is frequently expressed by the derived quantity

Σ8 = σ8 (Ωm/0.27)^α, (26)

along with ΔΣ8, the 1σ uncertainty. The parameter α represents the best-fit slope in log space. We find Σ8 = 0.77^{+0.06}_{−0.05} with α = 0.75, giving a 1σ width of ΔΣ8 = 0.11.
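One possible way to extract α and ΔΣ8 from posterior samples is sketched below: fit the degeneracy slope in log space and quote the 68% interval of Σ8 = σ8(Ωm/0.27)^α. This is an illustrative procedure, not necessarily the exact one used for the numbers quoted here.

```python
import numpy as np

def sigma8_summary(omega_m, sigma_8):
    """Return alpha, the Sigma_8 mode proxy (median), and its 68% half-widths."""
    slope, _ = np.polyfit(np.log(omega_m / 0.27), np.log(sigma_8), 1)
    alpha = -slope                       # degeneracy: ln(sigma_8) ~ const - alpha ln(Omega_m/0.27)
    Sigma8 = sigma_8 * (omega_m / 0.27) ** alpha
    lo, med, hi = np.percentile(Sigma8, [15.85, 50.0, 84.15])
    return alpha, med, med - lo, hi - med
```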
It is clear that despite having excluded the high-bias bins, the residual Camelus systematics cause a significant bias in the parameter estimations. The peak contours are shifted along the degeneracy line so that the biases on Ωm and σ8 are large, while that of Σ8 is much reduced. Excluding the high-bias bins also weakens the constraining power, thereby broadening the contours. Whereas Zorrilla Matilla et al. (2016) have found a good agreement between the Camelus algorithm and N-body runs on small fields, this test suggests that the systematics of Camelus need to be carefully studied in order to achieve accurate results for large-field analyses.
There are many possible sources of bias for peak predictions with Camelus. Zorrilla Matilla et al. (2016) have already identified some concerning the fast halo modelling. These include the lack of halo clustering, inaccurate modelling of the halo concentration, and the use of NFW profiles. Kacprzak et al. (2016) have also argued that ignoring source clustering results in a boost factor to peak counts. See also the discussion in Sect. 3.4 above.
As we are still interested in the constraining power of peaks with large-field statistics, we explore this further by assuming that the Camelus model can be improved and/or calibrated to correct the bias. One way to do this would be to first include effects like halo clustering in the model, and then to tune the model parameters (e.g. of the mass function, NFW profile, or mass-concentration relation) if necessary so that the peak abundance data vector produced by Camelus under the MICE cosmology matches the observation as closely as possible. A simpler variation, although less well physically motivated, would be to tune the model parameters without otherwise modifying the algorithm. Alternatively, we can calibrate directly at the level of the data vector, matching the peak count histograms of Camelus to MICE across all wavelet scales. We choose the latter approach here and leave a study of the former to a future publication.
In effect, calibrating in this way is equivalent to pretending that the Camelus prediction under the MICE cosmology already matches the MICE observation. We note that in a more sophisticated treatment, and indeed in calibrating Camelus to use on real data, we would need to use many simulations with different cosmologies and interpolate by assuming, for example, that the calibration varies smoothly in parameter space.
We show the contours after calibration for the three 2D combinations of (Ωm, σ8, w_0^de) in Fig. 7. The three panels represent results after 13 iterations of ABC, and the star again indicates the MICE input cosmology. In the left plot are the credible contours in the σ8–Ωm plane (note the different axis ranges compared to Fig. 6). We now find Σ8 = 0.76^{+0.02}_{−0.03} with α = 0.65. The 1σ width is therefore reduced to ΔΣ8 = 0.05, which agrees with the expectation that WL is better able to distinguish σ8 for higher Ωm values. The posterior distribution of Σ8 is shown as the blue curve in Fig. 8. While the value of Σ8 is not very meaningful in an absolute sense, it serves as a useful basis for comparing the tightness of constraints attainable from different methods on the same data. We use ΔΣ8 to compare the amount of cosmological information contained in peaks to Gaussian probes of the WL signal in the following section.
The middle and right plots of Fig. 7 show the parameter planes involving w_0^de. The contours reveal that we cannot place constraints on w_0^de even with large-field surveys. This is consistent with the understanding that tomography is necessary to probe the late-time evolution of the Universe, for which w_0^de plays an important role. Nevertheless, the degeneracy between w_0^de and the other parameters is clear.
4.2. Comparison with ξ±
We compute ξ± as defined by Eq. (14) on the same MICE sub-fields that were used for the peaks analysis with the publicly available Athena3 2D tree code. We take the data vector in this case to be the concatenation of ξ+ and ξ− computed in 10 log-spaced angular bins each. The covariance matrix is computed using the 186 extracted patches from MICE, which we take to be independent realizations of 25 deg2 lensing fields. That is, our estimator essentially measures the sub-sample covariances and then re-scales the result for the full survey (Norberg et al. 2009; Friedrich et al. 2016).
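The covariance re-scaling described above can be sketched as follows, assuming the full-survey covariance scales inversely with the number of independent patches; `xi_patches` is an (N_patch, 20) placeholder array of per-patch ξ± data vectors.

```python
import numpy as np

def survey_covariance(xi_patches):
    """Sub-sample covariance of the patches, rescaled to the full survey area."""
    n_patch = xi_patches.shape[0]
    cov_patch = np.cov(xi_patches, rowvar=False)   # covariance of a single 25 deg^2 patch
    return cov_patch / n_patch                     # rescaled to the ~4650 deg^2 survey
```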
Fig. 8. Probability density functions (PDFs) of the derived parameter Σ8 from the peak analysis and the two-point correlation functions ξ± (2PCF). For peaks, the mode is Σ8 = 0.76 with ΔΣ8 = 0.05, while for ξ±, these are 0.76 and 0.03, respectively. The comparable values of ΔΣ8 indicate the similar constraining power of the two WL statistics in a large-field survey.
Estimating the covariance matrix in this way ignores correlations on scales comparable to the size of the patches, as well as smaller-scale separations spanning the boundaries between patches. With the large number of patches and a maximum angular scale of 3 deg, we do not expect these caveats to significantly affect the results. It is likely, however, that we underestimate the covariance to some degree because the patches are not perfectly independent.
We show the correlation coefficients associated with ξ± in Fig. 9. Bins 1–10 correspond to ξ+, and bins 11–20 correspond to ξ−. The angular bins are evenly spaced logarithmically between (θmin,θmax) = (1,180) arcmin. There is strong correlation among the ξ+ bins and little to moderate correlation among the ξ− bins and their cross correlations with ξ+.
Fig. 9. Correlation coefficients for ξ±. Bins 1–10 correspond to ξ+, and 11–20 correspond to ξ−. The covariance matrix of the full survey is computed as a re-scaling of the sub-sample covariance of the 186 MICE patches.
As we do not have an analogous stochastic forward model to predict ξ± as we do for peak counts, we cannot use ABC for parameter inference. Instead, we use the population Monte Carlo software package CosmoPMC4 (Wraith et al. 2009; Kilbinger et al. 2010). It is a Bayesian algorithm that uses adaptive importance sampling to improve its estimate of the posterior distribution iteratively. For the theoretical predictions, the non-linear matter power spectrum is computed with the halofit fitting formula of Smith et al. (2003). We use KDE in the same way as for the ABC result to analyse the resulting posterior particle system.
Fig. 10. Credible parameter contours from ξ± (calibrated) with CosmoPMC for the three combinations of (Ωm, σ8, w_0^de).
The credible parameter contours for the calibrated 2PCF are shown in Fig. 10, analogous to Fig. 7 for peaks. In the σ8–Ωm plane, the 1 and 2σ contours are wider but less elongated than the peaks result. The degeneracy direction is also slightly offset from that of peaks. The middle and right panels show that despite the increased statistical power of the large survey area, as with peaks, w_0^de cannot be constrained without tomography.
The Σ8 posterior distribution for 2PCF is shown as the red curve in Fig. 8, with Σ8 = 0.76^{+0.02}_{−0.01} and α = 0.70. At first glance, it appears that 2PCF yields a tighter constraint on Σ8. However, ΔΣ8 here does not reflect the real contour width from the two left panels of Figs. 7 and 10. In fact, the contour width from peaks is clearly smaller than that from 2PCF. The discrepancy is due to the definition of Σ8. Since it defines a power-law curve, it is a poor fit to the contour of Fig. 7, which is elongated and barely bent. If the synergy between WL and other probes is to be optimized to lift degeneracies, Σ8 could therefore be misleading. We also point out that Zorrilla Matilla et al. (2016) have found that the peak count covariance of Camelus is underestimated. This means that the contours in the left panel of Fig. 7 and the distribution of Σ8 in Fig. 8 should both be wider.
In the literature, similar comparisons have been reported by Dietrich & Hartlap (2010) using N-body simulations and by Liu et al. (2015a) using CFHTLenS data. In both studies, the authors found that peak-only constraints are tighter than 2PCF-only constraints. The reason that our study does not show the same tendency is due to the conservative cut of high-bias bins (see Fig. 4), which also removes cosmological information. A similar problem exists for the choice of scale cutting for 2PCF. The real capacity to extract cosmological information depends strongly on the accuracy of modelling, especially at non-linear scales. Our study indicates that the tilt between the respective degeneracy lines of the two observables can be significant. Peaks and 2PCF are therefore complementary, as their combination facilitates the breaking of parameter degeneracies.
5. Summary and conclusions
Peak count statistics in weak-lensing maps provide an important probe of the large-scale structure of the Universe. Peaks access the non-Gaussian information in the weak-lensing signal, which is not captured by two-point statistics and the related power spectrum. The upcoming Euclid mission will survey nearly one-third of the extragalactic sky and measure galaxy shapes with unprecedented precision, making it ideal for weak-lensing analyses.
To prepare for such next-generation data, we have studied the ability of peak counts to constrain the parameter set (Ωm, σ8, w_0^de) in a large-area simulation with Euclid-like settings. We used the 5000 deg2 MICECATv2.0 lensing catalogue as mock observations, which we divided into 186 square 25 deg2 patches under gnomonic projection. Based on our cutting of the MICE octant, we extracted 4650 deg2 of effective area for the analysis. A more sophisticated scheme could be explored to make maximum use of the data, perhaps by computing quantities directly on the sphere. The area was already large enough, however, to see the effect of the increased statistical power on parameter constraints that we can expect from future surveys.
We used the stochastic forward-model code package Camelus to predict peak abundances as a function of cosmology. The model is semi-analytic, requiring only a halo profile and a halo mass function as input to generate a 25 deg2 lensing catalogue in a few seconds. We implemented a wavelet-based multiscale mass-map filtering scheme in order to take advantage of cosmological information encoded at different scales. We showed that the Camelus model provides good agreement with the MICE peak histograms across five smoothing scales for S/N values lower than ~5. On the other hand, for higher S/N values, we found an increasing systematic deviation from MICE with increasing S/N.
Applying a conservative cut to exclude the highest-bias S/N bins, we used approximate Bayesian computation to constrain the parameters of the MICE mock survey with peaks. In the σ8–Ωm plane, we found tight 1 and 2σ contours oriented along the typical degeneracy direction seen in weak-lensing analyses. We measured the derived parameter Σ8 = σ8(Ωm/0.27)^α to be 0.77^{+0.06}_{−0.05} with a best-fit power-law slope α = 0.75 for a flat ΛCDM model. The uncertainty ΔΣ8 is representative of the width of the 1σ contour.
Although we omitted the high-bias S/N bins from the peaks data vector, the residual systematics in Camelus resulted in a large bias in the contours from the MICE input cosmology. The contours were shifted towards higher σ8 and lower Ωm along the degeneracy direction. This indicates that the systematics of Camelus need to be carefully studied before the method can be applied to real large-field data. We expect contributing systematics to include the lack of halo clustering, inaccurate modelling of the halo concentration, and the limitations of NFW profiles. It has recently been shown that these are a minor issue for relatively small fields (Zorrilla Matilla et al. 2016), but the assumptions of Camelus clearly break down for large-area surveys.
To compare peak count results to those from Gaussian probes of WL, we computed the shear two-point correlation functions (2PCF) ξ± on the MICE field. Lacking a similar fast stochastic prediction algorithm for 2PCF as we have for peaks, we used the population Monte Carlo software CosmoPMC instead of ABC for parameter inference. For the comparison, we corrected for the Camelus bias by calibrating the peaks data vector. We also corrected for a smaller bias found in the 2PCF result before comparing it to peaks.
The comparison revealed comparable constraints in the σ8–Ωm plane for the two WL observables, as measured by the width of their 1σ contours. We found ΔΣ8 = 0.05 for peaks and ΔΣ8 = 0.03 for 2PCF, although the actual width of the contours was larger for 2PCF than for peaks. The reason for the apparent discrepancy stems from the definition of Σ8, as neither set of contours was particularly well fit by a power law. Nevertheless, the constraints of the two observables followed different degeneracy directions, indicating the benefit of a combined analysis with the two probes. Neither peak counts nor 2PCF were able to constrain w_0^de without tomography.
We leave a tomographic study of peaks with Camelus to future work. We also did not consider here certain realistic observational effects, such as intrinsic alignment and masks. A primary benefit of the Camelus approach lies in its ability to probe the true underlying PDF of observables. Its flexibility as a forward model will make it straightforward to include such effects after other systematics have been addressed.
Acknowledgments
This work is supported in part by Enhanced Eurotalents, a Marie Skłodowska-Curie Actions Programme co-funded by the European Commission and Commissariat à l’énergie atomique et aux énergies alternatives (CEA). The authors acknowledge the Euclid Collaboration, the European Space Agency, and the support of the Centre National d’Etudes Spatiales (CNES). This work is also funded by the DEDALE project, contract No. 665044, within the H2020 Framework Program of the European Commission. The MICE simulations have been developed at the MareNostrum supercomputer (BSC-CNS) thanks to grants AECT-2006-2-0011 through AECT-2015-1-0013. Data products have been stored at the Port d’Informació Científica (PIC), and distributed through the CosmoHub webportal (cosmohub.pic.es). Funding for this project was partially provided by the Spanish Ministerio de Ciencia e Innovacion (MICINN), projects 200850I176, AYA2009-13936, AYA2012-39620, AYA2013-44327, ESP2013-48274, ESP2014-58384, Consolider-Ingenio CSD2007-00060, research project 2009-SGR-1398 from Generalitat de Catalunya, and the Ramon y Cajal MICINN program. Finally, the authors would like to thank Peter Schneider, Sandrine Pires, and Samuel Farrens for useful comments and discussions.
All Figures
Fig. 1. Histogram of the true redshift distribution of the MICE mock catalogue. The solid (blue) line is the best-fit n(z) in the form of Eq. (15) with parameters (p, q, z0) = (0.88, 1.40, 0.78). The normalization is computed numerically.
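Eq. (15) itself is not reproduced in this caption. Purely for illustration, assuming a Smail-type parametrization n(z) ∝ z^p exp[−(z/z0)^q], which may differ from the paper's actual Eq. (15), the numerical normalization mentioned above could be computed as follows; the redshift range is also an assumption.

```python
import numpy as np
from scipy.integrate import quad

# Assumed Smail-type form; the paper's actual Eq. (15) may differ.
p, q, z0 = 0.88, 1.40, 0.78

def n_unnormalized(z):
    return z**p * np.exp(-(z / z0) ** q)

# "The normalization is computed numerically": integrate over an assumed
# redshift range (0 to 3 here) and divide by the result.
norm, _ = quad(n_unnormalized, 0.0, 3.0)
n_z = lambda z: n_unnormalized(z) / norm
```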
Fig. 2. Example of our patch extraction and projection scheme for MICE. The upper panel shows the original extent of the patch, which lies within 34.9° ≤ RA ≤ 45.0°, 55.3° ≤ Dec ≤ 60.4°. This area is projected into the region enclosed by the dashed line in the lower panel of the tangent space. The bounded inner square represents the 25 deg2 area for which we generate the multiscale maps and compute peak histograms.
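The caption does not specify which projection is used. A common choice for mapping a small spherical patch onto flat tangent-plane coordinates is the gnomonic projection about the patch centre, sketched below with illustrative variable names; it is not necessarily the paper's exact scheme.

```python
import numpy as np

def gnomonic(ra_deg, dec_deg, ra0_deg, dec0_deg):
    """Project (RA, Dec) onto the plane tangent to the sphere at
    (ra0, dec0).  Returns flat-sky coordinates in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    ra0, dec0 = np.radians(ra0_deg), np.radians(dec0_deg)
    cos_c = np.sin(dec0) * np.sin(dec) + np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0)
    x = np.cos(dec) * np.sin(ra - ra0) / cos_c
    y = (np.cos(dec0) * np.sin(dec)
         - np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    return np.degrees(x), np.degrees(y)

# Project one patch corner about the centre of the quoted RA/Dec ranges:
x, y = gnomonic(34.9, 55.3, 0.5 * (34.9 + 45.0), 0.5 * (55.3 + 60.4))
```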
Fig. 3. Starlet decomposition of an example noiseless convergence map κ(θ). The images are 150 × 150 pixels with a pixel size of 0.5 arcmin. The original map is shown along with the wavelet coefficient maps {wj} up to jmax = 4. The final smoothed map c5, i.e., the low-pass filtered version of κ, is not shown. The transform clearly picks out features of κ at successively larger scales as j increases.
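The starlet (isotropic undecimated wavelet) transform behind this decomposition is typically built from the B3-spline kernel and the à trous algorithm. The following is a minimal sketch of that standard construction, not the paper's actual implementation; scale indexing conventions may differ.

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_transform(image, n_scales=4):
    """Isotropic undecimated (starlet) transform of a 2D map using the
    B3-spline kernel and the a trous algorithm.  Returns the wavelet
    coefficient maps [w_1, ..., w_n] and the final smoothed map c_n."""
    b3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = np.asarray(image, dtype=float)
    wavelets = []
    for j in range(n_scales):
        step = 2 ** j                    # dilate the kernel by inserting holes
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = b3
        smoothed = convolve1d(c, kernel, axis=0, mode='reflect')
        smoothed = convolve1d(smoothed, kernel, axis=1, mode='reflect')
        wavelets.append(c - smoothed)    # detail coefficients at this scale
        c = smoothed                     # pass the smoothed map to the next scale
    return wavelets, c

# Example on a random 150 x 150 map (0.5 arcmin pixels in the paper):
kappa = np.random.default_rng(1).normal(size=(150, 150))
w, c_final = starlet_transform(kappa, n_scales=4)
```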
Fig. 4. Comparison of peak abundance histograms between Camelus (red circles) and MICE (blue line) for 4650 deg2. Error bars for Camelus represent the standard deviation of 500 realizations for the associated S/N bin. There is good agreement between the mock observations and the Camelus predictions across all wavelet filtering scales, especially for ν lower than ~5. The shaded regions represent bins where the relative difference between MICE and Camelus is larger than 50%. We apply a conservative cut and omit these higher-bias bins in our analysis.
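Counting peaks in an S/N map amounts to locating local maxima above a threshold and histogramming their heights. The sketch below illustrates that generic procedure; the array names and bin edges are hypothetical and do not reproduce the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peak_histogram(snr_map, bin_edges, nu_min=1.0):
    """Identify local maxima of the S/N map (pixels >= their 8 neighbours),
    keep those above nu_min, and histogram their S/N values."""
    local_max = snr_map == maximum_filter(snr_map, size=3, mode='nearest')
    peaks = snr_map[local_max & (snr_map >= nu_min)]
    counts, _ = np.histogram(peaks, bins=bin_edges)
    return counts

# Hypothetical S/N binning from nu = 1 to 6:
edges = np.linspace(1.0, 6.0, 11)
counts = peak_histogram(np.random.default_rng(2).normal(size=(300, 300)), edges)
```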
Fig. 5. Correlation coefficients for our multiscale peak abundance data vector, computed from 500 Camelus runs with …
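Correlation coefficients of this kind follow directly from the sample covariance of the data vector over independent realizations. A minimal sketch, assuming the 500 Camelus runs are stacked into an array of shape (n_runs, n_bins); the function name is illustrative.

```python
import numpy as np

def correlation_matrix(data_vectors):
    """data_vectors: array of shape (n_runs, n_bins), one peak-abundance
    data vector per stochastic realization."""
    cov = np.cov(data_vectors, rowvar=False)       # (n_bins, n_bins) covariance
    sigma = np.sqrt(np.diag(cov))
    return cov / np.outer(sigma, sigma)            # r_ij = C_ij / (sigma_i * sigma_j)
```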
Fig. 6. Parameter constraints in the σ8–Ωm plane from peaks as modelled by Camelus. The 1σ and 2σ contours represent the 68.3% and 95.4% most probable regions, respectively, after marginalizing over w0de.
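One common way to turn accepted samples (for instance from ABC) into 68.3% and 95.4% credible contours is to estimate the 2D density with a kernel density estimate and find the density thresholds enclosing those probability masses. The sketch below shows that generic approach, which may differ from the paper's own contouring choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

def credible_levels(samples, probs=(0.683, 0.954)):
    """samples: array of shape (2, n_samples), e.g. (Omega_m, sigma_8).
    Returns KDE density thresholds whose super-level sets enclose roughly
    the requested probability masses (highest-density regions)."""
    kde = gaussian_kde(samples)
    density_at_samples = kde(samples)
    # The level enclosing mass p is approximately the (1 - p) quantile of
    # the density values evaluated at the samples themselves.
    return [np.percentile(density_at_samples, 100.0 * (1.0 - p)) for p in probs]
```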
Fig. 7. Credible parameter contours from the calibrated peak counts data vector for the three 2D combinations of Ωm, σ8, and w0de.
Fig. 8. Probability density functions (PDFs) of the derived parameter Σ8 from the peak analysis and from the two-point correlation functions ξ± (2PCF). For peaks, the mode is Σ8 = 0.76 with ΔΣ8 = 0.05, while for ξ± these are 0.76 and 0.03, respectively. The comparable values of ΔΣ8 indicate the similar constraining power of the two WL statistics in a large-field survey.
Fig. 9. Correlation coefficients for ξ±. Bins 1–10 correspond to ξ+, and bins 11–20 correspond to ξ−. The covariance matrix of the full survey is computed as a re-scaling of the sub-sample covariance of the 186 MICE patches.
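The precise re-scaling is not given in the caption. Assuming the usual inverse-area scaling of a sample covariance, and taking the 25 deg2 patch area and 4650 deg2 survey area quoted in the other figures, a sketch would be:

```python
import numpy as np

def survey_covariance(patch_vectors, area_patch=25.0, area_survey=4650.0):
    """patch_vectors: (n_patches, n_bins) array holding the xi_pm data
    vector measured on each sub-patch.  The patch-to-patch sample
    covariance is scaled by the area ratio to approximate the covariance
    of the full survey (assumed inverse-area scaling)."""
    cov_patch = np.cov(patch_vectors, rowvar=False)
    return cov_patch * (area_patch / area_survey)
```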
Fig. 10. Credible parameter contours from ξ± (calibrated) with CosmoPMC for the three combinations of Ωm, σ8, and w0de.