A&A, Volume 633, January 2020
Article Number A71 (15 pages)
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/201935988
Published online: 13 January 2020

© ESO 2020

1. Introduction

The concordance Λ cold dark matter (ΛCDM) cosmological model, which is dominated by the dark energy that drives the accelerated cosmic expansion and by the dark matter that is responsible for clustering, assumes that the structures we observe today formed through gravitational instability and the successive growth of the primordial fluctuations generated during the inflationary epoch. Although the density field may be approximated as Gaussian on large scales, the non-Gaussianity on small scales carries additional information that can break some degeneracies among model parameters.

Weak lensing (WL) has proven to be an efficient tool for accessing this information. In any metric theory, light propagates along the geodesics of the metric, which is determined by the matter distribution along the line of sight. Lensing thus probes both the background expansion and the growth of structures, and is therefore able to place strong constraints on the dark energy equation of state and to distinguish between general relativity and modified gravity. In the WL regime, lensing distorts the image of the emitting source, but this cosmic shear effect is so small that it can only be detected statistically through the analysis of a large sample of galaxies. Up to now, analyses have mainly relied on second-order statistics, namely the two-point correlation function and its Fourier counterpart, the power spectrum, with remarkable results (see, e.g., Munshi et al. 2008; Kilbinger 2015; Bartelmann & Maturi 2017, and references therein).

The unprecedented amount of high-quality data that Stage IV lensing surveys are expected to deliver will make it possible to deepen the analysis of the density field by probing its non-Gaussianity. To this end, statistics of order higher than second are needed. Among the different possibilities, Minkowski functionals (MFs) have already proven their reliability in the context of cosmic microwave background (CMB) studies (Komatsu et al. 2003; Eriksen et al. 2004; Hikage et al. 2008; Matsubara 2010) and WL convergence maps (Matsubara & Jain 2001; Sato et al. 2001; Taruya et al. 2002; Matsubara 2010; Kratochvil et al. 2011; Pratten & Munshi 2012; Petri et al. 2013; Shirasaki & Yoshida 2014). MFs are topological descriptors of the convergence field that depend on the complete set of higher order terms and multi-point correlation functions. In particular, non-Gaussianity manifests itself in deviations from the predictions for Gaussian random fields. Even when the perturbative expansion is cut to the lowest order, these deviations are related to the convergence bispectrum (the Fourier counterpart of the three-point correlation function) through the generalized skewness parameters. The need to extend to large scales, however, calls for a detailed description of the nonlinearities that affect the matter power spectrum and bispectrum. Moreover, in any practical application, the infinite series that determines MF deviations from the Gaussian case is truncated at the lowest order, thus introducing a mismatch with the observed MFs even in the idealized case of noiseless maps. Needless to say, noise and the imperfect reconstruction of the convergence map from shear data make the theoretical predictions of MFs still more daunting.

Motivated by these considerations, Vicinanza et al. (2019) first presented a calibration procedure to correct theoretical predictions for noiseless maps so that they match the MFs measured in reconstructed noisy convergence maps. We here propose a modification of their approach that reduces the number of nuisance parameters, starting from simplifying yet reasonable approximations. This may improve the constraints on the cosmological parameters, which we quantify by repeating their Fisher matrix forecast analysis. We also investigate which survey strategy (e.g., wide and shallow or deep and narrow) is better suited to optimize the scientific return of MFs. Next-generation surveys from the ground, such as the Large Synoptic Survey Telescope (LSST; LSST Science Collaboration 2009), or onboard satellites, such as the ESA Euclid (Laureijs et al. 2011) and the NASA Wide Field Infrared Survey Telescope (WFIRST; Green et al. 2012) missions, will nevertheless rely on second-order statistics. This makes it mandatory that the optimization is carried out considering the combination of cosmic shear tomography and MFs rather than single probes alone. We therefore consider several realistic combinations of area coverage and survey depth (keeping the survey observation time fixed) to generate simulated lognormal convergence fields. These are taken as input for estimating a reliable MF dataset with the corresponding covariance matrix.

The paper is organized as follows: in Sect. 2 we describe the set of simulated convergence maps that we obtained, starting from the catalog generation for different survey depths to the map-reconstruction method we used. In Sect. 3 we introduce MFs and describe the measurements we performed on the simulated convergence maps, showing the results for some cases of interest. In Sect. 4 we describe MFs from a theoretical point of view, show their connection to cosmology, and present our new calibration procedure. In Sect. 5 we discuss the results we obtained from our Fisher matrix analysis in terms of the figure of merit (FoM) improvement and survey optimization. In Sect. 6 we draw our conclusions.

2. Simulation of convergence maps

In Vicinanza et al. (2019), some of us developed a calibration procedure to match the theoretical predictions for MFs measured in noiseless convergence fields with those estimated in maps reconstructed from noisy shear data. This was validated using MICEv2.0 simulations, which cover a limited redshift range and model galaxies up to a limiting magnitude (maglim = 24.5) that is shallower than those we are interested in here. We therefore need to produce a different set of simulated convergence maps to cover the extended redshift range reached at deeper limiting magnitudes. Moreover, we wish to mimic as closely as possible what is expected for the Euclid satellite mission, which means that we need to input the same source redshift distribution. To this end, we use FLASK with the settings we describe in the following two subsections.

2.1. FLASK simulations

The full-sky lognormal astro-fields simulation kit (FLASK; Xavier et al. 2016) is a public code designed to create two- or three-dimensional random realizations of different astrophysical fields, including WL convergence and shear, reproducing the expected cross-correlations between the input fields. These realizations follow a multivariate lognormal distribution, which, compared to a multivariate Gaussian distribution, results in a better approximation to the density and convergence fields and avoids, for example, non-physical negative density values. In this study we are interested in capturing the non-Gaussian features contained in the convergence field; the lognormal distribution therefore represents the simplest approximation that can convey this information.

Its computational speed and flexibility make FLASK strongly preferable to full ray-tracing or full N-body simulations. This is a key aspect because we need to simulate a large field area (to be split into a large number of patches) and vary the source redshift distribution according to the limiting magnitude. We briefly outline the FLASK inner workings below and refer to Xavier et al. (2016) for further details.

The code takes as input angular auto- and cross-power spectra calculated at a number of redshift slices provided by the user. These are then transformed into real space to compute the associated Gaussian correlation functions, which are transformed back to harmonic space. Cholesky decomposition is then used to generate Gaussian multipoles, which are the input for the creation of a HEALPix map whose pixels are exponentiated to obtain the associated lognormal fields. The user can then sample these fields according to the desired angular and radial selection functions, thus mimicking the specifics of a given survey. A catalog is finally generated by assigning to each pixel a random angular position sampled within the pixel boundaries, and a redshift within its redshift slice.
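The exponentiation step above can be illustrated with a toy sketch: take a zero-mean Gaussian pixel field, exponentiate it, and shift it so the result has zero mean and a hard lower bound. The function name and the shift convention below are our own illustrative choices, not the FLASK API.

```python
import numpy as np

def lognormal_field(gauss, shift=1.0):
    """Map a zero-mean Gaussian field onto a zero-mean lognormal one.

    Exponentiating the pixel values (as FLASK does after generating the
    Gaussian multipoles) and rescaling yields a field bounded below by
    -shift, which avoids unphysical densities below -1 when shift = 1.
    """
    var = gauss.var()
    # exp(g - var/2) has unit mean for a Gaussian g, so the result is ~zero mean
    return shift * (np.exp(gauss - var / 2.0) - 1.0)

rng = np.random.default_rng(42)
g = rng.normal(0.0, 0.3, size=(256, 256))
delta = lognormal_field(g)
# unlike a Gaussian realization, the field never drops below -shift
assert delta.min() > -1.0
```

The skewed one-point distribution this produces is what makes the lognormal choice a better proxy for the density and convergence fields than a Gaussian.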

A caveat is in order when correlated density and convergence fields are simulated. When the density field is modeled as lognormal, FLASK computes the convergence through an approximate line-of-sight integration, obtained as a weighted Riemann sum of the simulated density in redshift bins. As a consequence of the small number of bins in the sum at low redshift, the resulting convergence field is not exactly lognormal. However, the corresponding power spectra reproduce the theoretical spectra within 3% for z >  0.5, while the precision quickly degrades at lower z. We therefore imposed a conservative cut z >  0.55 to select the sources that we included in our analysis.
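The weighted Riemann sum can be sketched as follows; `kappa_from_density`, its inputs, and the uniform slice widths are hypothetical placeholders for the actual lensing-kernel values used by FLASK.

```python
import numpy as np

def kappa_from_density(delta_slices, weights, dchi):
    """Approximate the line-of-sight lensing integral as a Riemann sum
    over redshift slices: kappa = sum_i w_i * delta_i * dchi_i per pixel.

    delta_slices has shape (n_slices, ny, nx); weights and dchi hold
    (hypothetical) lensing-kernel values and comoving slice widths.
    """
    return np.tensordot(weights * dchi, delta_slices, axes=(0, 0))

# two unit-density slices with unit weights and widths give kappa = 2 everywhere
kappa = kappa_from_density(np.ones((2, 4, 4)), np.ones(2), np.ones(2))
```

With few slices contributing at low redshift, such a sum of a few lognormal variates is visibly non-lognormal, which is the origin of the low-z limitation noted above.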

We used CLASS (Blas et al. 2011; Di Dio et al. 2013) to compute the input power spectra for 25 top-hat equi-spaced redshift bins over the range 0.0 ≤ z ≤ 2.5 for a flat ΛCDM model with fiducial cosmological parameters

where ΩM (Ωb) is the present-day value of the total matter (baryon) density parameter, h is the Hubble constant (in units of 100 km s−1 Mpc−1), ns is the slope of the primordial power spectrum, and σ8 is the variance of the linear power spectrum smoothed over a top-hat window of size R = 8 h−1 Mpc. Nonlinearities at large k are corrected for using the Halofit recipe (Smith et al. 2003; Takahashi et al. 2012). We also modified some keyword settings in FLASK with respect to the defaults, setting LRANGE = 1 − 6000, SHEAR_LMAX = 2000, and NSIDE = 2048. We finally used a customized angular selection function to split the full-sky simulation into a more manageable set of subfields.

2.2. Survey depth and number density

FLASK allows the user to input their own radial selection function so that the total source number density and their redshift distribution match those of a given survey. Because we are interested in a Euclid-like survey, we set

n(z) ∝ z² exp[−(z/z0)^(3/2)] ,  with z0 = zm/1.412 ,  (1)

with ng the number of galaxies per arcmin2, and zm the median redshift. ng (in gal arcmin−2) and zm are functions of the limiting magnitude maglim, with (ng, zm) = (30, 0.9) for maglim = 24.5 for the wide area Euclid survey. We therefore need to model their scaling with maglim, which we qualitatively did as follows. First, we note that Hoekstra et al. (2017) investigated the effect of undetected galaxies on estimating the shape measurement bias. To this end, they modeled the dependence of the number of galaxies on maglim as a power law, which approximates well the number counts from the GEMS survey (Rix et al. 2004) in the F606W band and from the Hubble Ultra-Deep Field (HUDF; Coe et al. 2006) in the F775W band. We integrated this power law up to the desired limiting magnitude and obtained the slope of the ng–maglim relation, while its amplitude was set so that ng(maglim = 24.5) = 30, as for Euclid. This rescaling is required because neither the F606W nor the F775W bands match the wide RIZ filter used by the Euclid imaging instrument. A similar rescaling was also used for the zm–maglim relation, whose behavior we obtained by interpolating the values in Table 9 of Coe et al. (2006) based on HUDF data. The values of (ng,  zm) thus obtained for the five different limiting magnitudes we considered are given in Table 1, while the corresponding redshift distributions n(z) are shown in Fig. 1. We recall, however, that the calculations we made are based on the i-band magnitude, while Euclid will provide imaging data in the broad RIZ band, which has not been used before for observations. Therefore, we do not expect our approximation to exactly reproduce the Euclid redshift distribution, but it will nevertheless allow us to illustrate the different results obtained by changing the survey area and depth.
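The ng–maglim rescaling amounts to a few lines; in the sketch below the power-law slope is an assumed placeholder, and only the anchoring to ng = 30 gal arcmin−2 at maglim = 24.5 comes from the text.

```python
ALPHA = 0.36        # assumed slope of the number counts, d log N / d mag (placeholder)
NG_REF, MAG_REF = 30.0, 24.5   # Euclid wide-survey anchor from the text

def n_gal(maglim):
    """Cumulative source number density (gal/arcmin^2) up to maglim.

    Integrating dN/dm proportional to 10**(ALPHA * m) up to maglim keeps
    the 10**(ALPHA * maglim) scaling, so only the amplitude needs to be
    anchored to the Euclid value of 30 gal/arcmin^2 at maglim = 24.5.
    """
    return NG_REF * 10.0 ** (ALPHA * (maglim - MAG_REF))

# deeper surveys see more galaxies
assert n_gal(25.5) > n_gal(24.5)
```

A companion interpolation of Table 9 of Coe et al. (2006), not shown, would provide zm(maglim) in the same spirit.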

Table 1.

Limiting magnitude, total source number density (in gal arcmin−2), and median redshift.

Fig. 1.

Redshift selection functions used as input for FLASK to simulate different survey depths corresponding to different limiting magnitudes.


2.3. Map reconstruction

Running FLASK with the input parameters set as detailed above provided catalogs with right ascension, declination, redshift z, convergence κ, and shear components (γ1,  γ2) for all the objects. A gnomonic projection was then used to project the catalogs onto the plane of the sky under a flat-sky approximation, which holds for the 5 × 5 deg2 subfields we used. We also left a gap of ∼1° between two consecutive patches so that we were able to consider them as independent realizations. The objects in each catalog were then split into redshift bins with equal width Δz = 0.05 and centered in z from 0.5 to 1.8 in steps of 0.3, with the number density set according to the chosen limiting magnitude. We thus obtained 1108 independent convergence and shear maps for each redshift bin and limiting magnitude.

After smoothing the maps to 1′ resolution, we added Gaussian noise to each pixel and fixed the variance as (Hamana et al. 2004)

σ²pix = σ²e / (2 ng Apix) ,  (2)

with σe = 0.3 the intrinsic ellipticity dispersion, Apix the pixel area, and ng the number density of galaxies. Because ng increases with maglim, deeper maps will be less noisy, as expected. In order to mimic an actual data analysis, we reconstructed the convergence maps from the noisy shear data using different methods. After comparing with the simulated convergence maps, we finally opted for a variant of the popular KS method (Kaiser & Squires 1993), modified to account for systematics such as projection effects and masking (Pires et al. 2009; Jullo et al. 2013). Figure 2 shows as an example a convergence map at z = 0.9 for maglim = 24.5. Hereafter, whenever we mention convergence maps, we always refer to the set of reconstructed maps.
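The noise step can be sketched as follows, assuming the standard shape-noise variance σe²/(2 ng Apix); the function name and pixel-size convention are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_shape_noise(kappa_map, n_gal, pix_arcmin=1.0, sigma_e=0.3):
    """Add zero-mean Gaussian shape noise with variance
    sigma_e^2 / (2 n_gal A_pix), so deeper maps (larger n_gal) get less
    noise. n_gal is in gal/arcmin^2; pix_arcmin is the pixel side."""
    a_pix = pix_arcmin ** 2
    sigma = sigma_e / np.sqrt(2.0 * n_gal * a_pix)
    return kappa_map + rng.normal(0.0, sigma, size=kappa_map.shape)

# noise level for the Euclid-like case n_gal = 30 gal/arcmin^2, 1' pixels
noisy = add_shape_noise(np.zeros((128, 128)), n_gal=30.0)
```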

Fig. 2.

Top: original convergence map at z = 0.9 for a limiting magnitude maglim = 24.5, simulated with FLASK. Bottom: same map but reconstructed with the KS method.


3. Minkowski functionals: measurement

We considered a smooth two-dimensional random field κ(x,  y) with zero mean and variance σ0². We first defined the excursion set Qν as the region where the normalized field κ/σ0 is larger than a given threshold ν. We can then define the three MFs as

V0(ν) = (1/A) ∫_{Qν} da ,  (3)

V1(ν) = (1/(4A)) ∫_{∂Qν} dl ,  (4)

V2(ν) = (1/(2πA)) ∫_{∂Qν} 𝒦 dl ,  (5)

where A is the map area, ∂Qν the excursion set boundary, da and dl the surface and line elements along Qν and ∂Qν, and 𝒦 the boundary curvature. (V0,  V1, V2) are the area, the perimeter, and the genus characteristic (i.e., the number of connected regions above a given ν minus the number of connected regions below ν) of the excursion set Qν. MFs can be redefined in a more convenient way as

V0(ν) = (1/A) ∫_A Θ(κ − νσ0) da ,  (6)

V1(ν) = (1/(4A)) ∫_A δD(κ − νσ0) (κx² + κy²)^{1/2} da ,  (7)

V2(ν) = (1/(2πA)) ∫_A δD(κ − νσ0) (2κxκyκxy − κx²κyy − κy²κxx)/(κx² + κy²) da ,  (8)

where Θ and δD are the Heaviside step and Dirac delta functions, and we explicitly considered the case of the convergence field κ(x,  y) and expressed the threshold as a multiple of its variance σ0. In Eqs. (6)–(8), κi = ∂κ/∂xi and κij = ∂2κ/∂xi∂xj with (i,  j) = (x,  y), that is, MFs are computed in terms of the field and its derivatives. With these definitions, it is then straightforward to implement an algorithm to measure the MFs from a map. However, numerical issues need to be solved that come from the conversion of integrals into discrete sums, derivatives into finite differences, and the Dirac-δ into discrete ν binning. In order to validate our pipeline, we generated 500 random Gaussian maps that we fed to our measurement code. We then took the mean as the final estimate and the standard deviation as the uncertainty, obtaining the results in Fig. 3, where the solid black line is the theoretical prediction (see below). The measured (V0,  V1) deviate from the theoretical expectation at ν = 2, the threshold used for the rest of the analysis, by less than 1%, while the discrepancy is slightly larger for V2. This is related to the way the theoretical value is computed, because it relies on the values of (σ0,  σ1), defined below, which are themselves measured on the maps. We therefore do not ascribe this larger difference to a missing ingredient in the theoretical estimate, and consider our measurement pipeline reliable for all MFs.
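A minimal sketch of such a measurement pipeline (not our production code) looks as follows: integrals become pixel sums, derivatives become central differences via `np.gradient`, and the Dirac-δ becomes a top-hat bin of width dnu.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def minkowski(kappa, nu, dnu=0.1):
    """Estimate (V0, V1, V2) of a 2D field at threshold nu (in units of
    sigma_0), discretizing the integrals of the MF definitions."""
    kappa = kappa - kappa.mean()      # the formalism assumes a zero-mean field
    s0 = kappa.std()
    kx, ky = np.gradient(kappa)       # first derivatives (central differences)
    kxx = np.gradient(kx, axis=0)
    kyy = np.gradient(ky, axis=1)
    kxy = np.gradient(kx, axis=1)
    u = kappa / s0
    npix = u.size
    v0 = np.mean(u >= nu)                         # excursion-set area fraction
    on_edge = np.abs(u - nu) < dnu / 2.0          # top-hat stand-in for the delta
    grad = np.sqrt(kx ** 2 + ky ** 2)
    v1 = grad[on_edge].sum() / (4.0 * npix * s0 * dnu)
    curv = (2.0 * kx * ky * kxy - kx ** 2 * kyy - ky ** 2 * kxx) \
        / (kx ** 2 + ky ** 2 + 1e-30)
    v2 = curv[on_edge].sum() / (2.0 * np.pi * npix * s0 * dnu)
    return v0, v1, v2

# sanity check on a smoothed Gaussian random field: half the area lies above nu = 0
rng = np.random.default_rng(0)
field = gaussian_filter(rng.normal(size=(256, 256)), 3.0)
v0, v1, v2 = minkowski(field, 0.0)
```

The division by s0 * dnu converts the top-hat indicator into a properly normalized approximation of δD(κ − νσ0).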

Fig. 3.

Numerical (blue dots) vs expected (solid black line) MFs from 500 realizations of a Gaussian random field.


We then measured MFs in the 1108 simulated convergence maps by varying the redshift bin centers z, the limiting magnitude maglim, and the scale θs of the Gaussian filter we used to smooth the maps before the MF estimation. In particular, we considered

Figure 4 shows the three MFs as a function of the signal-to-noise ratio (S/N) ν for the illustrative case of a survey with maglim = 25.5 (other cases are qualitatively similar). For a fixed smoothing scale (left panels), the overall scaling with ν is the same; the redshift value only sets the MF amplitude. In particular, differences in V0 are typically quite small, no larger than ∼2%, while they increase to ∼8% (∼20%) for V1 (V2). A similar argument holds for the dependence on the smoothing angle θs at fixed redshift, with differences that can now be easily appreciated, as shown in the right panels. These results suggest that the relevant information is carried not by the behavior of the MFs with ν, but by their dependence on redshift and smoothing angle. We therefore set ν = 2 in the remaining analysis and refer to the next section for the motivation of this particular choice.

Fig. 4.

Left: MFs as measured from maps with maglim = 25.5 as a function of the threshold ν for different values of the redshift bin center z and fixed smoothing scale (θs = 6′). Right: same as in the left panel, but for different values of the smoothing scale θs and fixed redshift bin center (z = 1.2).


Our observed MF data vector is

(9)

with

where the values were computed as the mean over the 1108 convergence maps realized for each given maglim. The covariance matrix can then be estimated as

Covij = (1/(Nmaps − 1)) Σk [Di(k) − D̄i] [Dj(k) − D̄j] ,  (10)

where Nmaps = 1108 is the total number of convergence maps, Di(k) is the ith component of the data vector calculated on the kth map, and D̄i is the same component averaged over all maps. In Fig. 5 we show as an example the normalized covariance matrix obtained in the case maglim = 25.5. We note that for V0 the correlation increases with the smoothing scale, but seems quite insensitive to the redshift. On the other hand, the correlation for V1 and V2 is higher for small values of θs and z. We observe the same pattern for the cross correlation between V1 and V2 and, while V0 and V1 appear correlated for larger θs, V0 and V2 are anticorrelated. The strong correlations that we find among some elements of the data vector suggest that the dimension of D can be reduced without appreciable loss of information. We therefore investigated this possibility as well. We focused on the case ν = 2 to investigate the change in MFs as a function of the limiting magnitude. This is shown in Fig. 6, where we plot MFs as a function of z and θs for five different maglim. For a fixed smoothing angle, the difference among the MF amplitudes at different maglim tends to decrease with z, being no larger than ∼9%. The only remarkable exception is the case with maglim = 24.5, which gives a ∼35% difference in the V2 amplitude at large z. However, this is a consequence of the small number of galaxies in the high-z bins for a survey as shallow as the one with this limiting magnitude. As a consequence, the map reconstruction becomes noisier and less reliable, and this needs to be corrected for, as we demonstrate in the next section. The right panels show that the dependence on maglim becomes less and less important as the smoothing angle θs increases. This is expected because the larger θs, the more Gaussian the convergence field, so that the MFs depend on ν alone. As a consequence, we obtain the unfortunate result that the scales with the most information are those at small θs, which at the same time are the noisiest. The next section discusses how the effect of noise can be mitigated through a suitable calibration procedure.
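The covariance estimator of Eq. (10), together with the normalization used in Fig. 5, amounts to a few lines; a sketch with a random stand-in for the measured data vectors:

```python
import numpy as np

def covariance(data):
    """Unbiased sample covariance of Eq. (10): data has shape
    (n_maps, n_components), one MF data vector per map; the mean over
    maps is subtracted and the sum divided by N_maps - 1."""
    d = data - data.mean(axis=0)
    return d.T @ d / (data.shape[0] - 1)

def normalized(cov):
    """Correlation matrix as plotted in Fig. 5: cov_ij / sqrt(cov_ii cov_jj)."""
    s = np.sqrt(np.diag(cov))
    return cov / np.outer(s, s)

rng = np.random.default_rng(7)
data = rng.normal(size=(1108, 6))   # random stand-in for 1108 measured vectors
corr = normalized(covariance(data))
```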

Fig. 5.

Normalized covariance matrix for the MF data vector D defined in the text for the case with maglim = 25.5.


Fig. 6.

Left: MFs for ν = 2 and different values of the limiting magnitude maglim as a function of the redshift bin center z, with fixed smoothing scale (θs = 6′). Right: same as in the left panel, but as a function of the smoothing angle θs with fixed redshift bin center (z = 1.2).


4. Minkowski functionals: theoretical predictions

In order for an observable to be of any use in constraining cosmological parameters, it is mandatory to be able to compute its expected value theoretically. This is analytically possible for MFs only in the case of Gaussian random fields, while deviations from Gaussianity (such as those of the convergence field) can be dealt with in an approximate way through a perturbative series expansion. This method, however, does not take into account the systematic effects introduced by imperfect map reconstruction from noisy shear data. In Vicinanza et al. (2019), some of us have successfully accounted for this through a semianalytical approach calibrated on simulations. Below, we first summarize the main steps and results, and then present a simplified but still reliable way to reduce the number of nuisance parameters.

4.1. Minkowski functionals for noiseless convergence fields

For a Gaussian random field, MFs can be exactly computed as (Adler 1981; Tomita 1986)

Vk(ν) = (1/(2π)^{(k+1)/2}) [ω2/(ω2−k ωk)] (σ1/(√2 σ0))^k e^{−ν²/2} ℋk−1(ν) ,  (11)

with ωn = πn/2 [Γ(n/2 + 1)]−1, so that it is ω0 = 1, ω1 = 2, and ω2 = π. Here, we assumed that the field has null mean, variance σ0, and variance of its covariant derivative σ1, and ℋn(ν) are Hermite polynomials.
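The solid black curves of Fig. 3 follow directly from these closed forms. The sketch below hard-codes the three functionals under our reading of Eq. (11), with probabilists' Hermite polynomials ℋ0 = 1 and ℋ1 = ν, and the ℋ−1 term absorbed into an erfc for V0.

```python
import numpy as np
from scipy.special import erfc

def gaussian_mfs(nu, s0, s1):
    """Gaussian-field MF predictions at threshold nu, given the field
    variance s0 and the variance s1 of its first derivatives."""
    r = s1 / (np.sqrt(2.0) * s0)
    v0 = 0.5 * erfc(nu / np.sqrt(2.0))                      # H_{-1} term -> erfc
    v1 = (r / 8.0) * np.exp(-nu ** 2 / 2.0)                 # H_0 = 1
    v2 = r ** 2 / (2.0 * np.pi) ** 1.5 * nu * np.exp(-nu ** 2 / 2.0)  # H_1 = nu
    return v0, v1, v2
```

Note that V0 at ν = 0 is exactly 1/2: half the area of a zero-mean Gaussian field lies above the mean.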

When the field is only mildly non-Gaussian, a perturbative expansion can be used,

Vk(ν) = VkG(ν) + ΔVk(ν) .  (12)

The deviation from the Gaussian prediction can be expanded in terms of σ0 = ⟨κ2⟩^{1/2} as

ΔVk(ν) = ΔVk(2)(ν) σ0 + ΔVk(3)(ν) σ0² + … ,  (13)

To the lowest order in σ0, the coefficient of the correction term reads

(14)

where S(n) are generalized skewness quantities defined from the convergence field and its derivatives

(15)

(16)

(17)

The variance terms σn and the generalized skewness parameters S(n) can be expressed in terms of the polyspectra of the field. For the variances, it is (Munshi et al. 2011)

σn² = Σℓ [(2ℓ + 1)/(4π)] [ℓ(ℓ + 1)]^n 𝒞(ℓ) 𝒲²(ℓ) ,  (18)

where 𝒞(ℓ) is the lensing convergence power spectrum for sources at redshift zs, and 𝒲(ℓ) is the Fourier transform of the smoothing filter. The cosmological information is contained in 𝒞(ℓ), which is given by

(19)

with

(20)

where E(z) = H(z)/H0 is the dimensionless Hubble function, χ(z) is the comoving distance, r(z) is the comoving angular diameter distance, and PNL(k,  z) is the nonlinear matter power spectrum evaluated at k = ℓ/χ(z) because of the Limber approximation. From now on, we assume a spatially flat universe. We used a Gaussian filter to smooth the map,

𝒲(ℓ) = exp(−ℓ² σs²/2) ,  (21)

with σs the smoothing length. Generalized skewness quantities (that are connected with third-order moments) can be expressed as

(22)

where, adopting a compact notation, we obtain

(23)

with

(24)

and

where cp denotes cyclic permutation. In Eq. (23), the cosmological information is coded into the convergence bispectrum,

(25)

with BNL(k1,  k2,  k3,  z), the matter bispectrum, evaluated at ki = ℓi/χ(z) because of the Limber approximation. The contribution of each multipole to the sum in Eq. (23) is weighted by

(26)

with

(27)

and

(28)

The Wigner-3j symbols account for the fact that only triangular configurations (i.e., ℓ1 + ℓ2 + ℓ3 = 0) contribute to the sum.

4.2. Observable Minkowski functionals

As we described above, Eqs. (12)–(14) refer to the case of a noiseless convergence field. However, there are different reasons why they cannot be straightforwardly used to predict the MFs, which are measured on actual convergence maps. First, κ is not a directly observed quantity, but it is rather reconstructed from the shear data so that multiplicative and additive biases are present. Second, the field is also shifted from its true value because of the noise. In Vicinanza et al. (2019), we have addressed this problem by postulating that at the lowest order, these effects can be described as

κobs = (1 + mκ) κ + N ,  (29)

with mκ the multiplicative bias, and N the zero mean noise. Starting from Eq. (29) and assuming the noise is not correlated with the signal, we can propagate the effect of noise and bias on the variance of the field and its derivatives, and on the generalized skewness parameters. Finally, we have the following expressions for the observable MFs:

(30)

(31)

(32)

where, to shorten the notation, we introduced the variance ratios ℛi = σiN/σi, and defined the tilde skewness parameters as

(33)

(34)

(35)

with σ21 = σ2/σ1, and the label N denoting noise-related quantities. Equations (30)–(35) enable estimating the MFs of the observed convergence field in terms of the variances σn and generalized skewness parameters S(n) (with n = 0, 1, 2) of both the actual convergence field and the noise.

4.3. Validation and calibration

Equations (30)–(32) have been obtained under some assumptions, which, although reasonable, are nevertheless only approximations. It is therefore mandatory to validate them by fitting to the MFs measured in the simulated dataset. This test will also give us the fiducial values of the nuisance parameters entering them, which is why we refer to the full process as "calibration". Vicinanza et al. (2019) have successfully calibrated these relations against the MICEv2 catalog data. To this end, they modeled the functions ℛn and S̃(n) as power laws of the redshift and smoothing scale, summing up to a total of 13 nuisance parameters pnuis.

We reconsidered this problem here and studied the definition of these quantities to determine a way to reduce the dimensionality of pnuis without significantly degrading the overall fit quality. To this end, we first considered the variance ratios given by

(36)

where the label ref denotes a quantity evaluated at some arbitrarily chosen reference values of (θs,  z), which we fixed as (2′,  0.3). Based on their own definition, the two terms in square brackets have a predictable scaling when the cosmological model is given. As a consequence, we can reduce the number of nuisance parameters to only three: the values of the ratios at the reference point. This is different from Vicinanza et al. (2019), where we instead modeled the dependence on θs as a power law, thus adding three more parameters to fix the slopes.
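Under this scheme, evaluating ℛn anywhere requires only its reference value plus two theoretically predictable scalings; a sketch of the bookkeeping (argument names are ours):

```python
def variance_ratio(r_ref, s_noise, s_noise_ref, s_field, s_field_ref):
    """Eq. (36) bookkeeping: R_n(theta_s, z) follows from its fitted value
    r_ref at the reference point (2', 0.3), times the predictable scalings
    of the noise and field variances away from that point (all arguments
    are the corresponding sigma_n values)."""
    return r_ref * (s_noise / s_noise_ref) / (s_field / s_field_ref)
```

Only r_ref is a free nuisance parameter; the four variance arguments are computed from theory once the cosmology is fixed.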

We now consider the quantities S̃(n), starting from the case n = 0, which we can conveniently rearrange as follows:

(37)

where we used Eqs. (22)–(24) and defined

with

(38)

(39)

where ℬN(ℓ,  ℓ1,  ℓ2) is the noise bispectrum. It is worth noting that the numerator and denominator depend on the smoothing scale θs only through the same weight function 𝒲(ℓ,  ℓ1,  ℓ2), so that we can argue that their ratio is weakly dependent on it. As a working assumption, we therefore considered β0 to be independent of θs and redefined it as

We scaled β0 with respect to to have a reference dimensionless value, but this choice is arbitrary. As a consequence, there is no reason to expect to be a small number.

Equation (37) finally reads

(40)

Proceeding in a similar way, we also obtain

(41)

(42)

so that now all the quantities entering have a predictable dependence on (θs,  z), and three additional nuisance parameters remain: .

We finally have the following nuisance parameters:

which is definitely fewer than in Vicinanza et al. (2019); here pnuis is a 7- rather than 13-dimensional vector. This reduction was possible by fixing the way in which the variance ratios ℛi and the noise-skewness-related quantities scale with (θs,  z). We therefore need to validate this calibration approach by fitting to the mock dataset that we constructed from the simulated convergence maps at different depths.

We used a straightforward fitting procedure, that is, we minimized a pseudo-χ2 merit function, defined as

χ² = (Dobs − Dth)ᵀ Covobs⁻¹ (Dobs − Dth) ,  (43)

with Dobs and Dth the observed and theoretically predicted MF dataset, and Covobs the corresponding covariance matrix as determined from the simulated maps.
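This merit function is a standard quadratic form; a minimal sketch (function names are ours):

```python
import numpy as np

def pseudo_chi2(d_obs, d_th, cov):
    """Pseudo-chi^2 of Eq. (43): the residual between the observed and
    predicted MF data vectors, contracted with the inverse covariance.
    np.linalg.solve is used instead of an explicit inverse for numerical
    stability."""
    r = d_obs - d_th
    return float(r @ np.linalg.solve(cov, r))
```

In the actual fit, d_th would be the calibrated model of Eqs. (30)–(32) evaluated at a trial pnuis, and the minimization run over the nuisance parameter space.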

It is worthwhile stressing that the vector of nuisance parameters pnuis changes according to which dataset is considered. For instance, as shown by Eq. (30), if V0 alone were included, pnuis would reduce to . We therefore repeated the fit for each MF combination, with the consequence that the same nuisance parameter can have different fiducial values depending on which dataset is considered. Similarly, the errors on the parameters will be different, which will affect the estimate of the systematics covariance matrix we discuss below.

A cautionary remark is in order here. Compared to Paper I, we changed the calibration method in two major aspects. First, we made a single joint fit to the full dataset instead of first fitting to V0 alone and then to (V1, V2). Second, we now include the full covariance matrix Covobs in the χ2 function, hence taking care of the correlation among the components of the dataset. In Paper I, we only considered the diagonal elements because this procedure allowed us to better minimize the scatter of the residuals of single MFs. This is no longer the case with the revised calibration formulae we have introduced here, so that we prefer to adopt the present more statistically correct approach.

We performed the calibration for the different mock datasets by varying the limiting magnitude and the dataset used (i.e., whether we included only one MF or a combination of them). Compared to Vicinanza et al. (2019), the performance of the calibration is similar, although we note a small increase in the best-fit residuals when V0 is not used in combination with other MFs. A straightforward comparison is, however, not possible because of the radically different fitting procedure, the larger redshift range (up to 1.8 instead of 1.4), and a different source redshift distribution. We also note that a marked decrease in the best-fit residuals could be obtained if the lowest redshift bin, which lies at the edge of the redshift range recommended by the FLASK authors, were cut. Cutting MFs at z = 0.6 would reduce the number of observables, thus decreasing the overall constraining power of this probe. Future lensing surveys, in contrast, will not be affected by this problem, so that MFs at this low z will also be usable. We therefore preferred to retain these terms in the data vector at the cost of an increase in the rms of the best-fit residuals with respect to what will likely be achievable when fully realistic mock data are used for calibration. We therefore expect our results to err on the side of conservativeness.

The Markov chain Monte Carlo (MCMC) method we used to explore the nuisance parameter space allows us to sample the joint posterior, which we then used to propagate the errors on pnuis onto the MFs. We thus obtained a covariance matrix, which represents the uncertainties we would have on the MFs even if they had been measured with infinite precision. In a sense, this is the uncertainty coming from our imperfect theoretical modeling of the MFs and the lack of knowledge of the exact nuisance parameters. In other words, this is what we refer to as the systematics covariance matrix, which we denote as Covsys. We stress that Covsys depends on the fitting, so that it is different for each MF dataset.
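The propagation step can be sketched in a few lines: evaluate the calibrated model on each posterior sample and take the sample covariance of the predictions. `model` here is a hypothetical stand-in for Eqs. (30)–(32).

```python
import numpy as np

def systematics_cov(samples, model):
    """Build Cov_sys from MCMC posterior samples of the nuisance
    parameters: `model` maps a parameter vector to an MF data vector,
    and the spread of the predictions over the posterior gives the
    systematics covariance."""
    preds = np.array([model(p) for p in samples])
    d = preds - preds.mean(axis=0)
    return d.T @ d / (len(samples) - 1)

# toy check: a linear model propagates the parameter covariance exactly
rng = np.random.default_rng(3)
samples = rng.normal(size=(20000, 1))
cov_sys = systematics_cov(samples, lambda p: np.array([p[0], 2.0 * p[0]]))
```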

5. Fisher matrix forecasts

Equations (30)–(32) allow us to match theoretical and measured MFs, correcting for the overall effect of missing higher order terms, imperfect reconstruction from the shear field, and noise in the ellipticity data. As input, the cosmological model parameters

and the nuisance parameters

need to be specified; we changed to logarithmic units for one of these parameters because it may vary over an order of magnitude. Fitting simulated datasets that mimic the actual data as closely as possible can help constrain pnuis, but it is safer to leave them free to vary in order to account for possible missing ingredients in the simulations. As a consequence, we do not expect MFs alone to constrain the full set of parameters, so that in the following we always consider the joint use of MFs and standard cosmic shear tomography, using the Fisher matrix formalism (Tegmark et al. 1997) to make forecasts.

This analysis was previously presented in Vicinanza et al. (2019) for a survey mimicking the redshift distribution of the MICEv2 catalog and using a larger number of nuisance parameters. We address here a complementary issue. Planned future surveys will typically cover a wide area to a relatively shallow limiting magnitude and a narrow region to a deeper limiting magnitude. We therefore investigated how the survey performance improves when shear tomography is combined with MFs measured on the wide area and MFs measured in the deeper region. Assuming the probes to be independent, the total Fisher matrix reads

Ftot = FWL + FMFW + FMFD + P, (44)

where FWL is the Fisher matrix for shear tomography on the full survey area, FMFW and FMFD are those for MFs from the wide and shallow and the deep and narrow survey regions, and P is the prior matrix. We placed priors on the nuisance parameters only, so that P is a diagonal matrix with null values in the rows corresponding to the cosmological parameters and nonzero entries, set by the relative prior εP, in the rows referring to the nuisance parameters. Varying εP allows us to investigate how accurately the nuisance parameters should be known in order to improve the constraints on the cosmological parameters by a given factor. We quantified this by considering the FoM alone because this is the quantity of interest when distinguishing among competing dark energy models.
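As an illustration, Eq. (44) can be assembled as follows; the random positive-definite stand-ins for the three Fisher matrices, the parameter counts, and the Gaussian prior width σi = εP pi,fid are all assumptions made for this sketch:

```python
import numpy as np

n_cosmo, n_nuis = 7, 7                 # illustrative parameter counts
n = n_cosmo + n_nuis
rng = np.random.default_rng(1)

def random_fisher(n, rng):
    # Random symmetric positive-definite stand-in for a Fisher matrix.
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)

F_WL, F_MFW, F_MFD = (random_fisher(n, rng) for _ in range(3))

# Prior matrix: null rows for the cosmological parameters, 1/sigma_i^2 on
# the diagonal for the nuisance parameters, with sigma_i = eps_P * p_fid_i.
eps_P = 0.10
p_fid = np.abs(rng.normal(1.0, 0.3, size=n_nuis))
P = np.zeros((n, n))
P[n_cosmo:, n_cosmo:] = np.diag(1.0 / (eps_P * p_fid) ** 2)

F_tot = F_WL + F_MFW + F_MFD + P       # Eq. (44)
```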

A caveat is in order about Eq. (44). By summing the Fisher matrices from the different probes, we implicitly assumed that the three probes are uncorrelated. We therefore evaluated the MFs in the wide and deep areas separately so that they do not share any data and are therefore independent. In contrast, the shear tomography was evaluated over the full survey area, so that the same data were used for tomography and MFs. It is worth noting, however, that the two probes are radically different: shear tomography probes the local properties of the shear field, while MFs probe the topological properties of the full convergence map. Moreover, they are affected by different systematic effects and are retrieved from different measurement pipelines, so that it might be argued that possible correlations (if any) are washed out by the estimation procedure. We therefore rely on Eq. (44), but caution that a clear demonstration of its validity is still a pending issue.

We refer to Vicinanza et al. (2019) for the full set of formulae to compute the MFs Fisher matrix, but we stress here two remarkable differences concerning the inverse covariance matrix. This is still estimated as (Hartlap et al. 2007)

Cov⁻¹ = [(Nf − Nd − 2)/(Nf − 1)] Ĉov⁻¹, (45)

with the multiplicative factor correcting for the finite number of realizations Nf used to estimate the covariance of the Nd dimensional data vector. In Vicinanza et al. (2019), we set Nf = A/25 with A = 3500 deg2 the total area cut from the MICEv2 simulated field. This limited the cases we could consider because the multiplicative term is required to be positive. Here, in contrast, through the use of FLASK we simulated a full-sky survey that, after cuts ensuring well-separated patches, provided Nf = 1108 subfields. This order-of-magnitude increase in Nf, together with the smaller number of nuisance parameters (hence the smaller Nd), makes the multiplicative factor close to unity for all the cases of interest.
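The Hartlap et al. (2007) correction in Eq. (45) amounts to a simple rescaling of the inverted sample covariance; a minimal sketch with illustrative values of Nd and Nf:

```python
import numpy as np

def hartlap_inverse(cov_hat, n_f):
    # Debiased inverse of a covariance matrix estimated from n_f
    # realizations (Hartlap et al. 2007): the raw inverse is rescaled
    # by (n_f - n_d - 2) / (n_f - 1), n_d being the data-vector size.
    n_d = cov_hat.shape[0]
    alpha = (n_f - n_d - 2) / (n_f - 1)
    if alpha <= 0:
        raise ValueError("need n_f > n_d + 2 for a positive correction")
    return alpha * np.linalg.inv(cov_hat)

# With n_f = 1108 subfields and a modest data vector, alpha is close to 1.
n_d, n_f = 30, 1108
inv_cov = hartlap_inverse(np.eye(n_d), n_f)
alpha = (n_f - n_d - 2) / (n_f - 1)
```

For comparison, with Nf = 3500/25 = 140 as in Vicinanza et al. (2019), the same illustrative Nd = 30 would give a factor of ≈0.78, and the positivity requirement Nf > Nd + 2 is quickly violated for larger data vectors.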

Because the systematics covariance matrix is computed by propagating the errors on the nuisance parameters, setting a prior on pnuis also affects Covsys. In order to speed up the estimate, we first computed MCMC samples for each MFs dataset without any prior on pnuis. When a prior was added, we performed importance sampling on the chains with suitably defined Gaussian weights, thus recomputing the systematics covariance matrix entering Eq. (45). The stronger the prior, the smaller the contribution of Covsys. Care must be taken, however, to avoid the unrealistic case of Covsys reducing to the null matrix, which is impossible because of the approximate nature of our calibration formulae. We verified that as long as the prior is no smaller than ∼5%, Covsys remains larger than Covobs, which is what we expect given the large survey area we considered.
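The reweighting step can be sketched as follows for a single nuisance parameter; the chain here is a Gaussian toy stand-in for the prior-free MCMC samples:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for prior-free MCMC samples of one nuisance parameter.
chain = rng.normal(loc=1.0, scale=0.6, size=20000)

# Gaussian prior centered on the fiducial value with relative width eps_P.
p_fid, eps_P = 1.0, 0.10
sigma = eps_P * p_fid
w = np.exp(-0.5 * ((chain - p_fid) / sigma) ** 2)
w /= w.sum()

# The weighted (posterior) variance shrinks, and Cov_sys with it.
var_noprior = chain.var()
mean_prior = np.sum(w * chain)
var_prior = np.sum(w * (chain - mean_prior) ** 2)
```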

5.1. Improving the figure of merit

Adding MFs to shear tomography increases the number of observables, but also the number of nuisance parameters. Qualitatively, the larger the number of probes, the stronger the constraints; on the other hand, the larger the number of parameters, the weaker the constraints. Moreover, as shown in Fig. 5, there are strong correlations among MFs of different order at the same (θs,  z), so it is worth asking whether a single MF is enough to improve the overall FoM. As a first test, we therefore investigated the ratio FoM(γ + Vn)/FoM(γ) between the FoM from shear + MFs and shear only as a function of the prior on the MF nuisance parameters. Hereafter, we consider three different shear-only forecasts, which differ in the maximum multipole used: ℓmax = (1500,  3000,  5000) for the pessimistic, intermediate, and optimistic scenarios.
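A common DETF-like definition of the FoM, which we assume here, is the inverse square root of the determinant of the marginalized (w0, wa) covariance block; a minimal sketch with an arbitrary toy Fisher matrix:

```python
import numpy as np

def dark_energy_fom(F, idx_w0, idx_wa):
    # Invert the Fisher matrix to marginalize over all other parameters,
    # then take 1/sqrt(det) of the (w0, wa) covariance block.
    cov = np.linalg.inv(F)
    block = cov[np.ix_([idx_w0, idx_wa], [idx_w0, idx_wa])]
    return 1.0 / np.sqrt(np.linalg.det(block))

# Toy 3-parameter Fisher matrix with (w0, wa) in the first two slots.
F = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
fom = dark_energy_fom(F, 0, 1)

# Adding an independent probe (any positive-definite Fisher term, as in
# Eq. (44)) can only increase the FoM.
fom_joint = dark_energy_fom(F + np.eye(3), 0, 1)
```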

We considered a 15 000 deg2 survey with a limiting magnitude magW = 24.5, which includes a 40 deg2 region observed at a deeper limiting magnitude magD. As a general result, we find that adding V0 alone to the shear does not improve the FoM at all, regardless of the prior set on the nuisance parameters and of the limiting magnitude of the deep field. The improvement is less than 10−4, so we do not show the scaling of the FoM ratio with εP. This is somewhat expected because V0 only depends on the variance σ0 of the convergence field. Because σ0 is a second-order quantity, it adds no further information with respect to what is already probed by the more detailed second-order statistics of the shear tomography.

This is not the case for the higher order MFs (V1,  V2), which probe the non-Gaussianity of the convergence field. Because the number of nuisance parameters increases by only two from V1 to V2, higher order probes are expected to have a greater effect on the FoM. This is indeed what the comparison of the top and bottom panels in Fig. 7 shows. It might naively be thought that the FoM can be boosted strongly by the addition of a single MF, either V1 or V2, given that the curves in the central and right panels reach ∼20 − 30%. Unfortunately, these values are obtained only when εP ∼ 5%, while there is a steep decline in the range (5, 15)% followed by a shallow convergence toward a unit FoM ratio. Considering that when no prior is used, the nuisance parameters are determined by the calibration procedure with a roughly 60% error, it is the part of the curve with εP >  10 − 20% that should be examined. In this regime, the FoM ratio improves by hardly more than ∼5%. In particular, there is no appreciable dependence on the magD value, likely because of the small contribution of FMFD to the sum in Eq. (44) when a single MF is used.

Fig. 7. Top: FoM ratio as a function of the prior εP on the MFs nuisance parameters for different values of the limiting magnitude magD of the deep survey when V1 alone is added to the shear tomography, with ℓmax = (1500, 3000, 5000) from left to right. Bottom: same as in the top panels, but for V2 alone added to the shear. In each panel the curves for different magD are so closely superimposed that they cannot be distinguished.

We now discuss Fig. 8, which shows the improvement in the FoM when two MFs are added to the shear tomography. We again find that the FoM ratio can reach surprisingly high values for εP <  10%. However, a significant improvement can still be obtained even for more realistic values of the priors, with the FoM ratio remaining above 1.05 for εP values as high as ∼40% for V12. Interestingly, the trend of the FoM ratio with εP is roughly the same regardless of which ℓmax value is used for the shear-only FoM; only the scale of the y-axis changes between panels. This suggests that MFs can be used to partially compensate for a cut in ℓmax. Such a shortening of the ℓ range is of interest because the larger ℓ is, the more shear tomography is pushed into the highly uncertain nonlinear regime, so that a smaller ℓmax is a safer option to avoid theoretical errors from inaccurate modeling of nonlinearities.

Fig. 8. Same as Fig. 7, but adding two MFs (V01, V02, and V12 in the left, center, and right panels) to the shear tomography. Again, the dependence on magD is hard to appreciate, so that in most panels it is impossible to see more than one curve; the only exception is the line referring to magD = 26.0 in the bottom panels. We plot a smaller εP range here to better show the regime in which adding MFs to the shear tomography increases the FoM by a significant amount.

We find that a ∼20% prior is enough to halve the FoM decrement incurred by going from ℓmax = 5000 to the safer ℓmax = 3000. Which goal (better modeling of nonlinearities versus tighter priors on the MFs nuisance parameters) is easier to reach remains a matter of open investigation.

Although the contribution of the MFs is now more appreciable, we still find no significant dependence of the results on magD: the curves in Fig. 8 remain superimposed and cannot be distinguished. The only exception is magD = 26.0 in the V1 + V2 configuration. This could be related to some peculiarity in the calibration for this particular combination, or it might be an artifact of the importance sampling in the small-εP regime. We cannot determine which hypothesis is correct, but we note that the difference only arises in an unrealistic prior regime, so we neglected it.

This is no longer the case in Fig. 9, where we now use all three MFs for a joint analysis with shear tomography (for the three different ℓmax values). The only discrepant case is magD = 24.5 (green curve), which actually refers to a configuration without a deep area at all because the wide and the deep field have the same limiting magnitude. We again find that MFs can compensate for the FoM decrement due to the use of a lower ℓmax. The prior required to obtain the same FoM as in the optimistic shear-only scenario can also be slightly relaxed.

Fig. 9. Same as Fig. 7, but adding all three MFs to the shear tomography, with ℓmax = (1500, 3000, 5000) from left to right. Curves for different magD values are again superimposed, with the only difference that the green curves refer to magD = 24.5.

Still more interestingly, for εP = 15%, we obtain a combined FoM that is only ≃6% lower than the optimistic shear-only FoM, thus partially compensating for the 19% decrement we observed due to cutting at ℓmax = 3000 rather than ℓmax = 5000.

As a general result, we find almost no dependence of the FoM ratio on the limiting magnitude magD of the deep region. This is somewhat surprising because it might suggest that there is no motivation for going deeper in magnitude. It is, however, a consequence of the approach adopted in this subsection, where we combined a wide survey with a deeper one covering an area three orders of magnitude smaller. The advantage of going deeper becomes more evident with the complementary approach explored in the next subsection.

5.2. Optimizing an ideal survey

In the previous subsection, we considered a survey with total area Atot = 15 000 deg2, including a smaller portion Adeep = 40 deg2 imaged at a deeper magnitude magD. This choice matches the survey setup of the Euclid mission. We here reverse the point of view and investigate how the shear + MFs FoM changes as a function of (Atot,  Adeep). We held the total survey duration fixed, so that increasing Adeep is possible only at the cost of reducing the total survey area by a factor that depends on the chosen limiting magnitude magD (keeping magW = 24.5 fixed for the wide area). It is important to stress that we now scale the results with respect to a reference FoM value, namely the shear tomography only FoM with ℓmax = 3000 and Atot = 15 000 deg2. We only considered the case in which all three MFs are used because it provides the largest increase in the FoM.
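The trade-off between Atot and Adeep at fixed survey duration can be sketched under a simple assumption that is not taken from the paper: that the exposure time per unit area needed to gain Δm magnitudes of point-source depth scales as 10^(0.8 Δm), a common rule of thumb. All numbers besides magW = 24.5 and the reference area are illustrative:

```python
def wide_area(a_tot_ref, a_deep, mag_d, mag_w=24.5):
    # Wide area left at fixed total observing time, ASSUMING the time per
    # unit area to go dm magnitudes deeper scales as 10**(0.8 * dm).
    # This scaling is an illustrative assumption, not a paper result.
    cost = 10.0 ** (0.8 * (mag_d - mag_w))  # relative cost of deep imaging
    return a_tot_ref - a_deep * cost

a_ref = 15000.0                             # deg^2, reference wide survey
areas = {mag_d: wide_area(a_ref, 80.0, mag_d)
         for mag_d in (25.0, 25.5, 26.0, 26.5)}
```

Under this toy scaling, each extra half magnitude of depth makes the deep field a factor 10^0.4 ≈ 2.5 more expensive in wide-area terms, illustrating why deep area comes at a steep wide-area price.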

This setup gives the curves in Fig. 10, where we show the FoM ratio as a function of the prior on the MF nuisance parameters for magD from 25.0 to 26.5 in steps of 0.5 and three different Adeep values. The curves appear jagged because they were obtained by interpolating over a grid in the (εP, Adeep) space; in order to save time, we did not use a grid fine enough to completely remove the numerical noise. As a first remarkable result, we note that the FoM ratio may also drop below unity, that is, adding MFs can reduce the overall FoM instead of increasing it. As counterintuitive as this may appear, it is easily explained when we recall that both Adeep and Atot change. Because the shear-only FoM scales linearly with Atot, going deeper over a large area can decrease the FWL term in Eq. (44) by an amount that is not compensated for by the increase in the FMFD term. In these cases the total FoM is smaller, which favors a wide and shallow rather than a deep and narrow survey. This is also confirmed by the fact that at fixed εP, the FoM ratio typically decreases with magD, regardless of the adopted Adeep value.

Fig. 10. Left: FoM ratio as a function of the prior εP on the MFs nuisance parameters for the intermediate shear-only scenario and different values of (magD,  Atot), using all three MFs and fixing Adeep = 40 deg2. Center: same as in the left panel, but with Adeep = 80 deg2. Right: same as in the left panel, but with Adeep = 160 deg2. In each panel, blue, magenta, purple, and red lines refer to magD = (25.0, 25.5, 26.0, 26.5).

It is worth noting, however, that increasing the region imaged at a deeper magnitude can be desirable for other reasons, indirectly related to the FoM validation (e.g., a better control of systematic errors, making the forecasts more reliable) or to other aspects of the survey (such as the legacy outcome). It is therefore of interest to investigate whether (Atot,  Adeep) can be changed without affecting the total FoM. We therefore solved FoM(Atot,  Adeep) = FoMref with respect to Adeep for a given εP, fixing the total survey area in such a way that the survey duration remains unchanged. The results are shown in Fig. 11 for different values of magD.

Fig. 11. Left: deep survey area needed to obtain the same FoM as the reference shear tomography only survey, as a function of the prior on the MFs nuisance parameters for different limiting magnitudes. Right: same as in the left panel, but for the total survey area.

The curves in this plot may be used to optimize an ideal survey by changing the areas of the deep and wide regions while holding the total duration fixed. The answer depends on how well the MFs nuisance parameters are known. For instance, a 20% prior on pnuis allows us to obtain the reference FoM either with a survey of total area Atot = 13 466 deg2 and Adeep = 437 deg2 at magD = 25.0, or by reducing Adeep to 36 deg2 and Atot to 13 544 deg2 but with a significantly deeper magnitude magD = 26.5 (which may dramatically increase the legacy products).

Alternatively, Fig. 11 may be used to set requirements on εP that should be fulfilled when the deep area at a given magD is to be increased. For instance, suppose we double the Euclid deep area by setting Adeep = 80 deg2. In order to preserve the survey duration, the total area must be reduced, and the prior εP must be tightened accordingly for magD = {25.0,  25.5,  26.0,  26.5} if the same shear tomography + MFs FoM is to be preserved; the required values can be read off Fig. 11. Overall, Fig. 11 shows that while it is indeed possible to reduce Atot to enlarge Adeep, the price to pay can be quite demanding: Atot quickly returns to its reference value as εP increases. A detailed analysis of the accuracy to which the MFs nuisance parameters can be constrained from simulations is therefore mandatory, but it is beyond the aim of this paper.

5.3. Uncertainty on the FoM ratio

The results presented in the two previous subsections implicitly assume that the FoM is computed without errors, so that the ratio between the FoMs with and without MFs can reliably be used to compare different setups. This assumption is motivated by the consideration that the FoM is estimated from the Fisher matrix, which is a theoretical quantity, and is therefore known with infinite accuracy provided all the input ingredients are correct. The question remains, however, of what happens if, for some unspecified reason, the Fisher matrix elements Fαβ are incorrectly estimated. Of course, Fαβ can be radically different if one changes the assumed cosmological model or the observational setup, but such deviations should not be considered uncertainties to be propagated on Fαβ; rather, the Fisher matrix would then refer to a different experiment and/or model, so that it must deviate from the reference case. As uncertainties we instead considered numerical errors or small discrepancies between the assumed survey setup and the final actual setup, and we studied how these propagate to the FoM ratio considered so far.

To this end, we assumed that a given Fisher matrix element Fαα has been estimated with an accuracy δFαα. It has been shown (Euclid Collaboration 2019) that the FoM is then known with an accuracy given by

(46)

with (α, β) setting the column and row corresponding to the DE parameters (w0, wa). In our case, the total Fisher matrix is the sum of three terms, Ftot = FWL + FMFW + FMFD, which refer, respectively, to cosmic shear, MFs on the wide area, and MFs on the deep area. Considering the three probes as independent, a naive propagation of errors gives

(47)

We can then use the other naive relation

(48)

to obtain an expression for εαβ = δFαβ/Fαβ, and plug the result and Eq. (48) into Eq. (46) to derive the relative error on the FoM from the joint use of WL and MFs. Setting the MFs terms to zero gives the error on the FoM from WL only. We can finally write

(49)

where we defined ℛFoM = FoM(γ + Vn)/FoM(γ) as the ratio between the FoM from WL + MFs and from WL only. The relative errors δFoM(X)/FoM(X) can be computed as described above, leading to a lengthy but simple algebraic formula (not reported here for the sake of brevity) that provides the error on the FoM ratio as a function of the error on the WL-only FoM and the relative uncertainties (εαα, εαβ, εββ) of the MFW and MFD Fisher matrices.
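If the two FoM errors entering Eq. (49) are treated as independent, the error on the ratio reduces to the familiar quadrature sum of relative errors; a minimal sketch of this final step (the independence assumption, like the naive propagation above, is an approximation):

```python
import numpy as np

def ratio_error(r, rel_err_num, rel_err_den):
    # Relative errors of numerator and denominator added in quadrature,
    # then scaled by the ratio R = FoM(gamma + Vn) / FoM(gamma) itself.
    return r * np.hypot(rel_err_num, rel_err_den)

# Example: R ~ 1.15 with 8% (WL + MFs) and 6.4% (WL only) FoM errors.
dr = ratio_error(1.15, 0.08, 0.064)
```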

In Fig. 12 we plot δℛFoM/ℛFoM for the optimistic cosmic shear scenario combined with MFs from the wide and deep areas (with maglim = 25.5 for the deep region), also adding a 10% prior on the MFs nuisance parameters. In order to reduce the number of parameters, we took the relative uncertainty on the WL Fisher matrix to be the same for each (α, β) combination, and we show the results as a function of the relative uncertainty on the MFW Fisher matrix elements, assuming that the errors on the other elements of the MFW and MFD Fisher matrices are the same. Finally, for the MFs we used the (V0, V1, V2) data. Dropping these assumptions does not qualitatively change the results, with only a minor quantitative effect.

Fig. 12. Error on the FoM ratio ℛFoM as a function of the relative uncertainty on the MFW Fisher matrix elements. We set a 10% prior on the MFs nuisance parameters and considered the optimistic scenario for shear only. Blue, purple, and red lines refer to WL Fisher matrix uncertainties giving δFoM(WL)/FoM(WL) = (0.6, 2.2, 6.4)%, respectively.

This figure offers a qualitative way to set a requirement on the accuracy with which the MFs Fisher matrix elements have to be determined so that the estimated value of ℛFoM is reliable. For instance, the rightmost panel in Fig. 9 shows ℛFoM ≃ 1.15 for the adopted prior on the MFs. If we require that ℛFoM − δℛFoM ≳ 1 (i.e., that the FoM improvement is significant at 1σ), we need δℛFoM/ℛFoM <  13%. Figure 12 then shows the accuracy on the MFW Fisher matrix elements needed to achieve this. Although a detailed propagation of the different errors on the input quantities has not been done, the margin is large enough for us to be confident that the requirement can be fulfilled, making our estimates of the FoM ratios quite reliable.

6. Conclusions

The larger sample size and higher data quality promised by Stage IV lensing surveys make it possible to go beyond second-order statistics in probing the properties of the convergence field. Standard second-order probes, such as the shear tomography power spectrum and the two-point correlation function, only trace the Gaussian properties of the field, while higher orders allow us to probe its non-Gaussianity, opening the way to a better description of the field and hence to stronger constraints on the underlying cosmological model. MFs stand out as promising candidates because they depend on the generalized skewness parameters that probe the higher order statistical properties of the field and its first derivatives. In Vicinanza et al. (2019), we matched the theoretically predicted MFs, based on a perturbative series expansion, to the MFs actually measured on a convergence map reconstructed from noisy shear data.

The present work differs from our previous paper in a number of aspects, which makes a straightforward comparison not quite possible. First, we developed a novel calibration strategy that allowed us to reduce the number of nuisance parameters. To this end, we derived, under reasonable assumptions, the scaling of the noise-to-signal variance ratios and of the functions related to the skewness of the noise field. This derivation enabled us to almost halve the dimension of the nuisance parameter vector pnuis, from the original 13 to 7. This significant decrease does not spoil the quality of the match between theory and data, the rms of the best-fit residuals being almost the same as for the original recipe. In order to validate the scaling assumptions and determine fiducial values of the nuisance parameters, we performed a joint fit to the full MF dataset, thus taking into account the covariance among the MFs. This differs from Paper I, where we considered only the diagonal elements of the covariance matrix. This statistically more correct approach also led us to change the estimate of the systematics covariance matrix Covsys, which is now obtained by propagating the uncertainties in the determination of the nuisance parameters to the final estimate of the theoretical MFs. As a consequence, if a prior is added on pnuis, Covsys changes accordingly, which reduces its effect on the overall error budget, as expected. As a further improvement, we also validated the calibration procedure against data with a different source redshift distribution and MFs S/N ratio, considering datasets at varying limiting magnitude maglim.

Two points still remain to be addressed. First, the validation was carried out on lognormal simulations generated with FLASK for a fixed set of cosmological parameters. Although the fiducial values used here differ from those in Vicinanza et al. (2019), it is critical to verify that the proposed calibration procedure still holds in radically different cosmologies. By this we do not mean that the nuisance parameters should be the same, but that the set of Eqs. (30)–(32) still allows us to match theory and data without dramatically increasing the rms scatter of the best-fit residuals, because these residuals enter the estimate of the total covariance matrix. FLASK is an ideal tool for this analysis because it allows us to quickly generate convergence maps, taking as input only the matter power spectrum of the given model. We therefore plan to investigate this question by also varying the number of maps and the noise properties. As a further step toward realistic mocks, we also plan to change the angular selection function in order to investigate the effect of the mask on the MFs measurement and the validity of the calibration procedure in this circumstance. The effect of masking cannot be framed within the derivation of Eqs. (30)–(32), so that ad hoc corrections might be required.

As a second step forward with respect to the first presentation of our approach to MFs in Vicinanza et al. (2019), we here considered the more realistic case of a wide-area survey imaged at a limiting magnitude magW that contains a deep and narrow region with a fainter limiting magnitude magD. A joint analysis of shear tomography and MFs (with contributions from both the wide and shallow and the deep and narrow areas) may boost the total FoM. In particular, this allows us to reduce the maximum multipole ℓmax of the shear tomography, the MFs contribution partially compensating for the loss in the shear-only FoM. Although we carried out this analysis for a Euclid-like survey, we also derived the requirements that should be set on the accuracy to which the MFs nuisance parameters have to be known in order to obtain the same FoM as the reference survey for different values of the deep-region area. When the survey duration is fixed, an increase of Adeep comes at the cost of reducing Atot. MFs can then compensate for the loss in FoM, opening the way to a different setup, which can help to better control systematics and augment side products of great interest for the legacy science.

These results are interesting in themselves, but should be taken “cum grano salis”. First, we have stressed that MFs complement and supplement shear tomography only if severe constraints on the nuisance parameters are available. Whether pnuis can indeed be constrained to the required accuracy is a matter of open investigation. To this end, one needs to investigate how the error on the calibration procedure scales with the number of mock datasets and, moreover, how the nuisance parameter accuracy scales with the noise properties, which will eventually set requirements on this quantity as well. We plan to address this question in a forthcoming work relying on FLASK data under different cosmological scenarios, also checking whether the full method works in all possible configurations (cosmology, noise, number of mock datasets, etc.).

Another question to be answered is what is still missing in our framework. First, we have argued that the use of MFs enables a shortening of the shear tomography multipole range, which then depends less strongly on an accurate modeling of the matter power spectrum in the highly nonlinear regime. However, this implicitly assumes that MFs themselves are less sensitive to nonlinearities. Whether this is indeed the case is an open question; the hardest quantity to model is the matter bispectrum. This quantity, however, typically enters through a summation weighted by the product of three exponential functions in ℓ, so that high-ℓ terms are strongly suppressed, likely making MFs less sensitive to the exact nonlinear recipe and to the effect of baryons. Verifying this will be the subject of a forthcoming publication, where we will test whether the predicted MFs change when they are evaluated for the same cosmology but with different approaches to modeling the effect of nonlinearities and baryons on the matter power spectrum and bispectrum.

A final missing ingredient is the intrinsic alignment (IA), which in the weak regime adds linearly to the lensing shear, so that the reconstructed convergence field is a biased representation of the actual field. It is hard to assess qualitatively whether this affects the MF estimate. On one hand, IA quickly becomes subdominant at high redshift, so that a possible solution might be to cut the redshift range over which MFs are measured. Moreover, IA is a local effect that should not alter the global topology of the map, hence again not affect MFs. On the other hand, IA increases the correlation among close redshift bins and therefore might also increase the correlation among MFs at different z, which would decrease the MFs constraining power. Moreover, IA may act as an additional noise with its own properties (variance and generalized skewness), thus spoiling the accuracy of the matching procedure between theory and data developed here. Although lensing simulations including the effect of IA are not available at the moment, it might be investigated whether IA can be included in FLASK using the option of generating the convergence field directly from a tomographic spectrum that includes IA.

To summarize, the present paper represents the second step along a path toward making MFs a common tool to be added to the standard second-order shear statistics. As hard as the journey may be, we are confident that the final goal will be rewarding enough to compensate for all the efforts to reach it.

Acknowledgments

CP and VFC are funded by Italian Space Agency (ASI) through contract Euclid – IC (I/031/10/0) and acknowledge financial contribution from the agreement ASI/INAF/I/023/12/0. We acknowledge the support from the grant MIUR PRIN 2015 Cosmology and Fundamental Physics: illuminating the Dark Universe with Euclid.

References

  1. Adler, R. J. 1981, The Geometry of Random Fields (Chichester, UK: Wiley) [Google Scholar]
  2. Bartelmann, M., & Maturi, M. 2017, Scholarpedia, 12, 32440 [NASA ADS] [CrossRef] [Google Scholar]
  3. Blas, D., Lesgourgues, J., & Tram, T. 2011, JCAP, 2011, 034 [NASA ADS] [CrossRef] [Google Scholar]
  4. Coe, D., Benítez, N., Sánchez, S. F., et al. 2006, AJ, 132, 926 [NASA ADS] [CrossRef] [Google Scholar]
  5. Dio, E. D., Montanari, F., Lesgourgues, J., & Durrer, R. 2013, JCAP, 2013, 044 [NASA ADS] [CrossRef] [Google Scholar]
  6. Eriksen, H. K., Novikov, D. I., Lilje, P. B., Banday, A. J., & Górski, K. M. 2004, ApJ, 612, 64
  7. Euclid Collaboration (Blanchard, A., et al.) 2019, Euclid Preparation: VII. Forecast Validation for Euclid Cosmological Probes
  8. Green, J., Schechter, P., Baltay, C., et al. 2012, ArXiv e-prints [arXiv:1208.4012]
  9. Hamana, T., Takada, M., & Yoshida, N. 2004, MNRAS, 350, 893
  10. Hartlap, J., Simon, P., & Schneider, P. 2007, A&A, 464, 399
  11. Hikage, C., Matsubara, T., Coles, P., et al. 2008, MNRAS, 389, 1439
  12. Hoekstra, H., Viola, M., & Herbonnet, R. 2017, MNRAS, 468, 3295
  13. Jullo, E., Pires, S., Jauzac, M., & Kneib, J.-P. 2013, MNRAS, 437, 3969
  14. Kaiser, N., & Squires, G. 1993, ApJ, 404, 441
  15. Kilbinger, M. 2015, Rep. Prog. Phys., 78, 086901
  16. Komatsu, E., Kogut, A., Nolta, M. R., et al. 2003, ApJS, 148, 119
  17. Kratochvil, J. M., Lim, E. A., Wang, S., et al. 2011, in American Astronomical Society Meeting Abstracts, BAAS, 43, 225.02
  18. Laureijs, R., Amiaux, J., Arduini, S., et al. 2011, ArXiv e-prints [arXiv:1110.3193]
  19. LSST Science Collaboration (Abell, P. A., et al.) 2009, ArXiv e-prints [arXiv:0912.0201]
  20. Matsubara, T. 2010, Phys. Rev. D, 81, 083505
  21. Matsubara, T., & Jain, B. 2001, ApJ, 552, L89
  22. Munshi, D., Valageas, P., van Waerbeke, L., & Heavens, A. 2008, Phys. Rep., 462, 67
  23. Munshi, D., van Waerbeke, L., Smidt, J., & Coles, P. 2011, MNRAS, 419, 536
  24. Petri, A., Haiman, Z., Hui, L., May, M., & Kratochvil, J. M. 2013, Phys. Rev. D, 88, 123002
  25. Pires, S., Starck, J.-L., Amara, A., et al. 2009, MNRAS, 395, 1265
  26. Pratten, G., & Munshi, D. 2012, MNRAS, 423, 3209
  27. Rix, H.-W., Barden, M., Beckwith, S. V. W., et al. 2004, ApJS, 152, 163
  28. Sato, J., Takada, M., Jing, Y. P., & Futamase, T. 2001, ApJ, 551, L5
  29. Shirasaki, M., & Yoshida, N. 2014, ApJ, 786, 43
  30. Smith, R. E., Peacock, J. A., Jenkins, A., et al. 2003, MNRAS, 341, 1311
  31. Takahashi, R., Sato, M., Nishimichi, T., Taruya, A., & Oguri, M. 2012, ApJ, 761, 152
  32. Taruya, A., Takada, M., Hamana, T., Kayo, I., & Futamase, T. 2002, ApJ, 571, 638
  33. Tegmark, M., Taylor, A. N., & Heavens, A. F. 1997, ApJ, 480, 22
  34. Tomita, H. 1986, Prog. Theor. Phys., 76, 952
  35. Vicinanza, M., Cardone, V. F., Maoli, R., et al. 2019, Phys. Rev. D, 99, 043534
  36. Xavier, H. S., Abdalla, F. B., & Joachimi, B. 2016, MNRAS, 459, 3693

All Tables

Table 1.

Limiting magnitude, total source number density (in gal arcmin⁻²), and median redshift.

All Figures

Fig. 1.

Redshift selection functions used as input for FLASK to simulate different survey depths corresponding to different limiting magnitudes.

Fig. 2.

Top: original convergence map at z = 0.9 for a limiting magnitude maglim = 24.5, simulated with FLASK. Bottom: same map but reconstructed with the KS method.

Fig. 3.

Numerical (blue dots) vs. expected (solid black line) MFs from 500 realizations of a Gaussian random field.

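For reference, the lowest-order MF of a Gaussian random field has a closed form: V0(ν), the area fraction above threshold ν (in units of the field standard deviation), equals erfc(ν/√2)/2 (Tomita 1986). A minimal numerical check of the kind shown in this figure, with illustrative function names and a white-noise map standing in for a smoothed convergence field:

```python
import numpy as np
from math import erfc, sqrt

def v0_measured(field, nus):
    """Area fraction of the standardized field above each threshold nu."""
    x = (field - field.mean()) / field.std()
    return np.array([(x > nu).mean() for nu in nus])

def v0_gaussian(nus):
    """Analytic V0 for a Gaussian random field (Tomita 1986)."""
    return np.array([0.5 * erfc(nu / sqrt(2.0)) for nu in nus])
```

V1 and V2 involve gradients and curvature of the field and are usually measured with finite differences; they are omitted here for brevity.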
Fig. 4.

Left: MFs as measured from maps with maglim = 25.5 as a function of the threshold ν for different values of the redshift bin center z and fixed smoothing scale (θs = 6′). Right: same as in the left panel, but for different values of the smoothing scale θs and fixed redshift bin center (z = 1.2).

Fig. 5.

Normalized covariance matrix for the MF data vector D defined in the text for the case with maglim = 25.5.

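A normalized covariance (i.e., correlation) matrix of this kind can be estimated from the map realizations as sketched below; for completeness the sketch also includes the Hartlap et al. (2007) debiasing factor that applies when the covariance is inverted. Function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def normalized_covariance(D):
    """Correlation matrix of a data vector sampled over realizations.
    D has shape (n_realizations, n_data); names are illustrative."""
    C = np.cov(D, rowvar=False)          # sample covariance
    sig = np.sqrt(np.diag(C))
    return C / np.outer(sig, sig)        # C_ij / (sigma_i sigma_j)

def hartlap_inverse(C, n_real):
    """Debiased precision matrix estimate (Hartlap et al. 2007):
    multiply the naive inverse by (n - p - 2)/(n - 1)."""
    p = C.shape[0]
    return (n_real - p - 2) / (n_real - 1.0) * np.linalg.inv(C)
```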
Fig. 6.

Left: MFs for ν = 2 and different values of the limiting magnitude maglim as a function of the redshift bin center z, with fixed smoothing scale (θs = 6′). Right: same as in the left panel, but as a function of the smoothing angle θs with fixed redshift bin center (z = 1.2).

Fig. 7.

Top: FoM ratio as a function of the prior εP on the MF nuisance parameters for different values of the limiting magnitude magD of the deep survey, when V1 alone is added to the shear tomography with ℓmax = (1500, 3000, 5000) from left to right. Bottom: same as in the top panel, but for V2 alone added to the shear. In each panel, the curves for different magD overlap so closely that they cannot be distinguished.

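FoM ratios of this kind come from adding Fisher matrices and a Gaussian prior of width εP on each nuisance parameter. The sketch below illustrates that procedure under assumed conventions (parameter ordering, the FoM defined from the marginalized dark-energy 2×2 block); it is a schematic illustration, not the paper's actual pipeline.

```python
import numpy as np

def fom(F, i=0, j=1):
    """Figure of merit 1/sqrt(det) of the marginalized covariance
    block for parameters (i, j); placing (w0, wa) at indices
    (0, 1) is an assumed convention."""
    cov = np.linalg.inv(F)
    block = cov[np.ix_([i, j], [i, j])]
    return 1.0 / np.sqrt(np.linalg.det(block))

def fom_ratio(F_shear, F_mf, nuisance_idx, eps_p):
    """Ratio of the combined (shear + MFs + prior) FoM to the
    shear-only FoM, with a Gaussian prior of width eps_p on each
    MF nuisance parameter."""
    F_tot = F_shear + F_mf
    for k in nuisance_idx:
        F_tot[k, k] += 1.0 / eps_p**2   # Gaussian prior adds 1/eps^2
    return fom(F_tot) / fom(F_shear)
```

Since both the MF Fisher matrix and the prior add positive-semidefinite information, the ratio is at least one, and it grows as the prior tightens, matching the qualitative trend in the figure.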
Fig. 8.

Same as Fig. 7, but adding two MFs to the shear tomography: V01, V02, and V12 in the left, center, and right panels. Again, the dependence on magD is hard to appreciate, so that in most panels only one curve is visible; the only exception is the line for magD = 26.0 in the bottom panels. We plot a smaller εP range here to better show the region over which adding MFs to the shear tomography significantly increases the FoM.

Fig. 9.

Same as Fig. 7, but adding all three MFs to the shear tomography with ℓmax = (1500, 3000, 5000) from left to right. Curves for different magD values are again superimposed; the green curves refer to magD = 24.5.

Fig. 10.

Left: FoM ratio as a function of the prior εP on the MF nuisance parameters for the intermediate shear-only scenario and different values of (magD, Atot), using all three MFs; we fix Adeep = 40 deg². Center: same as in the left panel, but with Adeep = 80 deg². Right: same as in the left panel, but with Adeep = 160 deg². In each panel, blue, magenta, purple, and red lines refer to magD = (25.0, 25.5, 26.0, 26.5).

Fig. 11.

Left: deep survey area needed to obtain the same FoM as the reference shear-tomography-only survey, as a function of the prior on the MF nuisance parameters for different limiting magnitudes. Right: same as in the left panel, but considering the total survey area.

Fig. 12.

Error on the FoM ratio ℛ as a function of the relative uncertainty on the MFW Fisher matrix elements, assuming a 10% prior on the MF nuisance parameters and the optimistic shear-only scenario. Blue, purple, and red lines correspond to δFoM(WL)/FoM(WL) = (0.6, 2.2, 6.4)%, respectively.

