A&A, Volume 633, January 2020
Article Number: A71
Number of pages: 15
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/201935988
Published online: 13 January 2020
Going deep with Minkowski functionals of convergence maps
^{1} I.N.A.F. – Osservatorio Astronomico di Roma, Via Frascati 33, 00040 Monte Porzio Catone, Roma, Italy
e-mail: carolina.parroni@inaf.it
^{2} LERMA – Observatoire de Paris, PSL Research University, CNRS, Sorbonne Univ., UPMC Univ. Paris 06, 75014 Paris, France
^{3} University of Paris Denis Diderot, University of Paris Sorbonne Cité (PSC), 75205 Paris Cedex 13, France
^{4} Istituto Nazionale di Fisica Nucleare – Sezione di Roma 1, Piazzale Aldo Moro, 00185 Roma, Italy
^{5} Dipartimento di Fisica, Università di Roma “La Sapienza”, Piazzale Aldo Moro, 00185 Roma, Italy
Received: 30 May 2019
Accepted: 14 November 2019
Aims. Stage IV lensing surveys promise to make an unprecedented amount of excellent data available. This will represent a huge leap in terms of quantity and quality and will open the way for the use of novel tools that surpass the standard second-order statistics for probing the high-order properties of the convergence field. Motivated by these considerations, some of us have started a long-term project aiming at using Minkowski functionals (MFs) as complementary and supplementary probes to increase the lensing figure of merit (FoM).
Methods. As a second step on this path, we discuss the use of MFs for a survey consisting of a wide total area A_{tot} that is imaged at a limiting magnitude mag_{W} and contains a subset of area A_{deep}, where observations are pushed to a deeper limiting magnitude mag_{D}. We present an updated procedure to match the theoretically predicted MFs to the measured MFs, taking into account the effect of map reconstruction from noisy shear data. We validate this updated method against simulated datasets with different source redshift distributions and total number densities, setting these quantities in accordance with the depth of the survey. We then rely on a Fisher matrix analysis to forecast the improvement in the FoM that is due to the joint use of shear tomography and MFs under different assumptions on (A_{tot}, A_{deep}, mag_{D}) and on the prior on the MF nuisance parameters.
Results. We find that MFs can provide valuable help in increasing the FoM of the lensing survey, provided the nuisance parameters are known with non-negligible precision. Even more interesting is the possibility of compensating for the loss of FoM caused by a cut in the multipole range probed by shear tomography, which makes the results more robust against uncertainties in the modeling of nonlinearities. MFs are therefore a promising tool for increasing the FoM that also protects the constraints on the cosmological parameters from theoretical systematic effects.
Key words: gravitational lensing: weak / cosmology: theory / methods: statistical
© ESO 2020
1. Introduction
The concordance Λ cold dark matter (ΛCDM) cosmological model, which is dominated by the dark energy that drives cosmic speed-up and by the dark matter that is responsible for clustering, assumes that the structures we observe today formed through gravitational instability and the successive growth of the primordial fluctuations generated during the inflationary epoch. Although the density field may be approximated as Gaussian on large scales, the non-Gaussianity on small scales carries additional information that can break some degeneracies among model parameters.
Weak lensing (WL) has proven to be an efficient tool for accessing this information. In any metric theory, light propagates along the geodesics of the metric, which are determined by the matter distribution along the line of sight. Lensing therefore probes both the background expansion and the growth of structures, which makes it able to place strong constraints on the dark energy equation of state and to distinguish between general relativity and modified gravity. In the WL regime, lensing distorts the image of the emitting source, but this cosmic shear effect is so small that it can only be detected statistically through the analysis of a large sample of galaxies. Up to now, second-order statistics, namely the two-point correlation function and its Fourier counterpart, the power spectrum, have been used with remarkable results (see, e.g., Munshi et al. 2008; Kilbinger 2015; Bartelmann & Maturi 2017, and references therein).
The unprecedented amount of high-quality data that Stage IV lensing surveys are expected to deliver will make it possible to deepen the analysis of the density field by probing its non-Gaussianity. To this end, statistics of order higher than second are needed. Among the different possibilities, Minkowski functionals (MFs) have already proven their reliability in the context of cosmic microwave background (CMB) studies (Komatsu et al. 2003; Eriksen et al. 2004; Hikage et al. 2008; Matsubara 2010) and WL convergence maps (Matsubara & Jain 2001; Sato et al. 2001; Taruya et al. 2002; Matsubara 2010; Kratochvil et al. 2011; Pratten & Munshi 2012; Petri et al. 2013; Shirasaki & Yoshida 2014). MFs are topological descriptors of the convergence field that depend on the complete set of higher-order terms and multipoint correlation functions. In particular, non-Gaussianity manifests itself in deviations from the predictions for Gaussian random fields. Even when the perturbative expansion is cut to the lowest order, these deviations are related to the convergence bispectrum (the Fourier counterpart of the three-point correlation function) through the generalized skewness parameters. The need to extend the analysis to nonlinear scales, however, calls for a detailed description of the nonlinearities that affect the matter power spectrum and bispectrum. Moreover, in any practical application, the infinite series that determines the MF deviations from the Gaussian case is truncated at the lowest order, thus introducing a mismatch with the observed MFs even in the idealized case of noiseless maps. Needless to say, noise and the imperfect reconstruction of the convergence map from shear data make the theoretical prediction of MFs still more daunting.
Motivated by these considerations, Vicinanza et al. (2019) first presented a calibration procedure to correct the theoretical predictions for noiseless maps so that they match the MFs measured in reconstructed noisy convergence maps. We here propose a modification of their approach that reduces the number of nuisance parameters, starting from simplifying yet reasonable approximations. This offers the possibility of improving the constraints on the cosmological parameters, so that we repeat their Fisher matrix forecast analysis. We also investigate which survey strategy (e.g., wide and shallow, or deep and narrow) is better suited to optimize the scientific return of MFs. Next-generation surveys from the ground, such as the Large Synoptic Survey Telescope (LSST; LSST Science Collaboration 2009), or onboard satellites, such as the ESA Euclid (Laureijs et al. 2011) and the NASA Wide Field Infrared Survey Telescope (WFIRST; Green et al. 2012) missions, will nevertheless rely on second-order statistics. This makes it mandatory for the optimization to be carried out considering the combination of cosmic shear tomography and MFs rather than single probes alone. We therefore consider several realistic combinations of area coverage and survey depth (keeping the survey observation time fixed) to generate simulated lognormal convergence fields. These are taken as input for estimating a reliable MF dataset with the corresponding covariance matrix.
The paper is organized as follows: in Sect. 2 we describe the set of simulated convergence maps that we obtained, starting from the catalog generation for different survey depths to the mapreconstruction method we used. In Sect. 3 we introduce MFs and describe the measurements we performed on the simulated convergence maps, showing the results for some cases of interest. In Sect. 4 we describe MFs from a theoretical point of view, show their connection to cosmology, and present our new calibration procedure. In Sect. 5 we discuss the results we obtained from our Fisher matrix analysis in terms of the figure of merit (FoM) improvement and survey optimization. In Sect. 6 we draw our conclusions.
2. Simulation of convergence maps
In Vicinanza et al. (2019), some of us have developed a calibration procedure to match the theoretical predictions for MFs measured in noiseless convergence fields with those estimated in reconstructed maps from noisy shear data. This was validated using MICEv2.0 simulations, which cover a limited redshift range and model galaxies up to a limiting magnitude (mag_{lim} = 24.5) that is shallower than those we are interested in here. We therefore need to produce a different set of simulated convergence maps to probe the extended redshift range that is probed at deeper magnitude. Moreover, we wish to mimic as closely as possible what is expected for the Euclid satellite mission, which means that we need to input the same source redshift distribution. To this end, we use FLASK and the setting we describe in the following two subsections.
2.1. FLASK simulations
The full-sky lognormal astro-fields simulation kit (FLASK; Xavier et al. 2016) is a public code designed to create two- or three-dimensional random realizations of different astrophysical fields, including the WL convergence and shear, reproducing the expected cross-correlations between the input fields. These realizations follow a multivariate lognormal distribution, which, compared to a multivariate Gaussian distribution, is a better approximation to the density and convergence fields and avoids, for example, nonphysical negative density values. Because in this study we are interested in capturing the non-Gaussian features contained in the convergence field, the lognormal distribution is the simplest approximation that can convey this information.
Its computational speed and flexibility make FLASK strongly preferable to full ray-tracing or full N-body simulations. This is a key aspect because we need to simulate a large field area (to be split into very many patches) and vary the source redshift distribution according to the limiting magnitude. We briefly outline the FLASK inner workings and refer to Xavier et al. (2016) for further details.
The code takes as input angular auto- and cross-power spectra calculated at a number of redshift slices provided by the user. These are transformed into real space to compute the associated Gaussian correlation functions, which are then transformed back to harmonic space. Cholesky decomposition is then used to generate Gaussian multipoles, which are the input for the creation of a HEALPix map whose pixels are exponentiated to obtain the associated lognormal fields. The user can then sample these fields according to the desired angular and radial selection functions, thus mimicking the specifics of the desired survey. A catalog is finally generated by assigning to each pixel a random angular position sampled within the pixel boundaries, and a redshift within its redshift slice.
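The Gaussian-to-lognormal exponentiation step can be sketched as follows. This is a simplified flat-grid illustration (FLASK itself works on HEALPix maps), and the shift parameter is our assumption, not FLASK's internally determined value:

```python
import numpy as np

def lognormal_field(gauss, shift=1.0):
    """Map a zero-mean Gaussian field into a zero-mean lognormal one.

    The exponential guarantees values larger than -shift, so nonphysical
    values below the bound are avoided; 'shift' is illustrative only.
    """
    g = gauss - gauss.mean()
    # Subtract half the variance so that the lognormal field has zero mean.
    return shift * (np.exp(g - 0.5 * g.var()) - 1.0)
```

Exponentiating preserves the hierarchy of peaks while skewing the one-point distribution, which is the non-Gaussian feature the lognormal model is meant to capture.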
A caveat is in order when correlated density and convergence fields are simulated. When the density field is modeled as lognormal, FLASK computes the convergence through an approximate line-of-sight integration obtained as a weighted Riemann sum of the simulated density in redshift bins. As a consequence of the small number of bins in the sum at low redshift, the resulting convergence field is not exactly lognormal. However, the corresponding power spectra reproduce the theoretical spectra within 3% for z > 0.5, while the precision quickly degrades at lower z. We therefore imposed a conservative cut z > 0.55 to select the sources that we included in our analysis.
We used CLASS (Blas et al. 2011; Dio et al. 2013) to compute the input power spectra for 25 top-hat equispaced redshift bins over the range 0.0 ≤ z ≤ 2.5 for a flat ΛCDM model with fiducial cosmological parameters
where Ω_{M} (Ω_{b}) is the present-day value of the total matter (baryons only) density parameter, h is the Hubble constant (in units of 100 km s^{−1} Mpc^{−1}), n_{s} is the slope of the primordial power spectrum, and σ_{8} is the variance of the linear power spectrum smoothed over a top-hat window of size R = 8 h^{−1} Mpc. Nonlinearities at large k are corrected for using the Halofit recipe (Smith et al. 2003; Takahashi et al. 2012). We also modified some keyword settings in FLASK with respect to the defaults, setting LRANGE = 1 − 6000, SHEAR_LMAX = 2000, and NSIDE = 2048. We finally used a customized angular selection function to split the full-sky simulation into a more manageable set of subfields.
2.2. Survey depth and number density
FLASK allows the user to input their own radial selection function so that the total source number density and redshift distribution match those of a given survey. Because we are interested in a Euclid-like survey, we set
with n_{g} the number of galaxies per arcmin^{2} and z_{m} the median redshift. Both n_{g} (in gal arcmin^{−2}) and z_{m} are functions of the limiting magnitude mag_{lim}, with (n_{g}, z_{m}) = (30, 0.9) at mag_{lim} = 24.5 for the wide-area Euclid survey. We therefore need to model their scaling with mag_{lim}, which we did qualitatively as follows. First, we note that Hoekstra et al. (2017) investigated the effect of undetected galaxies on the estimate of the shape measurement bias. To this end, they modeled the dependence of the number of galaxies on mag_{lim} as a power law, which approximates well the number counts from the GEMS survey (Rix et al. 2004) in the F606W band and from the Hubble Ultra-Deep Field (HUDF; Coe et al. 2006) in the F775W band. We integrated this power law up to the desired limiting magnitude to obtain the slope of the n_{g}–mag_{lim} relation, while its amplitude was set so that n_{g}(mag_{lim} = 24.5) = 30, as for Euclid. This rescaling is required because neither the F606W nor the F775W band matches the wide RIZ filter used by the Euclid imaging instrument. A similar rescaling was also used for the z_{m}–mag_{lim} relation, whose behavior we obtained by interpolating the values in Table 9 of Coe et al. (2006), based on HUDF data. The values of (n_{g}, z_{m}) thus obtained for the five limiting magnitudes we considered are given in Table 1, and the corresponding redshift distributions n(z) are shown in Fig. 1. We recall, however, that these calculations are based on the i-band magnitude, while Euclid will provide imaging data in the broad RIZ band, which has not been used before for observations. We therefore do not expect our approximation to exactly reproduce the Euclid redshift distribution, but it nevertheless allows us to illustrate the different results obtained by changing the survey area and depth.
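The power-law rescaling just described can be sketched as follows. The slope α below is an illustrative placeholder, not the value actually fitted to the GEMS/HUDF counts:

```python
import numpy as np

ALPHA = 0.36  # illustrative slope of log10(N) vs magnitude (assumed, not fitted)

def n_g(mag_lim, alpha=ALPHA, mag_ref=24.5, n_ref=30.0):
    """Cumulative source number density (gal/arcmin^2) vs limiting magnitude.

    Integrating a power-law differential count dN/dm ~ 10^(alpha*m) up to
    mag_lim gives another power law; the amplitude is anchored so that
    n_g(24.5) = 30, the Euclid wide-survey value quoted in the text.
    """
    return n_ref * 10.0 ** (alpha * (np.asarray(mag_lim) - mag_ref))
```

The anchoring at the reference magnitude plays the role of the band rescaling described above: the slope comes from the deep-field counts, the amplitude from the Euclid wide-survey requirement.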
Table 1. Limiting magnitude, total source number density (in gal arcmin^{−2}), and median redshift.
Fig. 1. Redshift selection functions used as input for FLASK to simulate different survey depths corresponding to different limiting magnitudes. 

2.3. Map reconstruction
Running FLASK with the input parameters set as detailed above provided catalogs with right ascension, declination, redshift z, convergence κ, and shear components (γ_{1}, γ_{2}) for all the objects. A gnomonic projection was then used to project onto the plane of the sky under a flat-sky approximation, which holds for the 5 × 5 deg^{2} subfields we used. We also left a gap of ∼1° between two consecutive patches so that we can consider them independent realizations. The objects in each catalog were then split into redshift bins of equal width Δz = 0.05, centered on z from 0.5 to 1.8 in steps of 0.3, with the number density set according to the chosen limiting magnitude. We thus obtained 1108 independent convergence and shear maps for each redshift bin and limiting magnitude.
After smoothing the maps to 1′ resolution, we added Gaussian noise to each pixel, fixing the variance as (Hamana et al. 2004)
with σ_{e} = 0.3 the intrinsic ellipticity, A_{pix} the pixel area, and n_{g} the number density of galaxies. Because n_{g} increases with mag_{lim}, deeper maps are less noisy, as expected. In order to mimic an actual data analysis, we reconstructed the convergence maps from the noisy shear data using different methods. After comparing with the simulated convergence maps, we finally opted for a variant of the popular KS method (Kaiser & Squires 1993), modified to account for systematic effects such as projection effects and masking (Pires et al. 2009; Jullo et al. 2013). Figure 2 shows as an example a convergence map at z = 0.9 for mag_{lim} = 24.5. Hereafter, whenever we mention convergence maps, we always refer to the set of reconstructed maps.
Fig. 2. Top: original convergence map at z = 0.9 for a limiting magnitude mag_{lim} = 24.5, simulated with FLASK. Bottom: same map but reconstructed with the KS method. 

3. Minkowski functionals: measurement
We considered a smooth two-dimensional random field κ(x, y) with zero mean and variance σ_{0}^{2}. We first defined the excursion set Q_{ν} as the region where the normalized field κ/σ_{0} is larger than a given threshold ν. We can then define the three MFs as
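The standard integral-geometry definitions, which we believe are the ones intended here, read:

```latex
V_0(\nu) = \frac{1}{A}\int_{Q_\nu} \mathrm{d}a , \qquad
V_1(\nu) = \frac{1}{4A}\oint_{\partial Q_\nu} \mathrm{d}l , \qquad
V_2(\nu) = \frac{1}{2\pi A}\oint_{\partial Q_\nu} \mathcal{K}\, \mathrm{d}l ,
```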
where A is the map area, ∂Q_{ν} the excursion set boundary, da and dl the surface and line elements along ∂Q_{ν}, and 𝒦 its curvature. (V_{0}, V_{1}, V_{2}) are the area, the perimeter, and the genus characteristic (i.e., the number of connected regions above a given ν minus that of connected regions below ν) of the excursion set Q_{ν}. MFs can be redefined in a more convenient way as
where we explicitly considered the case of the convergence field κ(x, y) and expressed the threshold as a multiple of its variance σ_{0}. In Eqs. (6)–(8), κ_{i} = ∂κ/∂x_{i} and κ_{ij} = ∂^{2}κ/∂x_{i}∂x_{j} with (i, j) = (x, y), that is, MFs are computed in terms of the field and its derivatives. With these definitions, it is straightforward to implement an algorithm to measure the MFs from a map. However, numerical issues arising from the conversion of integrals into discrete sums, derivatives into finite differences, and the Dirac δ into a discrete ν binning need to be solved. In order to validate our pipeline, we realized 500 random Gaussian maps that we input to our code for measuring the MFs. We took the mean as the final estimate and the standard deviation as the uncertainty, and obtained the results in Fig. 3, where the solid black line is the theoretical prediction (see below). The measured (V_{0}, V_{1}) deviate from the theoretical expectation at ν = 2, the threshold used for the rest of the analysis, by less than 1%, while the discrepancy is slightly larger (but still at the percent level) for V_{2}. This is related to the way the theoretical value is computed, because it relies on the values of (σ_{0}, σ_{1}), defined below, which are themselves measured on the maps. We therefore do not ascribe this larger difference to a missing ingredient in the theoretical estimate, and consider our measurement pipeline reliable for all MFs.
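A minimal version of such a measurement pipeline, using finite differences and a binned approximation of the Dirac δ (the bin width dnu is our choice), can be sketched as:

```python
import numpy as np

def minkowski_functionals(kappa, nu, dnu=0.1):
    """Estimate (V0, V1, V2) of a 2D field at threshold nu (units of sigma_0).

    Integrals become pixel averages, derivatives finite differences, and the
    Dirac delta a top-hat bin of width dnu, as discussed in the text.
    """
    u = kappa / kappa.std()          # normalized field kappa / sigma_0
    gy, gx = np.gradient(u)          # first derivatives (axis 0 = y, axis 1 = x)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    delta = (np.abs(u - nu) < 0.5 * dnu) / dnu   # binned Dirac delta
    v0 = np.mean(u >= nu)                        # excursion-set area fraction
    v1 = 0.25 * np.mean(delta * np.hypot(gx, gy))
    grad2 = gx**2 + gy**2
    curv = (2.0 * gx * gy * gxy - gx**2 * gyy - gy**2 * gxx) \
        / np.where(grad2 > 0, grad2, 1.0)
    v2 = np.mean(delta * curv) / (2.0 * np.pi)
    return v0, v1, v2
```

On a Gaussian random map this estimator can be checked against the analytical expectations, which is exactly the validation exercise described above.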
Fig. 3. Numerical (blue dots) vs expected (solid black line) MFs from 500 realizations of a Gaussian random field. 

We then measured MFs in the 1108 simulated convergence maps by varying the redshift bin centers z, the limiting magnitude mag_{lim}, and the scale θ_{s} of the Gaussian filter we used to smooth the maps before the MF estimation. In particular, we considered
Figure 4 shows the three MFs as a function of the signal-to-noise ratio (S/N) ν for the illustrative case of a survey with mag_{lim} = 25.5 (other cases are qualitatively similar). For a fixed smoothing scale (left panels), the overall scaling with ν is the same; the redshift value only enters to determine the MF amplitude. In particular, differences in V_{0} are typically quite small, no larger than ∼2%, and they increase to ∼8% (∼20%) for V_{1} (V_{2}). A similar argument holds for the dependence on the smoothing angle at fixed redshift, with differences that can now be easily appreciated, as shown in the right panels. These results suggest that the relevant information is carried not by the behavior of the MFs with ν, but rather by their dependence on the redshift and the smoothing angle. We therefore set ν = 2 in the remaining analysis and refer to the next section for the reason of this particular value.
Fig. 4. Left: MFs as measured from maps with mag_{lim} = 25.5 as a function of the threshold ν for different values of the redshift bin center z and fixed smoothing scale (θ_{s} = 6′). Right: same as in the left panel, but for different values of the smoothing scale θ_{s} and fixed redshift bin center (z = 1.2). 

Our observed MF data vector is
with
where the values were computed as the mean over the 1108 convergence maps realized for each given mag_{lim}. The covariance matrix can then be estimated as
where N_{maps} = 1108 is the total number of convergence maps, D_{i}^{(k)} is the ith component of the data vector calculated on the kth map, and D̄_{i} is the same component averaged over all maps. In Fig. 5 we show as an example the normalized covariance matrix obtained in the case mag_{lim} = 25.5. We note that for V_{0} the correlation increases with the smoothing scale, but seems quite insensitive to the redshift. On the other hand, the correlation for V_{1} and V_{2} is higher for small values of θ_{s} and z. We observe the same pattern for the cross-correlation between V_{1} and V_{2}; moreover, while V_{0} and V_{1} appear correlated for larger θ_{s}, V_{0} and V_{2} are anticorrelated. The strong correlations that we find among some elements of the data vector suggest that the dimension of D can be reduced without appreciable loss of information. We therefore investigated this possibility as well.

We focused on the case ν = 2 to investigate the change in the MFs as a function of the limiting magnitude. This is shown in Fig. 6, where we plot the MFs as a function of z and θ_{s} for five different mag_{lim}. For a fixed smoothing angle, the difference among the MF amplitudes at different mag_{lim} tends to decrease with z, being no larger than ∼9%. The only remarkable exception is the case with mag_{lim} = 24.5, which gives a ∼35% difference in the V_{2} amplitude at large z. This is, however, a consequence of the small number of galaxies in the high-z bins for a survey as shallow as this one: the map reconstruction becomes noisier and less reliable, which needs to be corrected for, as we demonstrate in the next section. The right panels show that the dependence on mag_{lim} becomes less and less important as the smoothing angle θ_{s} increases. This is expected because the larger θ_{s}, the more Gaussian the convergence field, so that the MFs depend on ν alone.
As a consequence, we obtain the unfortunate result that the scales with the most information are those at small θ_{s}, which are at the same time the noisiest. The next section discusses how the effect of noise can be mitigated through a suitable calibration procedure.
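The covariance estimate described above amounts to the standard sample covariance over the map realizations. A sketch follows, with the unbiased 1/(N−1) prefactor (our choice of convention):

```python
import numpy as np

def covariance_from_maps(data_vectors):
    """Sample covariance of MF data vectors measured on N_maps realizations.

    data_vectors has shape (N_maps, N_dim); the normalized (correlation)
    matrix is obtained by dividing out the diagonal, as plotted in Fig. 5.
    """
    D = np.asarray(data_vectors, dtype=float)
    diff = D - D.mean(axis=0)
    cov = diff.T @ diff / (D.shape[0] - 1)
    sig = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sig, sig)
    return cov, corr
```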
Fig. 5. Normalized covariance matrix for the MF data vector D defined in the text for the case with mag_{lim} = 25.5. 

Fig. 6. Left: MFs for ν = 2 and different values of the limiting magnitude mag_{lim} as a function of the redshift bin center z, with fixed smoothing scale (θ_{s} = 6′). Right: same as in the left panel, but as a function of the smoothing angle θ_{s} with fixed redshift bin center (z = 1.2). 

4. Minkowski functionals: theoretical predictions
In order for an observable to be of any use in constraining cosmological parameters, it is mandatory to be able to compute its expected value theoretically. For MFs this is analytically possible only in the case of Gaussian random fields, while deviations from Gaussianity (such as those of the convergence field) can be dealt with in an approximate way through a perturbative series expansion. This method, however, does not take into account the systematic effects introduced by imperfect map reconstruction from noisy shear data. In Vicinanza et al. (2019), some of us have successfully accounted for this through a semianalytical approach calibrated on simulations. Below, we first summarize the main steps and results, and then present a simplified but still reliable way to reduce the number of nuisance parameters.
4.1. Minkowski functionals for noiseless convergence fields
For a Gaussian random field, MFs can be exactly computed as (Adler 1981; Tomita 1986)
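For reference, the expressions we believe are intended here are the standard ones for a two-dimensional Gaussian field, written compactly for k = 0, 1, 2:

```latex
V_k(\nu) = \frac{1}{(2\pi)^{(k+1)/2}}\,
           \frac{\omega_2}{\omega_{2-k}\,\omega_k}
           \left(\frac{\sigma_1}{\sqrt{2}\,\sigma_0}\right)^{k}
           \mathrm{e}^{-\nu^2/2}\,\mathcal{H}_{k-1}(\nu) ,
\qquad
\mathcal{H}_{-1}(\nu) \equiv \sqrt{\frac{\pi}{2}}\,
           \mathrm{e}^{\nu^2/2}\,
           \mathrm{erfc}\!\left(\frac{\nu}{\sqrt{2}}\right) ,
```

so that, in particular, V_0(ν) = (1/2) erfc(ν/√2).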
with ω_{n} = π^{n/2} [Γ(n/2 + 1)]^{−1}, so that ω_{0} = 1, ω_{1} = 2, and ω_{2} = π. Here, we assumed that the field has null mean, variance σ_{0}^{2}, and variance of its covariant derivative σ_{1}^{2}; ℋ_{n}(ν) are Hermite polynomials.
When the field is only mildly nonGaussian, a perturbative expansion can be used,
The deviation from the Gaussian prediction can be expanded in powers of σ_{0}, with σ_{0}^{2} = ⟨κ^{2}⟩. To the lowest order in σ_{0}, the coefficient of the correction term reads
where S^{(n)} are generalized skewness quantities defined from the convergence field and its derivatives
The variance terms σ_{n} and the generalized skewness parameters S^{(n)} can be expressed in terms of the polyspectra of the field. For the variances, it is (Munshi et al. 2011)
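In the flat-sky limit these take the standard form (our reconstruction of the elided equation, consistent with the quantities defined just below):

```latex
\sigma_n^2 = \frac{1}{2\pi}\int_0^{\infty} \ell^{\,2n+1}\,
             \mathcal{C}(\ell)\,\mathcal{W}^2(\ell)\,\mathrm{d}\ell ,
```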
where 𝒞(ℓ) is the lensing convergence power spectrum for sources at redshift z_{s}, and 𝒲(ℓ) is the Fourier transform of the smoothing filter. The cosmological information is contained in 𝒞(ℓ), which is given by
with
where E(z) = H(z)/H_{0} is the dimensionless Hubble function, χ(z) is the comoving distance, r(z) is the comoving angular diameter distance, and P_{NL}(k, z) is the nonlinear matter power spectrum evaluated at k = ℓ/χ(z) because of the Limber approximation. We assume a spatially flat universe from now on. We used a Gaussian filter to smooth the map,
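Assuming the flat-sky convention, the filter and its Fourier transform read (our reconstruction of the elided equation):

```latex
W(\theta) = \frac{1}{2\pi\sigma_{\rm s}^2}\,
            \exp\!\left(-\frac{\theta^2}{2\sigma_{\rm s}^2}\right)
\quad\Longrightarrow\quad
\mathcal{W}(\ell) = \exp\!\left(-\frac{\ell^2\sigma_{\rm s}^2}{2}\right) ,
```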
with σ_{s} the smoothing length. Generalized skewness quantities (that are connected with third-order moments) can be expressed as
where, adopting a compact notation, we obtain
with
and
where cp denotes cyclic permutation. In Eq. (23), the cosmological information is coded into the convergence bispectrum,
with B_{NL}(k_{1}, k_{2}, k_{3}, z), the matter bispectrum, evaluated at k_{i} = ℓ_{i}/χ(z) because of the Limber approximation. The contribution of each multipole to the sum in Eq. (23) is weighted by
with
and
The Wigner 3j symbols account for the fact that only triangular configurations (i.e., ℓ_{1} + ℓ_{2} + ℓ_{3} = 0) contribute to the sum.
4.2. Observable Minkowski functionals
As described above, Eqs. (12)–(14) refer to the case of a noiseless convergence field. However, there are different reasons why they cannot be straightforwardly used to predict the MFs measured on actual convergence maps. First, κ is not a directly observed quantity, but is rather reconstructed from the shear data, so that multiplicative and additive biases are present. Second, the field is also shifted from its true value because of the noise. In Vicinanza et al. (2019), we addressed this problem by postulating that, at the lowest order, these effects can be described as
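Given the definitions that follow, the postulated relation (Eq. (29)) should read, up to notation:

```latex
\kappa_{\rm obs} = (1 + m_\kappa)\,\kappa + N ,
```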
with m_{κ} the multiplicative bias and N the zero-mean noise. Starting from Eq. (29) and assuming that the noise is not correlated with the signal, we can propagate the effect of noise and bias on the variance of the field and its derivatives, and on the generalized skewness parameters. We finally obtain the following expressions for the observable MFs:
where, to shorten the notation, we introduced the variance ratios ℛ_{i} = σ_{iN}/σ_{i}, and defined the tilde skewness parameters as
with σ_{21} = σ_{2}/σ_{1}, and the label N denoting noise-related quantities. Equations (30)–(35) enable estimating the MFs of the observed convergence field in terms of the variances σ_{n} and generalized skewness parameters S^{(n)} (with n = 0, 1, 2) of both the actual convergence field and the noise.
4.3. Validation and calibration
Equations (30)–(32) have been obtained under some assumptions, which, although reasonable, are nevertheless only approximations. It is therefore mandatory to validate them by fitting to the MFs measured in the simulated dataset. This test will also give us the fiducial values of the nuisance parameters entering them, which is why we refer to the full process as “calibration”. Vicinanza et al. (2019) successfully calibrated these relations against the MICEv2 catalog data. To this end, they modeled the functions ℛ_{n} and the tilde skewness parameters as power laws of the redshift and smoothing scale, summing up to a total of 13 nuisance parameters p_{nuis}.
We reconsidered this problem here and studied the definition of these quantities to determine a way to reduce the dimensionality of p_{nuis} without significantly degrading the overall fit quality. To this end, we first considered the variance ratios given by
where the label ref denotes a quantity evaluated at some arbitrarily chosen reference values of (θ_{s}, z), which we fixed as (2′, 0.3). Based on their own definition, the two terms in square brackets have a predictable scaling when the cosmological model is given. As a consequence, we can reduce the number of nuisance parameters to only three: the values of the ratios at the reference point. This is different from Vicinanza et al. (2019), where we instead modeled the dependence on θ_{s} as a power law, thus adding three more parameters to fix the slopes.
We now consider the tilde skewness quantities, starting from the case n = 0, which we can conveniently rearrange as follows:
where we used Eqs. (22)–(24) and defined
with
where ℬ_{N}(ℓ, ℓ_{1}, ℓ_{2}) is the noise bispectrum. It is worth noting that the numerator and denominator depend on the smoothing scale θ_{s} only through the same weight function 𝒲(ℓ, ℓ_{1}, ℓ_{2}), so that we can argue that their ratio is weakly dependent on it. As a working assumption, we therefore considered β_{0} to be independent of θ_{s} and redefined it as
We scaled β_{0} in this way to obtain a dimensionless reference value, but this choice is arbitrary. As a consequence, there is no reason to expect it to be a small number.
Equation (37) finally reads
Proceeding in a similar way, we also obtain
so that all the quantities entering it now have a predictable dependence on (θ_{s}, z), and three additional nuisance parameters remain.
We finally have the following nuisance parameters:
which is definitely fewer than in Vicinanza et al. (2019): p_{nuis} is now a 7- rather than 13-dimensional vector. This reduction was made possible by fixing the way in which the variance ratios ℛ_{i} and the noise-skewness-related quantities scale with (θ_{s}, z). We therefore need to validate this calibration approach by fitting to the mock datasets that we constructed from the simulated convergence maps at different depths.
We used a straightforward fitting procedure, that is, we minimized a pseudo-χ^{2} merit function defined as
with D_{obs} and D_{th} the observed and theoretically predicted MF dataset, and Cov^{obs} the corresponding covariance matrix as determined from the simulated maps.
It is worth stressing that the vector of nuisance parameters p_{nuis} changes according to which dataset is considered. For instance, as shown by Eq. (30), if V_{0} alone were included, p_{nuis} would reduce to the corresponding subset. We therefore repeated the fit for each MF combination, with the consequence that the same nuisance parameter can have different fiducial values depending on which dataset is considered. Similarly, the errors on the parameters will differ, which affects the estimate of the systematics covariance matrix we discuss below.
A cautionary remark is in order here. Compared to Paper I, we changed the calibration method in two major aspects. First, we made a single joint fit to the full dataset instead of first fitting to V_{0} alone and then to (V_{1}, V_{2}). Second, we now include the full covariance matrix Cov^{obs} in the χ^{2} function, hence taking care of the correlation among the components of the dataset. In Paper I, we only considered the diagonal elements because this procedure allowed us to better minimize the scatter of the residuals of single MFs. This is no longer the case with the revised calibration formulae we have introduced here, so that we prefer to adopt the present more statistically correct approach.
We performed the calibration for the different mock datasets by varying the limiting magnitude and the dataset used (i.e., whether we include only one MF or a combination of them). Compared to Vicinanza et al. (2019), the performance of the calibration is similar, although we note a small increase of when V_{0} is not used in combination with other MFs. A straightforward comparison is, however, not possible because of the radically different fitting procedure, the larger redshift range (up to 1.8 instead of 1.4), and a different source redshift distribution. We also note that a marked decrease in could be obtained if the lowest redshift bin, which lies at the edge of the redshift range recommended by the FLASK authors, were cut. Cutting MFs at z = 0.6 would, however, reduce the number of observables, thus decreasing the overall constraining power of this probe. Future lensing surveys, in contrast, will not be affected by this problem, so that MFs at this low z will also be usable. We therefore preferred to retain these terms in the data vector at the cost of an increase in the rms of best-fit residuals with respect to what will likely be available when fully realistic mock data are used for calibration. We therefore expect our results to err on the side of conservativeness.
The Markov chain Monte Carlo (MCMC) method we used to explore the nuisance parameter space allows us to sample the joint posterior that we then used to propagate the errors on p_{nuis} on the MFs. We thus obtained a covariance matrix, which represents the uncertainties we would have on the MFs even if they had been measured with infinite precision. In a sense, this is the uncertainty coming from our imperfect theoretical modeling of the MFs and the lack of knowledge of the exact nuisance parameters. In other words, this is what we refer to as the systematics covariance matrix, which we denote as Cov^{sys}. We stress that Cov^{sys} depends on the fitting so that it is different for each MFs dataset.
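The construction of Cov^{sys} from the chain can be sketched as follows; the two-parameter linear toy model and its numbers are illustrative assumptions, not the actual MF prediction:

```python
import numpy as np

def systematics_covariance(chain, mf_model):
    """Cov^sys: sample covariance of the theoretical MF data vector
    evaluated over the posterior samples of the nuisance parameters."""
    preds = np.array([mf_model(p) for p in chain])  # (n_samples, n_data)
    return np.cov(preds, rowvar=False)

# toy posterior: 2 nuisance parameters, MF prediction linear in them
rng = np.random.default_rng(1)
cov_p = 0.01 * np.eye(2)
chain = rng.multivariate_normal([1.0, 0.5], cov_p, size=5000)
A = rng.normal(size=(10, 2))             # hypothetical Jacobian of the MF model
cov_sys = systematics_covariance(chain, lambda p: A @ p)
# for a linear model, Cov^sys converges to A Cov(p_nuis) A^T
```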
5. Fisher matrix forecasts
Equations (30)–(32) allow us to match theoretical and measured MFs, correcting for the overall effect of missing higher order terms, imperfect reconstruction from the shear field, and noise in the ellipticity data. As input, the cosmological model parameters
and the nuisance parameters
need to be specified, where we have changed to logarithmic units for because this quantity may change over the range of an order of magnitude. Fitting simulated datasets that mimic the actual data as closely as possible can help constrain p_{nuis}, but it is safer to leave these parameters free to vary to account for possible missing ingredients in the simulations. As a consequence, we do not expect MFs alone to be able to constrain the full set of parameters, so that in the following we always consider the joint use of MFs and standard cosmic shear tomography, using the Fisher matrix formalism (Tegmark et al. 1997) to make forecasts.
This analysis was previously presented in Vicinanza et al. (2019) for a survey mimicking the redshift distribution of the MICEv2 catalog and using a larger number of nuisance parameters. We address here a complementary issue. Planned future surveys will typically cover a wide area to a relatively shallow limiting magnitude, and a narrow region to a deeper limiting magnitude. We therefore investigated how the survey performance improves when shear tomography and MFs measured on the wide area are combined with MFs from the deeper region. When independence of the probes is assumed, the total Fisher matrix reads
where F_{WL} is the Fisher matrix for shear tomography on the full survey area, F_{MFW} and F_{MFD} are those for MFs from the wide and shallow and the deep and narrow survey regions, respectively, and P is the prior matrix. We placed priors on the nuisance parameters only, so that P is a diagonal matrix with null values for the rows corresponding to cosmological parameters, and for the rows referring to the nuisance parameters. Varying ε_{P} allows us to investigate the accuracy to which the nuisance parameters should be known in order to improve the constraints on the cosmological parameters by a given factor. We quantified this by considering the FoM alone because this is the quantity of interest to distinguish among competing dark energy models.
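A minimal numerical sketch of Eq. (44) and the resulting FoM, with purely illustrative Fisher matrices for a three-parameter space (w_{0}, w_{a}, and one MF nuisance parameter); the FoM is taken as the inverse square root of the determinant of the marginalized (w_{0}, w_{a}) covariance:

```python
import numpy as np

def fom(F, idx=(0, 1)):
    """Dark-energy FoM: 1 / sqrt(det of the marginalized (w0, wa) covariance)."""
    cov = np.linalg.inv(F)
    sub = cov[np.ix_(idx, idx)]
    return 1.0 / np.sqrt(np.linalg.det(sub))

def total_fisher(F_wl, F_mfw, F_mfd, sigma_prior):
    """Eq. (44): sum of independent Fisher matrices plus a diagonal prior P.
    An infinite sigma leaves that parameter unconstrained (null prior row)."""
    P = np.diag([0.0 if np.isinf(s) else 1.0 / s**2 for s in sigma_prior])
    return F_wl + F_mfw + F_mfd + P

F_wl = np.array([[50.0, 5.0], [5.0, 10.0]])  # hypothetical shear-only (w0, wa) block
F_wl3 = np.zeros((3, 3)); F_wl3[:2, :2] = F_wl
F_mfw = np.diag([5.0, 2.0, 1.0])             # MFs, wide field (illustrative)
F_mfd = np.diag([1.0, 0.5, 0.2])             # MFs, deep field (illustrative)
F_tot = total_fisher(F_wl3, F_mfw, F_mfd, [np.inf, np.inf, 0.1])
ratio = fom(F_tot) / fom(F_wl)               # FoM(gamma + V_n) / FoM(gamma)
```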
A caveat is in order about Eq. (44). By summing the Fisher matrices from the different probes, we implicitly assumed that the three probes are not correlated. We therefore decided to evaluate the MFs in the wide and deep area separately so that they did not share any data and were therefore independent. In contrast, the shear tomography was evaluated over the full survey area so that the same data were used for tomography and MFs. It is worth noting, however, that the two probes are radically different, with shear tomography probing the local properties of the shear field, and MFs the topological property of the full convergence map. Moreover, they are affected by different systematic effects and retrieved from different measurement pipelines so that it might be argued that possible correlations (if any) are washed out by the estimate procedure. We therefore rely on Eq. (44) and caution that a clear demonstration of its validity is a still pending issue.
We refer to Vicinanza et al. (2019) for the full set of formulae to compute the MFs Fisher matrix, but we stress here two remarkable differences concerning the inverse covariance matrix. This is still estimated as (Hartlap et al. 2007)
with the multiplicative factor that corrects for the finite number of realizations N_{f} used to estimate the covariance of the N_{d}-dimensional data vector. In Vicinanza et al. (2019), we set N_{f} = A/25, with A = 3500 deg^{2} the total area cut from the MICEv2 simulated field. This limited the cases we could consider because the multiplicative term is required to be positive. Here, however, through the use of FLASK, we simulated a full-sky survey that, after cuts to obtain well-separated patches, provided N_{f} = 1108 subfields. This order-of-magnitude increase in N_{f} and the smaller number of nuisance parameters (hence the smaller N_{d}) make the multiplicative factor close to unity for all the cases of interest.
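The correction can be sketched as follows; the 50-element data vector is a hypothetical choice used only to illustrate that the factor is close to unity for N_{f} = 1108:

```python
import numpy as np

def hartlap_inverse_cov(cov_hat, n_f, n_d):
    """Unbiased precision matrix (Hartlap et al. 2007): the naive inverse
    covariance multiplied by (N_f - N_d - 2) / (N_f - 1)."""
    factor = (n_f - n_d - 2) / (n_f - 1)
    if factor <= 0:
        raise ValueError("need N_f > N_d + 2 realizations")
    return factor * np.linalg.inv(cov_hat)

# with N_f = 1108 FLASK subfields and, e.g., a 50-element data vector
# (hypothetical N_d), the correction is indeed close to unity
factor = (1108 - 50 - 2) / (1108 - 1)
inv_cov = hartlap_inverse_cov(np.eye(50), 1108, 50)
```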
Because the systematics covariance matrix was computed by propagating the errors on the nuisance parameters, setting a prior on p_{nuis} also affects Cov^{sys}. In order to speed up the estimate, we first computed MCMC samples for each MFs dataset without any prior on p_{nuis}. When a prior was added, we performed importance sampling on the chains according to suitably defined Gaussian weights, thus recomputing the systematics covariance matrix entering Eq. (45). The stronger the prior, the smaller the contribution of Cov^{sys}. However, care must be taken to avoid the unrealistic case of Cov^{sys} reducing to the null matrix. This is not possible because of the approximate nature of our calibration formulae. We verified, however, that as long as the prior is no smaller than ∼5%, Cov^{sys} remains larger than Cov^{obs}, which is what we expect given the large survey area we considered.
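A minimal sketch of this reweighting step, using a one-dimensional toy chain (the Gaussian widths are illustrative assumptions):

```python
import numpy as np

def reweight_chain(chain, prior_mean, prior_sigma):
    """Importance weights that add a Gaussian prior to a prior-free chain:
    w_i proportional to N(p_i | prior_mean, prior_sigma)."""
    z = (chain - prior_mean) / prior_sigma
    logw = -0.5 * np.sum(z**2, axis=1)
    w = np.exp(logw - logw.max())        # subtract the max for stability
    return w / w.sum()

def weighted_var(x, w):
    """Weighted variance with the standard unbiasedness correction."""
    mu = np.sum(w * x)
    return np.sum(w * (x - mu)**2) / (1.0 - np.sum(w**2))

rng = np.random.default_rng(2)
chain = rng.normal(0.0, 1.0, size=(20000, 1))  # prior-free posterior, sigma = 1
w = reweight_chain(chain, 0.0, 0.5)            # add a sigma = 0.5 Gaussian prior
var = weighted_var(chain[:, 0], w)
# product of N(0,1) and N(0,0.5^2) has variance 1/(1 + 1/0.25) = 0.2
```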
5.1. Improving the figure of merit
Adding MFs to shear tomography increases the number of observables, but also the number of nuisance parameters. Qualitatively, it might be argued that the larger the number of probes, the stronger the constraints. On the other hand, the larger the number of parameters, the weaker the constraints. Moreover, as shown in Fig. 5, there are strong correlations among MFs of different order at the same (θ_{s}, z), so that it is worth asking whether the use of a single MF is enough to improve the overall FoM. As a first test, we therefore investigated the ratio FoM(γ + V_{n})/FoM(γ) between the FoM from shear + MFs and from shear only as a function of the prior on the MF nuisance parameters. Hereafter, we also considered three different shear-only forecasts, which differ in the maximum multipole used in the forecasts. In particular, we set ℓ_{max} = (1500, 3000, 5000) for the pessimistic, intermediate, and optimistic scenarios.
We considered a 15 000 deg^{2} survey with a limiting magnitude mag_{W} = 24.5, which includes a 40 deg^{2} region that is observed at a deeper limiting magnitude mag_{D}. As a general result, we find that adding V_{0} alone to the shear does not improve the FoM at all, regardless of which prior is set on the nuisance parameters and of the limiting magnitude of the deep field. The improvement is less than 10^{−4}, so that we do not show the scaling of the FoM ratio with respect to ε_{P}. This is somewhat expected because V_{0} only depends on the variance σ_{0} of the convergence field. Because σ_{0} is a second-order quantity, it is not expected to add further information with respect to the quantity that is already probed by the more detailed second-order statistics represented by the shear tomography.
This is not the case for the higher order MFs (V_{1}, V_{2}), which probe the non-Gaussianity of the convergence field. Because the number of nuisance parameters increases by only two from V_{1} to V_{2}, it is expected that higher order probes have a greater effect on the FoM. This is indeed what the comparison of the top and bottom panels in Fig. 7 shows. It might naively be thought that the FoM may be boosted strongly by the addition of a single MF, either V_{1} or V_{2}, given that the curves in the central and right panels reach ∼20 − 30%. Unfortunately, these values are obtained only when ε_{P} ∼ 5%, while there is a steep decline in the range (5, 15)% followed by a shallow convergence towards a unit FoM ratio. Considering that when no prior is used, the nuisance parameters are determined by the calibration procedure with roughly 60% error, it is the part of the curve with ε_{P} > 10 − 20% that should be examined. In this regime, the FoM ratio is hardly improved by more than ∼5%. In particular, there is no appreciable dependence on the mag_{D} value, likely as a consequence of the small contribution of the F_{MFD} term to the sum in Eq. (44) when a single MF is used.
Fig. 7. Top: FoM ratio as a function of the prior ε_{P} on the MFs nuisance parameters for different values of the limiting magnitude mag_{D} of the deep survey when V_{1} alone is added to the shear tomography with ℓ_{max} = (1500, 3000, 5000) from left to right. Bottom: same as in the top panel, but for V_{2} alone added to the shear. In each panel, the curves for different mag_{D} are superimposed so closely that they cannot be distinguished.
We now discuss Fig. 8, which shows the improvement in the FoM when two MFs are added to the shear tomography. We again find that the FoM ratio can reach surprisingly high values for ε_{P} < 10%. However, a significant improvement can still be obtained even for more realistic values of the priors, with the FoM ratio being higher than 1.05 for ε_{P} values as high as ∼40% for V_{12}. It is interesting to note that the trend of the FoM ratio with ε_{P} is roughly the same regardless of which ℓ_{max} value is used for the estimate of the FoM from shear tomography only, while only the scale of the y-axis in the different panels changes. This suggests that the use of MFs can be tailored as a way to partially compensate for a cut on ℓ_{max}. This shortening of the ℓ range can be of interest because the larger ℓ, the more shear tomography is pushed into the highly uncertain nonlinear regime, so that using a smaller ℓ_{max} is a safer option to avoid theoretical errors due to inaccurate nonlinearity modeling. For instance, it is
Fig. 8. Same as Fig. 7, but adding two MFs to shear tomography: V_{01}, V_{02}, and V_{12} in the left, center, and right panels. Again, the dependence on mag_{D} is hard to appreciate, so that in most of the panels it is impossible to see more than one curve. The only exception is the line referring to mag_{D} = 26.0 in the bottom panels. We plot here a smaller ε_{P} range to better show the behavior over the range for which adding MFs to the shear tomography indeed helps increase the FoM by a significant amount.
but we find that
while a ∼20% prior is enough to halve the FoM decrement due to going from ℓ_{max} = 5000 to the safer ℓ_{max} = 3000. Understanding which goal (i.e., better modeling nonlinearities versus improving MFs nuisance parameters priors) is easier to reach is a matter of open investigation.
Although the contribution of MFs is now more appreciable, we still find no significant dependence of the results on mag_{D}. The curves in Fig. 8 are still superimposed so that they cannot be distinguished. The only discrepant case corresponds to mag_{D} = 26.0 in the V_{1} + V_{2} configuration. This could be related to some peculiarity in the calibration for this particular combination, or it might be an artifact of the importance sampling in the small-ε_{P} regime. We are unable to determine which hypothesis is correct, but we note that the difference only arises in an unrealistic prior regime, so we neglected it.
This is no longer the case in Fig. 9, where we now use all three MFs for a joint analysis with shear tomography (for the three different ℓ_{max} values). The only discrepant case is mag_{D} = 24.5 (green curve), which actually refers to a configuration without a deep area at all because the wide and the deep field have the same limiting magnitude. We again find that MFs can compensate for the FoM decrement due to the use of lower ℓ_{max}. We can also slightly relax the prior that is required to obtain the same FoM as the optimistic shearonly scenario because we find
Fig. 9. Same as Fig. 7, but adding all three MFs to shear tomography with ℓ_{max} = (1500, 3000, 5000) from left to right. Curves for different mag_{D} values are again superimposed, with the only difference that the green curves refer to mag_{D} = 24.5.
Still more interestingly, for ε_{P} = 15%, we obtain a combined FoM that is only ≃6% lower than the optimistic shearonly FoM, thus partially compensating for the 19% decrement we observed due to cutting at ℓ_{max} = 3000 rather than ℓ_{max} = 5000.
As a general result, we found almost no dependence of the FoM ratio on the limiting magnitude mag_{D} in the deep region. This is somewhat surprising because it might suggest that there is no motivation for going deeper in magnitude. This is, however, related to the approach taken in this subsection, where we combined a wide survey with a deeper one that covers an area three orders of magnitude smaller. The advantage of going deeper becomes more evident with the complementary approach explored in the next subsection.
5.2. Optimizing an ideal survey
In the previous subsection, we considered the case of a survey with total area A_{tot} = 15 000 deg^{2}, which includes a smaller portion A_{deep} = 40 deg^{2} imaged at a deeper magnitude mag_{D}. This choice matches the survey setup of the Euclid mission. We here reverse the point of view and investigate how the shear + MFs FoM changes as a function of (A_{tot}, A_{deep}). We held the total survey duration fixed, so that increasing A_{deep} is possible only at the cost of reducing the total survey area by a factor that depends on the chosen limiting magnitude mag_{D} (keeping mag_{W} = 24.5 fixed for the wide area). It is important to stress that we now scaled the results with respect to a reference FoM value, namely the shear-tomography-only FoM with ℓ_{max} = 3000 and A_{tot} = 15 000 deg^{2}. We only considered the case where all three MFs are used because this case provides the largest increase in the FoM.
This setup gives us the curves in Fig. 10, where we show the FoM ratio as a function of the prior on MF nuisance parameters for mag_{D} from 25.0 to 26.5 in steps of 0.5 and three different A_{deep} values. The curves in Fig. 10 appear jagged because they were obtained by interpolating over a grid in the (ε_{P}, A_{deep}) space. In order to save time, we did not use a grid fine enough to completely remove the numerical noise. As a first remarkable result, we note that the FoM ratio may also be lower than unity, that is, adding MFs reduces the overall FoM instead of increasing it. As counterintuitive as it may appear, this result is easily explained when we recall that we changed both A_{deep} and A_{tot}. Because the shear-only FoM scales linearly with A_{tot}, going deeper over a large area can decrease the F_{WL} term in Eq. (44) by an amount that is not compensated for by the increase of the F_{MFD} term. In these cases, the total FoM is smaller, which favors a wide and shallow rather than a deep and narrow survey. This is also confirmed by the fact that for a fixed ε_{P}, the FoM ratio typically decreases with mag_{D}, regardless of which A_{deep} value is adopted.
Fig. 10. Left: FoM ratio as a function of the prior ε_{P} on the MFs nuisance parameters for the intermediate shearonly scenario for different values of (mag_{D}, A_{tot}), using all three MFs. We fix A_{deep} = 40 deg^{2}. Center: same as in the left panel, but with A_{deep} = 80 deg^{2}. Right: same as in the left panel, but with A_{deep} = 160 deg^{2}. In each panel, blue, magenta, purple, and red lines refer to mag_{D} = (25.0, 25.5, 26.0, 26.5). 
It is worth noting, however, that increasing the region that is imaged at a deeper magnitude can be desirable for other motivations that are indirectly related to the FoM validation (e.g., a better control of systematic errors, thus making the forecasts more reliable) or to other aspects of the survey (such as the legacy outcome). It is therefore of interest to investigate whether it is possible to change (A_{tot}, A_{deep}) without affecting the total FoM. We therefore solved
with respect to A_{deep} for given ε_{P}, fixing the total survey area in such a way that the survey duration remained unchanged. The results are shown in Fig. 11 for different values of mag_{D}.
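This root-finding step can be illustrated with a toy sketch; the linear time-cost model, the imaging rates, and the FoM model below are entirely hypothetical and only mimic the qualitative trade-off described in the text (shear FoM scaling linearly with A_{tot}, deep-field MFs adding a contribution that grows with A_{deep}):

```python
import numpy as np
from scipy.optimize import brentq

T_TOTAL = 1.0     # fixed survey duration (arbitrary units)
RATE = 15000.0    # deg^2 imaged per unit time at mag_W (hypothetical rate)
SLOWDOWN = 10.0   # extra time units per deg^2 re-imaged at the deeper mag_D

def a_tot(a_deep):
    """Wide area left after spending part of the fixed duration on the deep field."""
    return RATE * T_TOTAL - SLOWDOWN * a_deep

def fom_joint(a_deep, fom_shear0=90.0, gain=0.1):
    """Toy FoM: the shear part scales linearly with A_tot (as stated in the
    text), the deep-field MFs add a contribution growing with A_deep."""
    return fom_shear0 * a_tot(a_deep) / 15000.0 + gain * a_deep

# deep area at which the joint FoM recovers the reference value of 100
a_star = brentq(lambda a: fom_joint(a) - 100.0, 0.0, 1000.0)
```

In this toy model the balance point is a_star = 250 deg^{2}, with the total area correspondingly reduced; the actual curves of Fig. 11 come from the full Fisher computation, not from this illustrative parametrization.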
Fig. 11. Left: deep survey area needed to obtain the same FoM as for the reference shear-tomography-only survey as a function of the prior on the MFs nuisance parameters for different limiting magnitudes. Right: same as in the left panel, but considering the total survey area.
The curves in this plot may be used to optimize an ideal survey by changing the areas of the deep and wide regions and holding the total duration fixed. The answer depends on how well the MFs nuisance parameters are known. For instance, a 20% prior on p_{nuis} allows us to obtain the same reference FoM either for a survey with total area A_{tot} = 13 466 deg^{2} with A_{deep} = 437 deg^{2} at mag_{D} = 25.0, or by reducing A_{deep} to 36 deg^{2} and A_{tot} to 13 544 deg^{2}, but with a significantly deeper magnitude mag_{D} = 26.5 (which may dramatically increase the legacy products).
Alternatively, Fig. 11 may be used to set requirements on ε_{P} that should be fulfilled when the deep area at a given mag_{D} is to be increased. For instance, suppose we were to double the Euclid deep area, that is, we set A_{deep} = 80 deg^{2}. In order to preserve the survey time duration, the total area should be reduced to
while the prior ε_{P} must be
for mag_{D} = {25.0, 25.5, 26.0, 26.5} in order to preserve the same shear tomography + MFs FoM. Overall, Fig. 11 shows that while it is indeed possible to reduce A_{tot} to enlarge A_{deep}, the price to pay can be quite high: A_{tot} quickly returns to its reference value as ε_{P} increases. A detailed analysis of the accuracy to which the MFs nuisance parameters may be constrained based on simulations is therefore mandatory, but this is beyond the scope of our paper.
5.3. Uncertainty on the FoM ratio
The results presented in the two previous subsections implicitly assume that the FoM is computed without errors, so that the ratio between the FoMs with and without the use of MFs can be reliably used to compare different setups. This assumption is motivated by the consideration that the FoM is estimated from the Fisher matrix, which is a theoretical quantity, so that provided all the input ingredients are correct, it is known with infinite accuracy. However, the question remains of what happens if, for some unspecified reason, the Fisher matrix elements F_{αβ} are incorrectly estimated. Of course, F_{αβ} can be radically different if one changes the assumed cosmological model or radically changes the observational setup. However, these deviations should not be considered uncertainties to be propagated on F_{αβ}; rather, the Fisher matrix would then refer to a different experiment and/or model, so that it must deviate from the reference case. We considered as uncertainties either numerical errors or small discrepancies between the assumed survey setup and the final actual setup. We therefore studied how these uncertainties propagate to the FoM ratio we have considered so far.
To this end, we assumed that a given Fisher matrix element F_{αα} has been estimated with an accuracy δF_{αα}. It has been shown (Euclid Collaboration 2019) that the FoM is then known with an accuracy given by
with (α, β) setting the column and row corresponding to the DE parameters (w_{0}, w_{a}). In our case, the total Fisher matrix is the sum of three terms,
which refer, respectively, to cosmic shear, MF on the wide area, MF on the deep area. Considering the three probes as independent, a naive propagation of errors gives
with . We can then use the other naive relation
to obtain an expression for ε_{αβ} = δF_{αβ}/F_{αβ} and plug the result and Eq. (48) into Eq. (46) to derive the relative error on the FoM from the joint use of WL and MFs. Setting to zero the MFs terms gives the error on the FoM from WL only. We can finally write
where we defined ℛ_{FoM} = FoM(γ + V_{n})/FoM(γ) to denote the ratio between the FoM from WL + MFs and from WL only. The relative errors δFoM(X)/FoM(X) can be computed as described above and lead to a lengthy but simple algebraic formula (not reported here for the sake of brevity), which provides the error on the FoM ratio as a function of the error on the WL-only FoM and the relative uncertainties (ε_{αα}, ε_{αβ}, ε_{ββ}) of the MFW and MFD Fisher matrices.
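As a numerical cross-check of such an error propagation, one can perturb the MF Fisher matrices directly and measure the induced scatter of ℛ_{FoM}; all matrices and error levels below are illustrative stand-ins, not the values used in the paper:

```python
import numpy as np

def fom(F):
    """FoM = sqrt(det F) for a 2x2 (w0, wa) Fisher block."""
    return np.sqrt(np.linalg.det(F))

rng = np.random.default_rng(3)
F_wl = np.array([[50.0, 5.0], [5.0, 10.0]])  # hypothetical shear-only block
F_mfw = np.array([[6.0, 1.0], [1.0, 3.0]])   # hypothetical wide-field MF block
F_mfd = 0.1 * F_mfw                          # deep field: small-area contribution

def ratio_scatter(eps_mf, n=4000):
    """Monte Carlo propagation: perturb the MF Fisher elements by a relative
    error eps_mf and measure the scatter of R_FoM = FoM(WL+MF)/FoM(WL)."""
    samples = []
    for _ in range(n):
        pw = F_mfw * (1.0 + eps_mf * rng.normal(size=(2, 2)))
        pd = F_mfd * (1.0 + eps_mf * rng.normal(size=(2, 2)))
        pw = 0.5 * (pw + pw.T)  # keep the Fisher matrices symmetric
        pd = 0.5 * (pd + pd.T)
        samples.append(fom(F_wl + pw + pd) / fom(F_wl))
    samples = np.array(samples)
    return samples.std() / samples.mean()

rel_err = ratio_scatter(0.10)  # 10% error on each MF Fisher element
```

In this toy configuration a 10% element-wise error on the MF Fisher matrices induces only a percent-level scatter on the FoM ratio, illustrating why the requirement derived below is not very demanding.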
In Fig. 12 we plot δℛ_{FoM}/ℛ_{FoM} for the case of the optimistic cosmic shear scenario combined with MFs from the wide and deep areas (with mag_{lim} = 25.5 for the deep region), also adding a 10% prior on the MFs nuisance parameters. In order to reduce the number of parameters, we took the relative uncertainty on the WL Fisher matrix to be the same for each (α, β) combination, and show the results as a function of , assuming that the error on the other elements of the MFW and MFD Fisher matrices is the same. Finally, for MFs, we used the (V_{0}, V_{1}, V_{2}) data. Dropping these assumptions does not qualitatively change the results, with only a minor quantitative effect.
Fig. 12. Error on the FoM ratio ℛ as a function of the relative uncertainty on the MFW Fisher matrix elements. We set a 10% prior on MF nuisance parameters, and considered the optimistic scenario for shear only. Blue, purple, and red lines refer to giving δFoM(WL)/FoM(WL) = (0.6, 2.2, 6.4)%, respectively. 
This figure offers a qualitative way to set a requirement on the accuracy with which the MFs Fisher matrix elements have to be determined so that the estimated value of ℛ_{FoM} is reliable. For instance, the rightmost panel in Fig. 9 shows ℛ_{FoM} ≃ 1.15 for the adopted prior on MFs. When we require that ℛ_{FoM} − δℛ_{FoM} ≃ 1 (i.e., that the FoM ratio remains larger than 1 at 1σ), we need δℛ_{FoM}/ℛ_{FoM} < 13%. Figure 12 then shows that this can be achieved when . Although a detailed propagation of the different errors on the input quantities has not been done, the margin is large enough for us to be confident that the requirement can be fulfilled, thus making our estimate of the FoM ratios quite reliable.
6. Conclusions
The greater sample size and the higher data quality promised by Stage IV lensing surveys enable us to go beyond second-order statistics to probe the properties of the convergence field. Standard second-order probes, such as the shear tomography power spectrum and the two-point correlation function, only trace the Gaussian properties of the field, while higher orders allow us to probe its non-Gaussianity and thus open the way to a better description of the field and hence stronger constraints on the underlying cosmological model. MFs stand out as promising candidates because they depend on the generalized skewness parameters that probe the higher order statistical properties of the field and its first derivatives. In Vicinanza et al. (2019), we matched the theoretically predicted MFs, based on a perturbative series expansion, to the MFs actually measured in a convergence map reconstructed from noisy shear data.
The present work differs from our previous paper in a number of aspects, which makes a straightforward comparison not quite possible. First, we developed a novel calibration strategy that allowed us to reduce the number of nuisance parameters. To this aim, we derived, under reasonable assumptions, the scaling of the noise-to-signal variance ratios and of the functions related to the skewness of the noise field. This derivation enabled us to halve the dimension of the nuisance parameter vector p_{nuis} to 7 from the original 13. This significant decrease does not spoil the quality of the matching between theory and data, with the rms of best-fit residuals being almost the same as for the original recipe. In order to validate the scaling assumptions and determine fiducial values of the nuisance parameters, we performed a joint fit to the full MF dataset, thus taking into account the covariance among the MFs. This is different from Paper I, where we considered only the diagonal elements of the covariance matrix. This more statistically correct approach also led us to change the estimate of the systematics covariance matrix Cov^{sys}, which is now obtained by propagating the uncertainties in the determination of the nuisance parameters to the final estimate of the theoretical MFs. As a consequence, if a prior is added on p_{nuis}, Cov^{sys} changes accordingly, which reduces its effect on the overall error budget, as expected. As a further improvement, we also validated this calibration procedure against data with a different source redshift distribution and MFs S/N ratio, considering datasets at varying limiting magnitude mag_{lim}.
Two points still remain to be addressed. First, the validation was carried out on lognormal simulations generated with FLASK for a fixed set of cosmological parameters. Although the fiducial values used here differ from those in Vicinanza et al. (2019), it is critical to verify that the proposed calibration procedure still holds in radically alternative cosmologies. By this, we do not mean that the nuisance parameters are the same, but that the set of Eqs. (30)–(32) still allows us to match theory and data without dramatically increasing the rms scatter of the best-fit residuals, because these residuals enter the estimate of the total covariance matrix. FLASK is an ideal tool for this analysis because it allows us to quickly generate convergence maps, taking as input only the matter power spectrum for the given model. We therefore plan to investigate this question by also varying the number of maps and the noise properties. As a further step toward realistic mocks, we also plan to change the angular selection function in order to investigate the effect of the mask on the MFs measurement and the validity of the calibration procedure in this circumstance. The effect of masking cannot be framed within the derivation of Eqs. (30)–(32), so that ad hoc corrections might be required.
As a second step forward with respect to the first presentation of our approach to MFs in Vicinanza et al. (2019), we here considered the more realistic case of a wide-area survey imaged at a limiting magnitude mag_{W} that contains a deep and narrow region with a fainter limiting magnitude mag_{D}. A joint analysis of shear tomography and MFs (with contributions from both the wide and shallow area and the deep and narrow area) may boost the total FoM. In particular, this allows us to reduce the maximum multipole ℓ_{max} of shear tomography, with the MFs contribution partially compensating for the loss in the shear-only FoM. Although we carried out this analysis for a Euclid-like survey, we also derived the requirements that should be set on the accuracy to which the MFs nuisance parameters have to be known in order to obtain the same FoM as the reference survey but with different values of the deep-region area. When the survey duration is fixed, an increase of A_{deep} comes at the cost of reducing A_{tot}. MFs can then compensate for the loss in FoM, opening the way to a different setup, which can help to better control systematics and augment side products of great interest for legacy science.
These results are interesting in themselves but should be taken “cum grano salis”. First, we have stressed that MFs complement and supplement shear tomography only if severe constraints on the nuisance parameters are available. It is a matter of open investigation to understand whether p_{nuis} can indeed be constrained to the required accuracy. To this end, it needs to be investigated how the error on the calibration procedure scales with the number of mock datasets and moreover, whether the nuisance parameter accuracy scales with the noise properties. This will eventually set requirements on this quantity as well. We plan to address this question in a forthcoming work relying on FLASK data under different cosmological scenarios to also check whether the full method works in all possible configurations (cosmology, noise, number of mock datasets, etc.).
Another question to be answered is what is still missing in our framework. First, we have argued that the use of MFs enables shortening the shear tomography multipole range, which then depends less strongly on an accurate modeling of the matter power spectrum in the highly nonlinear regime. However, this implicitly assumes that MFs themselves are less strongly dependent on nonlinearities. Whether this is indeed the case is an open question; the hardest quantity to model is the matter bispectrum. However, this typically enters through a summation that is weighted by the product of three exponential functions in ℓ. High-ℓ terms are therefore strongly suppressed, likely making MFs less sensitive to the exact nonlinear recipe and to the effect of baryons. Whether this is indeed the case will be the subject of a forthcoming publication, where we will test whether the predicted MFs change when they are evaluated for the same cosmology but with different approaches to modeling the effect of nonlinearities and baryons on the matter power spectrum and bispectrum.
A final missing ingredient is the intrinsic alignment (IA), which, in the weak regime, adds linearly to the lensing shear, so that the reconstructed convergence field is a biased representation of the actual field. It is hard to qualitatively assess whether this affects the MF estimate. On the one hand, IA quickly becomes subdominant at high redshift, so that a possible solution might be to cut the redshift range over which MFs are measured. Moreover, IA is a local effect that should not alter the global topology of the map, hence again not affect MFs. On the other hand, IA increases the correlation among close redshift bins and therefore might also increase the correlation among MFs at different z, which would decrease the MFs constraining power. Moreover, it is possible that IA acts as an additional noise with its own properties (variance and generalized skewness), thus spoiling the accuracy of the matching procedure between theory and data that we have developed here. Although lensing simulations including the effect of IA are not available at the moment, it might be investigated whether IA can be included in FLASK using the option of generating the convergence field directly from a tomographic spectrum that includes IA.
To summarize, the present paper represents the second step along a path toward making MFs a common tool to be added to the standard secondorder shear statistics. As hard as the journey may be, we are confident that the final goal will be rewarding enough to compensate for all the efforts to reach it.
Acknowledgments
CP and VFC are funded by the Italian Space Agency (ASI) through the contract Euclid – IC (I/031/10/0) and acknowledge financial contribution from the agreement ASI/INAF/I/023/12/0. We acknowledge support from the grant MIUR PRIN 2015 "Cosmology and Fundamental Physics: illuminating the Dark Universe with Euclid".
References
 Adler, R. J. 1981, The Geometry of Random Fields (Chichester, UK: Wiley)
 Bartelmann, M., & Maturi, M. 2017, Scholarpedia, 12, 32440
 Blas, D., Lesgourgues, J., & Tram, T. 2011, JCAP, 2011, 034
 Coe, D., Benítez, N., Sánchez, S. F., et al. 2006, AJ, 132, 926
 Di Dio, E., Montanari, F., Lesgourgues, J., & Durrer, R. 2013, JCAP, 2013, 044
 Eriksen, H. K., Novikov, D. I., Lilje, P. B., Banday, A. J., & Górski, K. M. 2004, ApJ, 612, 64
 Euclid Collaboration (Blanchard, A., et al.) 2019, Euclid Preparation: VII. Forecast Validation for Euclid Cosmological Probes
 Green, J., Schechter, P., Baltay, C., et al. 2012, ArXiv e-prints [arXiv:1208.4012]
 Hamana, T., Takada, M., & Yoshida, N. 2004, MNRAS, 350, 893
 Hartlap, J., Simon, P., & Schneider, P. 2007, A&A, 464, 399
 Hikage, C., Matsubara, T., Coles, P., et al. 2008, MNRAS, 389, 1439
 Hoekstra, H., Viola, M., & Herbonnet, R. 2017, MNRAS, 468, 3295
 Jullo, E., Pires, S., Jauzac, M., & Kneib, J.-P. 2013, MNRAS, 437, 3969
 Kaiser, N., & Squires, G. 1993, ApJ, 404, 441
 Kilbinger, M. 2015, Rep. Prog. Phys., 78, 086901
 Komatsu, E., Kogut, A., Nolta, M. R., et al. 2003, ApJS, 148, 119
 Kratochvil, J. M., Lim, E. A., Wang, S., et al. 2011, American Astronomical Society Meeting Abstracts, BAAS, 43, 225.02
 Laureijs, R., Amiaux, J., Arduini, S., et al. 2011, ArXiv e-prints [arXiv:1110.3193]
 LSST Science Collaboration (Abell, P. A., et al.) 2009, ArXiv e-prints [arXiv:0912.0201]
 Matsubara, T. 2010, Phys. Rev. D, 81, 083505
 Matsubara, T., & Jain, B. 2001, ApJ, 552, L89
 Munshi, D., Valageas, P., van Waerbeke, L., & Heavens, A. 2008, Phys. Rep., 462, 67
 Munshi, D., van Waerbeke, L., Smidt, J., & Coles, P. 2011, MNRAS, 419, 536
 Petri, A., Haiman, Z., Hui, L., May, M., & Kratochvil, J. M. 2013, Phys. Rev. D, 88, 123002
 Pires, S., Starck, J.-L., Amara, A., et al. 2009, MNRAS, 395, 1265
 Pratten, G., & Munshi, D. 2012, MNRAS, 423, 3209
 Rix, H.-W., Barden, M., Beckwith, S. V. W., et al. 2004, ApJS, 152, 163
 Sato, J., Takada, M., Jing, Y. P., & Futamase, T. 2001, ApJ, 551, L5
 Shirasaki, M., & Yoshida, N. 2014, ApJ, 786, 43
 Smith, R. E., Peacock, J. A., Jenkins, A., et al. 2003, MNRAS, 341, 1311
 Takahashi, R., Sato, M., Nishimichi, T., Taruya, A., & Oguri, M. 2012, ApJ, 761, 152
 Taruya, A., Takada, M., Hamana, T., Kayo, I., & Futamase, T. 2002, ApJ, 571, 638
 Tegmark, M., Taylor, A. N., & Heavens, A. F. 1997, ApJ, 480, 22
 Tomita, H. 1986, Prog. Theor. Phys., 76, 952
 Vicinanza, M., Cardone, V. F., Maoli, R., et al. 2019, Phys. Rev. D, 99, 043534
 Xavier, H. S., Abdalla, F. B., & Joachimi, B. 2016, MNRAS, 459, 3693
All Tables
Limiting magnitude, total source number density (in gal arcmin^{−2}), and median redshift.
All Figures
Fig. 1. Redshift selection functions used as input for FLASK to simulate different survey depths corresponding to different limiting magnitudes. 

Fig. 2. Top: original convergence map at z = 0.9 for a limiting magnitude mag_{lim} = 24.5, simulated with FLASK. Bottom: same map but reconstructed with the KS method. 

Fig. 3. Numerical (blue dots) vs expected (solid black line) MFs from 500 realizations of a Gaussian random field. 

Fig. 4. Left: MFs as measured from maps with mag_{lim} = 25.5 as a function of the threshold ν for different values of the redshift bin center z and fixed smoothing scale (θ_{s} = 6′). Right: same as in the left panel, but for different values of the smoothing scale θ_{s} and fixed redshift bin center (z = 1.2). 

Fig. 5. Normalized covariance matrix for the MF data vector D defined in the text for the case with mag_{lim} = 25.5. 

Fig. 6. Left: MFs for ν = 2 and different values of the limiting magnitude mag_{lim} as a function of the redshift bin center z, with fixed smoothing scale (θ_{s} = 6′). Right: same as in the left panel, but as a function of the smoothing angle θ_{s} with fixed redshift bin center (z = 1.2). 

Fig. 7. Top: FoM ratio as a function of the prior ε_{P} on the MFs nuisance parameters for different values of the limiting magnitude mag_{D} of the deep survey when V_{1} alone is added to the shear tomography with ℓ_{max} = (1500, 3000, 5000) from left to right. Bottom: same as in the top panel, but with V_{2} alone added to the shear. In each panel, the curves for different mag_{D} overlap so closely that they are indistinguishable. 

Fig. 8. Same as Fig. 7, but adding two MFs to shear tomography: V_{01}, V_{02}, and V_{12} for the left, center, and right panels. Again, the dependence on mag_{D} is hard to appreciate, so that in most panels only a single curve is visible. The only exception is the line referring to mag_{D} = 26.0 in the bottom panels. We plot a smaller ε_{P} range here to better show the behavior over the range for which adding MFs to the shear tomography indeed helps increase the FoM by a significant amount. 

Fig. 9. Same as Fig. 7, but adding all the three MFs to shear tomography with ℓ_{max} = (1500, 3000, 5000) from left to right. Curves for different mag_{D} values are again superimposed, with the only difference that the green curves refer to mag_{D} = 24.5. 

Fig. 10. Left: FoM ratio as a function of the prior ε_{P} on the MFs nuisance parameters for the intermediate shearonly scenario for different values of (mag_{D}, A_{tot}), using all three MFs. We fix A_{deep} = 40 deg^{2}. Center: same as in the left panel, but with A_{deep} = 80 deg^{2}. Right: same as in the left panel, but with A_{deep} = 160 deg^{2}. In each panel, blue, magenta, purple, and red lines refer to mag_{D} = (25.0, 25.5, 26.0, 26.5). 

Fig. 11. Left: deep survey area needed to obtain the same FoM as for the reference shear tomography only survey as a function of the prior on the MFs nuisance parameters for different limiting magnitudes. Right: same as in the left panel, but considering the total survey area. 

Fig. 12. Error on the FoM ratio ℛ as a function of the relative uncertainty on the MFW Fisher matrix elements. We set a 10% prior on the MF nuisance parameters and consider the optimistic shear-only scenario. Blue, purple, and red lines correspond to δFoM(WL)/FoM(WL) = (0.6, 2.2, 6.4)%, respectively. 
