Euclid: Effect of sample covariance on the number counts of galaxy clusters

Aims. We investigate the contribution of shot-noise and sample variance to the uncertainty of cosmological parameter constraints inferred from cluster number counts, in the context of the Euclid survey. Methods. By analysing 1000 Euclid-like light-cones, produced with the PINOCCHIO approximate method, we validate the analytical model of Hu & Kravtsov (2003) for the covariance matrix, which takes into account both sources of statistical error. We then use this covariance to define the likelihood function that best extracts cosmological information from cluster number counts at the level of precision that will be reached by the future Euclid photometric catalogs of galaxy clusters. We also study the impact of the cosmology dependence of the covariance matrix on the parameter constraints. Results. The analytical covariance matrix reproduces the variance measured from the simulations at the 10 per cent level; this difference has no sizeable effect on the errors of cosmological parameter constraints at this level of statistics. Moreover, we find that the Gaussian likelihood with cosmology-dependent covariance is the only model that provides unbiased inference of cosmological parameters without underestimating the errors.


Introduction
Galaxy clusters are the most massive gravitationally bound systems in the Universe (M ∼ 10^14–10^15 M_⊙); they are composed of roughly 85 per cent dark matter, 12 per cent hot ionized gas, and 3 per cent stars (Pratt et al. 2019). These massive structures form through the gravitational collapse of initial perturbations of the matter density field, via a hierarchical process of accretion and merging of small objects into increasingly massive systems (Kravtsov & Borgani 2012). Galaxy clusters therefore have several properties that can be used to obtain cosmological information on the geometry and evolution of the large-scale structure of the Universe (LSS). In particular, the abundance and spatial distribution of such objects are sensitive to several cosmological parameters, such as the RMS mass fluctuation of the (linear) power spectrum on 8 h^-1 Mpc scales (σ_8) and the matter content of the Universe (Ω_m) (Borgani et al. 1999; Schuecker et al. 2003; Allen et al. 2011; Pratt et al. 2019). Moreover, clusters can be observed out to redshift z ∼ 2, thus sampling the cosmic epochs during which the effect of dark energy begins to dominate the expansion of the Universe; as such, the evolution of the statistical properties of galaxy clusters should allow us to place constraints on the dark energy equation of state, and thus to detect possible deviations of dark energy from a simple cosmological constant (Sartoris et al. 2012). Finally, such observables can be used to constrain neutrino masses (e.g. Costanzi et al. 2013; Mantz et al. 2015; Costanzi et al. 2019; Bocquet et al. 2019; DES Collaboration et al. 2020), the Gaussianity of initial conditions (e.g. Sartoris et al. 2010; Mana et al. 2013), and the behavior of gravity on cosmological scales (e.g. Cataneo & Rapetti 2018; Bocquet et al. 2015). (This paper is published on behalf of the Euclid Consortium; e-mail: alessandra.fumagalli@inaf.it.)
The main obstacle in the use of clusters as cosmological probes lies in the proper calibration of the systematic uncertainties that characterize the analyses of cluster surveys. First, cluster masses are not directly observed but must be inferred through other measurable properties of clusters, e.g. properties of their galaxy population (richness, velocity dispersion) or of the intracluster gas (e.g. total gas mass, temperature, pressure). The relationships between these observables and cluster masses, called scaling relations, provide a statistical measurement of masses, but require an accurate calibration in order to correctly relate the mass proxies to the actual cluster mass. Moreover, scaling relations can be affected by intrinsic scatter, due to the properties of individual clusters and to baryonic physics effects, which complicates the calibration process (Kravtsov & Borgani 2012; Pratt et al. 2019). Other measurement errors are related to the estimation of redshifts and to the selection function (Allen et al. 2011). In addition, there may be theoretical systematics linked to the modelling of the statistical errors: shot-noise, the uncertainty due to the discrete nature of the data, and sample variance, the uncertainty due to the finite size of the survey; in the case of a "full-sky" survey, the latter takes the name of cosmic variance and describes the fact that we can observe only a single random realization of the Universe (e.g. Valageas et al. 2011). Finally, the analytical models describing the observed distributions, such as the mass function and the halo bias, have to be carefully calibrated, to avoid introducing further systematics (e.g. Sheth & Tormen 2002; Tinker et al. 2008, 2010; Bocquet et al. 2015; Despali et al. 2016; Castro et al. 2021).
The study and control of these uncertainties are fundamental for future surveys, which will provide large cluster samples allowing us to constrain cosmological parameters with a level of precision much higher than that obtained so far. One of the main forthcoming surveys is the European Space Agency (ESA) mission Euclid¹, planned for launch in 2022, which will map ∼15 000 deg² of the extragalactic sky up to redshift 2, in order to investigate the nature of dark energy, dark matter and gravity. Galaxy clusters are among the cosmological probes that will be used by Euclid: the mission is expected to yield a sample of ∼10^5 clusters using photometric and spectroscopic data and through gravitational lensing (Laureijs et al. 2011; Euclid Collaboration: Adam et al. 2019). A forecast of the capabilities of the Euclid cluster survey has been presented by Sartoris et al. (2016), who show the effect of the photometric selection function on the number of detected objects and the consequent cosmological constraints for different cosmological models. Also, Köhlinger et al. (2015) show that the weak lensing systematics in the mass calibration will be under control for Euclid, which will instead be limited by the statistics of the cluster samples themselves.
The aim of this work is to assess the contribution of shot-noise and sample variance to the statistical error budget expected for the Euclid photometric survey of galaxy clusters. The expectation is that the shot-noise error will decrease thanks to the large number of detected clusters, making sample variance no longer negligible. To quantify the contribution of these effects, an accurate statistical analysis is required, performed on a large number of realizations of past light-cones extracted from cosmological simulations describing the distribution of cluster-sized halos. This is made possible by the use of approximate methods for such simulations (e.g. Monaco 2016, for a review). A class of these methods describes the formation process of dark matter halos, i.e. the dark matter component of galaxy clusters, through Lagrangian Perturbation Theory (LPT), which provides the distribution of large-scale structures in a faster and computationally less expensive way than "exact" N-body simulations. As a disadvantage, such catalogs are less accurate and have to be calibrated in order to reproduce N-body results with sufficient precision. By using a large set of LPT-based simulations, we test the accuracy of an analytical model for the computation of the covariance matrix and identify the likelihood function that optimizes the extraction of unbiased cosmological information from cluster number counts. In addition, we analyze the impact of the cosmology dependence of the covariance matrix on the estimation of cosmological parameters.

¹ http://www.euclid-ec.org

This paper is organized as follows: in Sect. 2 we present the quantities involved in the analysis, such as the mass function, the likelihood function and the covariance matrix, while in Sect. 3 we describe the simulations used in this work, which are dark matter halo catalogs produced by the PINOCCHIO algorithm (Monaco et al. 2002; Munari et al. 2017). In Sect. 4 we present the analyses and the results that we obtain by studying the number counts: in Sect. 4.1 (and in Appendix A) we validate the analytical model for the covariance matrix, by comparing it with the matrix from simulations; in Sect. 4.2 we analyze the effect of the mass and redshift binning on the estimation of parameters, while in Sect. 4.3 we compare the effect of different likelihood models on the parameter posteriors. In Sect. 5 we present our conclusions. While this paper focuses on the analysis relevant for a cluster survey similar in sky coverage and depth to the Euclid one, for completeness we provide in Appendix B results relevant for present and ongoing surveys.

Theoretical background
In this section we introduce the theoretical framework needed to model the cluster number counts and derive cosmological constraints via Bayesian inference.

Number counts of galaxy clusters
The starting point to model the number counts of galaxy clusters is the halo mass function, defined as the comoving number density of collapsed objects at redshift z with masses between M and M + dM (Press & Schechter 1974). It can be written as

\[
\frac{\mathrm{d}n(M,z)}{\mathrm{d}M} \;=\; \frac{\bar{\rho}_{\rm m}}{M}\,\frac{\mathrm{d}\ln\nu}{\mathrm{d}M}\,\nu f(\nu)\,,
\]

where ρ̄_m/M is the inverse of the Lagrangian volume of a halo of mass M, and ν = δ_c/σ(R, z) is the peak height, defined in terms of the variance of the linear density field smoothed on scale R,

\[
\sigma^2(R,z) \;=\; \frac{1}{2\pi^2}\int \mathrm{d}k\,k^2\,P(k,z)\,W_R^2(k)\,,
\]

where R is the radius enclosing the mass M = (4π/3) ρ̄_m R³, W_R(k) is the filtering function, and P(k, z) is the initial matter power spectrum, linearly extrapolated to redshift z. The term δ_c represents the critical linear overdensity for spherical collapse and contains a weak dependence on cosmology and redshift that can be expressed as (Nakamura & Suto 1997)

\[
\delta_{\rm c}(z) \;=\; \frac{3}{20}\,(12\pi)^{2/3}\,\left[\,1 + 0.012299\,\log_{10}\Omega_{\rm m}(z)\,\right]\,.
\]

One of the main characteristics of the mass function is that, when expressed in terms of the peak height, its shape is nearly universal, meaning that the multiplicity function ν f(ν) can be described in terms of a single variable and with the same parameters for all redshifts and cosmological models (Sheth & Tormen 2002). A number of parametrizations have been derived by fitting the mass distribution from N-body simulations (Jenkins et al. 2001; White 2002; Tinker et al. 2008; Watson et al. 2013), in order to describe such universality with the highest possible accuracy. At present, a fully universal parametrization has not yet been found, and the main differences between the various results reside in the definition of halos, which can be based on the Friends-of-Friends (FoF) and Spherical Overdensity (SO) algorithms (e.g. White 2001; Kravtsov & Borgani 2012) or on the dynamical definition of the splashback radius (Diemer 2017, 2020), and in the overdensity at which halos are identified.
The need to improve the accuracy and precision of the mass function parametrization is reflected in the differences found in the cosmological parameter estimation, in particular for future surveys such as Euclid (Salvati et al. 2020; Artis et al. 2021). Another way to predict the abundance of halos is through emulators, built by fitting the mass function from simulations as a function of cosmology; such emulators are able to reproduce the mass function to within a few per cent accuracy (Heitmann et al. 2016; McClintock et al. 2019; Bocquet et al. 2020). The description of the cluster mass function is further complicated by the presence of baryons, which have to be taken into account when analyzing observational data; their effect must therefore be included in the calibration of the model (e.g. Cui et al. 2014; Velliscig et al. 2014; Bocquet et al. 2015; Castro et al. 2021).
In this work, we keep the mass function fixed, assuming that the model has been correctly calibrated. The reference mass function that we assume for our analysis is that of Despali et al. (2016, hereafter D16),

\[
\nu f(\nu) \;=\; 2A\,\left(1 + \frac{1}{\nu'^{\,p}}\right)\left(\frac{\nu'}{2\pi}\right)^{1/2}\,\mathrm{e}^{-\nu'/2}\,,
\]

with ν' = a ν². The values of the parameters are A = 0.3298, a = 0.7663, p = 0.2579 (the "All z – Planck cosmology" case in D16). Comparisons with numerical simulations show departures from the universality described by this model of the order of 5–8 per cent, provided that halo masses are computed within the virial overdensity, as predicted by the spherical collapse model. Besides the systematic uncertainty due to the fitting model, the mass function is affected by two sources of statistical error (which do not depend on the observational process): shot-noise and sample variance. Shot-noise is the sampling error arising from the discrete nature of the data; its amplitude is proportional to the square root of the number counts, so it contributes mainly in the high-mass tail of the mass function, where the number of objects is low. Sample variance, on the other hand, depends only on the size and shape of the sampled volume; it arises as a consequence of the existence of super-sample Fourier modes, with wavelengths exceeding the survey size, which cannot be sampled in the analysis of a finite-volume survey. Sample variance introduces correlations between different mass and redshift ranges, unlike shot-noise, which affects only objects in the same bin. For currently available data the main contribution to the error comes from shot-noise, and the sample variance term is usually neglected (e.g. Mantz et al. 2015; Bocquet et al. 2019). Nevertheless, future surveys will provide catalogs with a much larger number of objects, making sample variance comparable to, or even greater than, the shot-noise level (Hu & Kravtsov 2003).
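As an illustration, the D16 multiplicity function with these parameters can be evaluated as follows (a minimal sketch: the function name is ours, and computing the peak height ν itself requires a power spectrum, which is not included here):

```python
import numpy as np

# Despali et al. (2016) "All z - Planck cosmology" best-fit parameters (from the text).
A, a, p = 0.3298, 0.7663, 0.2579

def multiplicity_d16(nu):
    """D16 multiplicity function nu*f(nu), with nu' = a*nu^2."""
    nup = a * nu**2
    return 2.0 * A * (1.0 + nup**(-p)) * np.sqrt(nup / (2.0 * np.pi)) * np.exp(-nup / 2.0)
```

Multiplying this by (ρ̄_m/M) dlnν/dM then gives the differential mass function of the equation above.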

Definition of likelihood functions
The analysis of the mass function is performed through Bayesian inference, by maximizing a likelihood function. The posterior distribution is explored with a Markov Chain Monte Carlo (MCMC) approach (Heavens 2009), using PyMultiNest (Buchner et al. 2014), a Python wrapper for the nested sampler MultiNest.
The likelihood commonly adopted in the literature for number counts analyses is the Poissonian one, which takes into account only the shot-noise term. The simplest way to add the sample variance contribution is to use a Gaussian likelihood. In this work, we consider the following likelihood functions:

- Poissonian:
\[
\ln \mathcal{L} \;=\; \sum_{i\alpha}\left[\,x_{i\alpha}\,\ln\mu_{i\alpha} - \mu_{i\alpha} - \ln\left(x_{i\alpha}!\right)\right]\,,
\]
where x_{iα} and μ_{iα} are, respectively, the observed and expected number counts in the i-th mass bin and α-th redshift bin. Here the bins are not correlated, since shot-noise does not produce cross-correlations, and the single-bin likelihoods are simply multiplied;

- Gaussian with shot-noise only:
\[
\ln \mathcal{L} \;=\; -\frac{1}{2}\sum_{i\alpha}\left[\frac{\left(x_{i\alpha}-\mu_{i\alpha}\right)^2}{\sigma^2_{i\alpha}} + \ln\left(2\pi\,\sigma^2_{i\alpha}\right)\right]\,,
\]
where σ²_{iα} = μ_{iα} is the shot-noise variance. This function represents the limit of the Poissonian case for large occupancy numbers;

- Gaussian with shot-noise and sample variance:
\[
\ln \mathcal{L} \;=\; -\frac{1}{2}\,(\boldsymbol{x}-\boldsymbol{\mu})^{\rm T}\,\mathsf{C}^{-1}\,(\boldsymbol{x}-\boldsymbol{\mu}) \;-\; \frac{1}{2}\,\ln\left[(2\pi)^{N_{\rm D}}\det\mathsf{C}\right]\,,
\]
where x = {x_{iα}} and μ = {μ_{iα}}, while C = {C_{αβij}} is the covariance matrix, which correlates different bins through the sample variance contribution, and N_D is the dimension of the data vector. Like the previous one, this function is valid in the limit of large occupancy numbers.
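The three likelihood models above can be sketched as follows (a schematic implementation; the function names are ours, and `x` and `mu` are the data and model vectors flattened over all mass and redshift bins):

```python
import numpy as np
from scipy.special import gammaln  # ln(x!) = gammaln(x + 1)

def lnL_poisson(x, mu):
    # Poissonian: sum over independent bins of x ln(mu) - mu - ln(x!)
    return np.sum(x * np.log(mu) - mu - gammaln(x + 1.0))

def lnL_gauss_sn(x, mu):
    # Gaussian with shot-noise-only variance sigma^2 = mu (diagonal covariance)
    return -0.5 * np.sum((x - mu)**2 / mu + np.log(2.0 * np.pi * mu))

def lnL_gauss_cov(x, mu, C):
    # Gaussian with full covariance C (shot-noise + sample variance)
    r = x - mu
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet + r.size * np.log(2.0 * np.pi))
```

For large occupancy numbers the Poissonian and diagonal-Gaussian forms converge, and `lnL_gauss_cov` reduces to `lnL_gauss_sn` when C = diag(μ).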
We maximize the average likelihood, defined as

\[
\langle \ln \mathcal{L} \rangle \;=\; \frac{1}{N_{\rm S}} \sum_{a=1}^{N_{\rm S}} \ln \mathcal{L}^{(a)}\,,
\]

where N_S = 1000 is the number of light-cones and ln L^(a) is the likelihood of the a-th light-cone, evaluated according to the equations described above. The posteriors obtained in this way are consistent with those of a single light-cone but are, in principle, centered on the input parameter values, since the effect of cosmic variance that affects each realization of the matter density field is averaged out when combining all 1000 light-cones; this procedure makes it easier to detect possible biases in the parameter posteriors due to the presence of systematics.
To estimate the differences in the parameter constraints between the various likelihood models, we quantify the cosmological gain using the figure of merit (hereafter FoM; Albrecht et al. 2006) in the Ω_m–σ_8 plane, defined as

\[
{\rm FoM}\left(\Omega_{\rm m}, \sigma_8\right) \;=\; \frac{1}{\sqrt{\det\left[{\rm Cov}(\Omega_{\rm m}, \sigma_8)\right]}}\,,
\]

where Cov(Ω_m, σ_8) is the parameter covariance matrix computed from the sampled points in parameter space. The FoM is proportional to the inverse of the area enclosed by the ellipse representing the 68 per cent confidence level and gives a measure of the precision of the parameter estimation: the larger the FoM, the more precise the evaluation of the parameters. However, a larger FoM may indicate not a more efficient extraction of information but rather an underestimation of the error in the likelihood analysis.
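The FoM definition above translates directly into a few lines operating on MCMC samples (a sketch; the sample arrays are assumed to come from the chains):

```python
import numpy as np

def figure_of_merit(om_samples, s8_samples):
    """FoM = 1 / sqrt(det Cov(Omega_m, sigma_8)), from sampled chain points."""
    cov = np.cov(np.vstack([om_samples, s8_samples]))  # 2x2 parameter covariance
    return 1.0 / np.sqrt(np.linalg.det(cov))
```

For uncorrelated marginal errors of width σ(Ω_m) and σ(σ_8), the FoM reduces to 1/[σ(Ω_m) σ(σ_8)], which makes its inverse-area interpretation explicit.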

Covariance matrix
The covariance matrix can be estimated from a large set of simulations through the equation

\[
C_{\alpha\beta ij} \;=\; \frac{1}{N_{\rm S}-1}\,\sum_{m=1}^{N_{\rm S}} \left(n^{(m)}_{i\alpha} - \bar{n}_{i\alpha}\right)\left(n^{(m)}_{j\beta} - \bar{n}_{j\beta}\right)\,,
\]

where m = 1, ..., N_S labels the simulation, n^(m)_{iα} is the number of objects in the i-th mass bin and α-th redshift bin for the m-th catalog, and n̄_{iα} is the same number averaged over the set of N_S simulations. This matrix describes both the shot-noise variance, given simply by the number counts in each bin, and the sample variance contribution or, more properly, the sample covariance. Actually, the precision matrix C^{-1} (which enters Eq. 7), obtained by inverting Eq. (10), is biased due to the noise generated by the finite number of realizations; the inverse matrix must therefore be corrected by a factor (Anderson 2003; Hartlap et al. 2007; Taylor et al. 2013)

\[
C^{-1}_{\rm unbiased} \;=\; \frac{N_{\rm S} - N_{\rm D} - 2}{N_{\rm S} - 1}\; C^{-1}\,,
\]

where N_S is the number of catalogs and N_D the dimension of the data vector, i.e. the total number of bins. Although the use of simulations allows us to compute the covariance in a simple way, numerical estimates of the covariance matrix have some limitations, mainly due to the presence of statistical noise, which can only be reduced by increasing the number of catalogs. In addition, simulations make it possible to compute the matrix only at their input cosmology, preventing a fully cosmology-dependent analysis. To overcome these limitations, one can adopt an analytic prescription for the covariance matrix (Hu & Kravtsov 2003; Valageas et al. 2011; Lacasa et al. 2018). This involves a simplified treatment of non-linearities, so the validity of this approach must be demonstrated by comparison with simulations. To this end, we consider the analytical model proposed by Hu & Kravtsov (2003) and validate its predictions against simulated data (see Sect. 4.1).
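The estimator and the correction factor above can be sketched as follows (function names are ours; `counts` stacks the binned counts of all light-cones):

```python
import numpy as np

def sample_covariance(counts):
    """counts: (N_S, N_D) array of binned number counts, one row per light-cone."""
    NS = counts.shape[0]
    d = counts - counts.mean(axis=0)   # deviations from the mean over realizations
    return d.T @ d / (NS - 1.0)

def unbiased_precision(C, NS):
    """Hartlap et al. (2007) de-biasing of the inverse of a noisy covariance."""
    ND = C.shape[0]
    return (NS - ND - 2.0) / (NS - 1.0) * np.linalg.inv(C)
```

With N_S = 1000 and N_D = 100 the correction factor is (1000 − 100 − 2)/999 ≈ 0.90, as used in Sect. 4.1.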
As stated before, the total covariance is given by the sum of the shot-noise variance and the sample covariance,

\[
C \;=\; C^{\rm SN} + C^{\rm SV}\,.
\]

According to the model, these terms can be computed as

\[
C^{\rm SN}_{\alpha\beta ij} \;=\; \langle N\rangle_{\alpha i}\; \delta_{\alpha\beta}\,\delta_{ij}\,, \qquad
C^{\rm SV}_{\alpha\beta ij} \;=\; \langle N b\rangle_{\alpha i}\; \langle N b\rangle_{\beta j}\; S_{\alpha\beta}\,,
\]

where ⟨N⟩_{αi} and ⟨Nb⟩_{αi} are, respectively, the expectation values of the number counts and of the number counts times the halo bias in the i-th mass bin and α-th redshift bin,

\[
\langle N\rangle_{\alpha i} \;=\; \Omega_{\rm sky} \int_{\Delta z_\alpha} \mathrm{d}z\, \frac{\mathrm{d}V}{\mathrm{d}z\,\mathrm{d}\Omega} \int_{\Delta M_i} \mathrm{d}M\, \frac{\mathrm{d}n}{\mathrm{d}M}\,, \qquad
\langle N b\rangle_{\alpha i} \;=\; \Omega_{\rm sky} \int_{\Delta z_\alpha} \mathrm{d}z\, \frac{\mathrm{d}V}{\mathrm{d}z\,\mathrm{d}\Omega} \int_{\Delta M_i} \mathrm{d}M\, \frac{\mathrm{d}n}{\mathrm{d}M}\, b(M,z)\,,
\]

with Ω_sky = 2π(1 − cos θ), where θ is the field-of-view angle of the light-cone, and b(M, z) is the halo bias as a function of mass and redshift. In the following, we adopt for the halo bias the expression provided by Tinker et al. (2010). The term S_{αβ} is the covariance of the linear density field between two redshift bins,

\[
S_{\alpha\beta} \;=\; D(z_\alpha)\, D(z_\beta) \int \frac{\mathrm{d}^3 k}{(2\pi)^3}\, P(k)\, W_\alpha(\boldsymbol{k})\, W^{*}_\beta(\boldsymbol{k})\,,
\]

where D(z) is the linear growth factor, P(k) is the linear matter power spectrum at the present time, and W_α(k) is the window function of the redshift bin, which depends on the shape of the volume probed. The simplest case is the spherical top-hat window function (see Appendix A), while the window function for a redshift slice of a light-cone is given in Costanzi et al. (2019) and takes the form

\[
W_\alpha(\boldsymbol{k}) \;=\; \frac{4\pi}{V_\alpha} \int_{\Delta z_\alpha} \mathrm{d}z\, \frac{\mathrm{d}V}{\mathrm{d}z} \sum_{\ell m} (\mathrm{i})^{\ell}\, j_\ell\!\left[k\, r(z)\right]\, Y_{\ell m}(\hat{\boldsymbol{k}})\, K_\ell\,,
\]

where dV/dz and V_α are, respectively, the volume per unit redshift and the volume of the slice, which depend on cosmology. In the above equation, j_ℓ[k r(z)] are the spherical Bessel functions, Y_{ℓm}(k̂) are the spherical harmonics, k̂ is the angular part of the wave-vector, and K_ℓ are the coefficients of the harmonic expansion of the circular survey footprint,

\[
K_\ell \;=\; \frac{P_{\ell-1}(\cos\theta) - P_{\ell+1}(\cos\theta)}{(2\ell+1)\,(1-\cos\theta)}\,,
\]

where P_ℓ(cos θ) are the Legendre polynomials.
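Once ⟨N⟩, ⟨Nb⟩ and S_{αβ} have been tabulated, assembling the total matrix is a simple outer product (a sketch with our own function name; the inputs are assumed to be precomputed on the bin grid, since evaluating S_{αβ} itself requires the window-function integrals above):

```python
import numpy as np

def analytic_covariance(N, Nb, S):
    """
    N  : (Nz, Nm) expected counts per redshift/mass bin
    Nb : (Nz, Nm) expected counts weighted by the halo bias
    S  : (Nz, Nz) covariance of the linear density field between z bins
    Returns the (Nz*Nm, Nz*Nm) total covariance C^SN + C^SV.
    """
    Nz, Nm = N.shape
    C_sn = np.diag(N.ravel())  # shot-noise: diagonal, equal to the expected counts
    # C^SV_{alpha beta i j} = <Nb>_{alpha i} <Nb>_{beta j} S_{alpha beta}
    C_sv = np.einsum('ai,bj,ab->aibj', Nb, Nb, S).reshape(Nz * Nm, Nz * Nm)
    return C_sn + C_sv
```

The flattening convention (redshift-major, mass-minor) must match the ordering of the data vector used in the Gaussian likelihood.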

Simulations
The accurate estimation of the statistical uncertainty associated with number counts must be carried out with a large set of simulated catalogs, representing different realizations of the Universe. Such a large number of synthetic catalogs can hardly be provided by N-body simulations, which produce accurate results but have high computational costs. Instead, the use of approximate methods, based on perturbation theory, makes it possible to generate a large number of catalogs in a faster and far less computationally expensive way than with N-body simulations. This comes at the expense of less accurate results: perturbative theories give an approximate description of particle and halo displacements, which are computed directly from the initial configuration of the gravitational potential, rather than by computing the gravitational interactions at each time step of the simulation (e.g. Monaco 2016; Sahni & Coles 1995). PINOCCHIO (PINpointing Orbit-Crossing Collapsed HIerarchical Objects; Monaco et al. 2002; Munari et al. 2017) is an algorithm that generates dark matter halo catalogs through Lagrangian Perturbation Theory (LPT; e.g. Moutarde et al. 1991; Buchert 1992; Bouchet et al. 1995) and ellipsoidal collapse (e.g. Bond & Myers 1996; Eisenstein & Loeb 1995), up to third order. The code simulates cubic boxes with periodic boundary conditions, starting from a regular grid on which an initial density field is generated in the same way as in N-body simulations. A collapse time is computed for each particle using ellipsoidal collapse. The collapsed particles on the grid are then displaced with LPT to form halos, and halos are finally moved to their final positions by again applying LPT. The code is also able to build past-light cones (PLC), by replicating the periodic boxes through an "on-the-fly" process that selects only the halos causally connected with an observer at the present time, once the position of the "observer" and the survey sky area are fixed. This method permits us to generate PLC in a continuous way, i.e. avoiding the "piling-up" of snapshots at a discrete set of redshifts.
The catalogs generated by PINOCCHIO reproduce, to within ∼5–10 per cent accuracy, the two-point statistics on large scales (k < 0.4 h Mpc^-1), the linear bias and the mass function of halos derived from full N-body simulations (Munari et al. 2017). The accuracy of these statistics can be further increased by re-scaling the PINOCCHIO halo masses so as to match a specific mass function calibrated against N-body simulations. Before analyzing the catalogs, we perform this calibration of halo masses; this step is required both because the PINOCCHIO accuracy in reproducing the halo mass function is "only" 5 per cent, and because its calibration was performed by considering a universal FoF halo mass function, while D16 define halos based on the spherical overdensity within the virial radius, demonstrating that the resulting mass function is much nearer to universal evolution than that of FoF halos.
Masses have been re-scaled by matching the halo mass function of the PINOCCHIO catalogs to the analytical model of D16. More in detail, we predicted the value of each single mass M_i, with i = 1, 2, 3, ..., by inverting the cumulative mass function, and we assigned such values to the simulated halos, previously sorted so as to preserve the mass-order ranking. During this process, all thousand catalogs were stacked together, which is equivalent to using a 1000 times larger volume: the mean distribution obtained in this way contains fluctuations due to shot-noise and sample variance that are reduced by a factor √1000 and can thus be properly compared with the theoretical one, while the fluctuations in each rescaled catalog are preserved. If instead the mass function of each single realization were directly compared with the model, the shot-noise and sample variance fluctuations would have been washed away.

(The PLC can be obtained on request; the list of available mocks can be found at http://adlibitum.oats.inaf.it/monaco/mocks.html, and the light-cones analyzed here are the ones labelled "NewClusterMocks". Note also that the Euclid light-cones will be slightly larger than our simulations, about a third of the sky; moreover, the survey will cover two separate patches of the sky, which is relevant to the effect of sample variance. However, for this first analysis, the PINOCCHIO light-cones are sufficient to obtain an estimate of the statistical error that will characterize catalogs of such size and number of objects.)
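The rank-order rescaling described above can be sketched as follows (a simplified illustration: the exact rank-matching convention is our assumption, and `cumulative_n_of_m` stands for the model prediction N(>M) for the stacked volume):

```python
import numpy as np

def rescale_masses(masses, cumulative_n_of_m, m_grid):
    """
    Rank-order rescaling: the i-th most massive halo over the stacked catalogs
    is assigned the mass at which the predicted cumulative count N(>M) equals
    its rank i, preserving the original mass ordering.
    """
    order = np.argsort(masses)[::-1]          # halos from most to least massive
    n_model = cumulative_n_of_m(m_grid)       # decreasing function of M
    ranks = np.arange(1, masses.size + 1)     # i = 1, 2, 3, ...
    # invert N(>M) by interpolation (np.interp needs an increasing abscissa)
    new_sorted = np.interp(ranks, n_model[::-1], m_grid[::-1])
    out = np.empty_like(masses)
    out[order] = new_sorted
    return out
```

Because only the ranks of the original masses are used, the scatter within each individual catalog is preserved, as described in the text.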
In our analyses, we considered objects in the mass range 10^14 ≤ M/M_⊙ ≤ 10^16 and redshift range 0 ≤ z ≤ 2; in this interval, each rescaled light-cone contains ∼3 × 10^5 halos. We note that this simple constant mass-cut at 10^14 M_⊙ provides a reasonable approximation to a more refined computation of the mass selection function expected for the Euclid photometric survey of galaxy clusters (see Fig. 2 of Sartoris et al. 2016; see also Euclid Collaboration: Adam et al. 2019).
In Fig. 1 we show the comparison between the calibrated and non-calibrated mass functions of the light-cones, averaged over the 1000 catalogs, in the redshift bin z = 0.1–0.2. For a better comparison, in the bottom panel we show the residuals of the two mass functions from simulations with respect to that of D16: while the original distribution clearly differs from the analytical prediction, the calibrated mass function follows the model at all masses, except for some small fluctuations in the high-mass end, where the number of objects per bin is low.
We also test the halo bias model of Tinker et al. (2010, hereafter T10), to verify whether the analytical prediction agrees with the bias measured from the rescaled catalogs. The latter is computed by applying the definition

\[
b^2(>\!M, z) \;=\; \frac{\xi_{\rm h}(r; >\!M, z)}{\xi_{\rm m}(r; z)}\,,
\]

where ξ_m is the linear two-point correlation function (2PCF) of matter and ξ_h is the 2PCF of halos with masses above a threshold M; we use 10 mass thresholds in the range 10^14 ≤ M/M_⊙ ≤ 10^15. We compute the correlation functions in the range of separations r = 30–70 h^-1 Mpc, where the approximation of scale-independent bias is valid (Manera & Gaztañaga 2011). The error is computed by propagating the uncertainty on ξ_h, which is an average over the 1000 light-cones. Since the bias from simulations refers to halos with mass ≥ M, the comparison with the T10 model must be made with an effective bias, i.e. a cumulative bias weighted by the mass function,

\[
b_{\rm eff}(>\!M, z) \;=\; \frac{\displaystyle\int_M^{\infty} \mathrm{d}M'\; b(M', z)\,\frac{\mathrm{d}n}{\mathrm{d}M'}}{\displaystyle\int_M^{\infty} \mathrm{d}M'\; \frac{\mathrm{d}n}{\mathrm{d}M'}}\,.
\]

This comparison is shown in Fig. 2, which presents the effective bias from the boxes at various redshifts and the corresponding analytical model, as a function of the peak height (the relation of ν to mass and redshift is given in Sect. 2.1). We notice that the T10 model slightly overestimates (underestimates) the simulated data at low (high) masses and redshifts. However, the difference is below the 5 per cent level over the whole ν range, except for high-ν halos, where the discrepancy is about 10 per cent but consistent with our measurements within the error. We conclude that the T10 model provides a sufficiently accurate description of the halo bias of our simulations.
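The effective-bias integral above is straightforward to evaluate on tabulated mass function and bias grids (a sketch; the grids and function names are ours, and the trapezoidal helper simply avoids NumPy version differences for `np.trapz`):

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal rule on a tabulated grid
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def effective_bias(m_grid, dndm, b_of_m, m_thr):
    """b_eff(>M_thr): halo bias weighted by the mass function above the threshold."""
    sel = m_grid >= m_thr
    num = _trapz(b_of_m[sel] * dndm[sel], m_grid[sel])
    den = _trapz(dndm[sel], m_grid[sel])
    return num / den
```

For a constant bias the weighting has no effect, while for a bias increasing with mass b_eff lies above the threshold value b(M_thr), as expected for a cumulative sample.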

Results
In this section we present the results of the covariance comparison and of the likelihood analyses. First, we validate the analytical covariance matrix described in Sect. 2.3 by comparing it with the matrix from the mocks; this allows us to determine whether the analytical model correctly reproduces the results of the simulations. Once we have verified that we have a correct description of the covariance, we move to the likelihood analysis. First, we determine the optimal redshift and mass binning scheme, which ensures that the cosmological information is extracted in the best possible way. Then, after fixing the mass and redshift binning scheme, we test the effects on the parameter posteriors of different model assumptions: likelihood model, inclusion of sample variance, and cosmology dependence.
With the likelihood analysis, we aim to correctly recover the input values of the cosmological parameters Ω_m, σ_8 and log₁₀ A_s. We directly constrain Ω_m and log₁₀ A_s, assuming flat priors 0.2 ≤ Ω_m ≤ 0.4 and −9.0 ≤ log₁₀ A_s ≤ −8.0, and then derive the corresponding value of σ_8; thus σ_8 and log₁₀ A_s are redundant parameters, linked by the relation P(k) ∝ A_s k^{n_s} and by Eq. (2). All the other parameters are fixed to the Planck 2014 values. We are interested in detecting possible effects on the results, which can occur, in principle, both as biased parameters and as over- or underestimated parameter errors. The former case indicates the presence of systematics due to an incorrect analysis, while the latter means that not all the relevant sources of error are taken into account.
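Since P(k) scales linearly with A_s at fixed shape parameters, σ_8 scales as the square root of A_s, which gives a simple mapping for the derived parameter (a hedged sketch: the fiducial numbers below are purely illustrative, and in the full analysis the mapping also depends on Ω_m through the transfer function):

```python
import numpy as np

# Illustrative fiducial values (our assumption, not from the paper):
LOG10_AS_FID = -8.7   # hypothetical fiducial log10 A_s
SIGMA8_FID = 0.83     # hypothetical corresponding sigma_8

def sigma8_from_log10As(log10_As):
    """sigma_8 ~ sqrt(A_s) at fixed transfer function and spectral index n_s."""
    return SIGMA8_FID * np.sqrt(10.0**log10_As / 10.0**LOG10_AS_FID)
```

In practice one would recompute σ_8 from Eq. (2) at each point of the chain; the square-root scaling only captures the amplitude dependence.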

Covariance matrix estimation
As mentioned before, the sample variance contribution to the noise can be included in the estimation of cosmological parameters by computing a covariance matrix that takes into account the cross-correlation between objects in different mass or redshift bins. We compute the matrix in the range 0 ≤ z ≤ 2 with ∆z = 0.1 and 10^14 ≤ M/M_⊙ ≤ 10^16. According to Eq. (13), since we use N_S = 1000 and N_D = 100 (20 redshift bins and 5 log-equispaced mass bins), we correct the precision matrix by a factor of 0.90.
In the left panel of Fig. 3 we show the normalized sample covariance matrix obtained from the simulations, defined as the relative contribution of the sample variance with respect to the shot-noise level,

\[
\frac{C^{\rm SV}_{\alpha\beta ij}}{\sqrt{C^{\rm SN}_{\alpha\alpha ii}\; C^{\rm SN}_{\beta\beta jj}}}\,,
\]

where C^SN and C^SV are computed from Eqs. (11) and (12). The correlation induced by the sample variance is clearly detected in the block-diagonal covariance matrix (i.e. between mass bins), at least in the low-redshift range, where the sample variance contribution is comparable to, or even greater than, the shot-noise level. In contrast, the off-diagonal and the high-redshift diagonal terms appear affected by the statistical noise mentioned in Sect. 2.3, which completely dominates over the weak sample variance (anti-)correlation. In the right panel of Fig. 3 we show the same matrix computed with the analytical model: comparing the two results, we note that the covariance matrix derived from simulations is well reproduced by the analytical model, at least for the diagonal and first off-diagonal terms, where the former is not dominated by statistical noise. To ease the comparison between simulations and model, and between the amounts of correlation of the various components, in Fig. 4 we show the covariance from the model and from the simulations for different terms and components of the matrix, as a function of redshift: in blue the diagonal sample variance terms (i.e. same mass and redshift bin, C^SV_{ααii}), in red and orange the diagonal sample variance in two different mass bins (C^SV_{ααij}, with j = i + 1 and j = i + 2 respectively), in green the sample variance between two adjacent redshift bins (C^SV_{αβii}, β = α + 1), and in gray the shot-noise variance (C^SN_{ααii}). The upper panel shows the full covariance, the central panel the covariance normalized as in Eq. (24), and the lower panel the normalized difference between model and simulations. Confirming what was noticed in Fig. 3, the block-diagonal sample variance terms are the dominant source of error at low redshift, with a signal that rapidly decreases when considering different mass bins (blue, red and orange lines). Shot-noise dominates at high redshift and in the off-diagonal terms. We also observe that the cross-correlation between different redshift bins produces a small anti-correlation, whose relevance, however, seems negligible; further considerations on this point are presented in Sect. 4.3.
Regarding the comparison between model and simulations, the figure clearly shows that the analytical model reproduces the covariance from simulations well, with deviations within the 10 per cent level. Part of these differences can be ascribed to the statistical noise, which produces random fluctuations in the simulated covariance matrix. We also observe, mainly in the block-diagonal terms, a slight underestimation of the correlation at low redshift and a slight overestimation at high redshift, consistent with the under/overestimation of the T10 halo bias shown in Fig. 2. Additional analyses are presented in Appendix A, where we discuss the description of the model with a spherical top-hat window function. Nevertheless, this discrepancy in the covariance has negligible effects on the parameter constraints at this level of statistics. This comparison will be further analyzed in Sect. 4.3.

Redshift and mass binning
The optimal binning scheme should extract the maximum information from the data while avoiding a waste of computational resources due to an exceedingly fine binning: bins that are too wide hide part of the information, while bins that are too narrow saturate the extractable information, making the analysis unnecessarily expensive. Moreover, too narrow bins could undermine the validity of the Gaussian approximation because of low occupancy numbers. This can also happen at high redshift, where the number density of halos drops quickly.
To establish the best binning scheme for the Poissonian likelihood function, we analyze the data assuming four redshift bin widths ∆z = {0.03, 0.1, 0.2, 0.3} and three numbers of mass bins N_M = {50, 200, 300}. In Fig. 5 we show the FoM as a function of ∆z, for the different mass binnings. Since each likelihood maximization is affected by some statistical noise, the points represent the mean values obtained from 5 realizations (sufficient for a consistent average result), with the corresponding standard error. Regarding the redshift binning, the curve increases with decreasing ∆z and flattens below ∆z ∼ 0.2; from this result we conclude that for bin widths ≲ 0.2 the information is fully preserved and, among these values, we choose ∆z = 0.1 as the bin width that maximizes the information. Changing the mass binning affects the results only marginally, giving points consistent with each other for all the redshift bin widths. To better study the effect of the mass binning, we also compute the FoM for N_M = {5, 500, 600} at ∆z = 0.1, finding that the amount of recovered information saturates around N_M = 300. Thus, we use N_M = 300 for the Poissonian likelihood case, corresponding to ∆log10(M/M⊙) ≃ 0.007.
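The adopted Poissonian binning scheme (∆z = 0.1 and 300 log-mass bins over two decades in mass, giving ∆log10 M ≃ 0.007) can be sketched as follows; the toy catalog of redshifts and masses is randomly generated for illustration and is not a PINOCCHIO light-cone:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy catalog: redshifts in [0, 2), log-masses in [14, 16) (M in M_sun).
z = rng.uniform(0.0, 2.0, 100_000)
logm = rng.uniform(14.0, 16.0, 100_000)

# Redshift bins of width 0.1 and 300 mass bins over two decades,
# i.e. dlog10(M) = (16 - 14) / 300 ~ 0.007 as quoted in the text.
z_edges = np.arange(0.0, 2.0 + 1e-9, 0.1)
m_edges = np.linspace(14.0, 16.0, 300 + 1)

counts, _, _ = np.histogram2d(z, logm, bins=[z_edges, m_edges])
```

The resulting `counts` array, of shape (20, 300), is the data vector entering the Poissonian likelihood, one entry per (z, M) bin.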
We repeat the analysis for the Gaussian likelihood (with full covariance), considering the redshift bin widths ∆z = {0.1, 0.2, 0.3} and three numbers of mass bins N_M = {5, 7, 10}, plus N_M = {2, 20} for ∆z = 0.1. We do not include tighter redshift or mass binnings, to avoid deviating too much from the Gaussian limit of large occupancy numbers. The resulting FoM is shown in Fig. 6: also for the Gaussian case the curve starts to flatten around ∆z ∼ 0.2, and ∆z = 0.1 turns out to be the optimal redshift binning, since for larger bin widths less information is extracted, while for tighter bins the number of objects becomes too low for the validity of the Gaussian limit. Also in this case the mass binning does not influence the results significantly, provided that the number of bins is not too low. We decide to use N_M = 5, corresponding to a mass bin width ∆log10(M/M⊙) = 0.4.

Fig. 4. Covariance (upper panel) and covariance normalized to the shot-noise level (central panel) as predicted by the Hu & Kravtsov (2003) analytical model (solid lines) and by simulations (dashed lines) for different matrix components: diagonal sample variance terms in blue, diagonal sample variance terms in two different mass bins in red and orange, sample variance between two adjacent redshift bins in green, and shot-noise in gray. Lower panel: relative difference between analytical model and simulations. The curves are shown as a function of redshift, in the first mass bin (i = 1).

Likelihood comparison
In this section we compare the posteriors of cosmological parameters obtained by applying the different likelihood definitions to the entire sample of light-cones, considering the average likelihood defined by Eq. (8).
The first result is shown in Fig. 7, which represents the posteriors derived from the three likelihood functions: Poissonian, Gaussian with only shot-noise, and Gaussian with shot-noise and sample variance (Eqs. 5, 6, and 7, respectively). For the latter we compute the analytical covariance matrix at the input cosmology and compare it with the results obtained by using the covariance matrix from simulations. The corresponding FoM in the σ_8-Ω_m plane is shown in Fig. 8. The first two cases look almost identical, meaning that a finer mass binning, such as the one adopted in the Poissonian likelihood, does not improve the constraining power compared to the results from a Gaussian likelihood with shot-noise-only covariance. In contrast, the inclusion of the sample covariance (blue and black contours) produces wider contours (and smaller FoM), indicating that neglecting this effect leads to an underestimation of the errors on the parameters. Also, there is no significant difference between using the covariance matrix from simulations and the analytical model, since the difference in the FoM is below the per cent level. This result means that the level of accuracy reached by the model is sufficient to obtain an unbiased estimation of parameters in a survey of galaxy clusters having sky coverage and cluster statistics comparable to those of the Euclid survey. Accordingly, we will use the analytical covariance matrix to describe the statistical errors in all following likelihood evaluations.
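The likelihood shapes compared here can be sketched as follows. The function names and toy data vector are illustrative placeholders, not the exact Eqs. (5)-(7) of the paper; in particular the data-vector ordering and constant terms are assumptions:

```python
import numpy as np
from math import lgamma, pi

def lnL_poisson(n_obs, n_mod):
    # Independent Poisson counts in each (z, M) bin (Eq. 5 analogue).
    return float(np.sum(n_obs * np.log(n_mod) - n_mod)
                 - sum(lgamma(n + 1.0) for n in n_obs))

def lnL_gauss(n_obs, n_mod, cov):
    # Multivariate Gaussian with full covariance (Eq. 7 analogue);
    # with cov = diag(n_mod) this reduces to the shot-noise-only
    # Gaussian case (Eq. 6 analogue).
    r = np.asarray(n_obs, float) - np.asarray(n_mod, float)
    _, logdet = np.linalg.slogdet(cov)
    chi2 = r @ np.linalg.solve(cov, r)
    return -0.5 * (chi2 + logdet + r.size * np.log(2.0 * pi))
```

Note that when the covariance depends on cosmology, the log-determinant term varies across parameter space and must be kept in the Gaussian case.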
Having established that the inclusion of the sample variance has a non-negligible effect on parameter posteriors, we focus on the Gaussian likelihood case. In Fig. 9 we show the results obtained by using the full covariance matrix and only the block-diagonal of such matrix (C_ijαα), i.e. considering shot-noise and sample variance effects between masses at the same redshift, but no correlation between different redshift bins. The resulting contours present small differences, as can be seen from the comparison of the FoM in Fig. 8: the difference in the FoM between the diagonal and full covariance cases is about one third of the effect generated by the inclusion of the full covariance with respect to the shot-noise-only case. This means that, at this level of statistics and for this redshift binning, the main contribution to the sample covariance comes from the correlation between mass bins, while the correlation between redshift bins produces a minor effect on the parameter posteriors. However, the difference between the two FoMs is not necessarily negligible: for three parameters, a ∼ 25% change in the FoM corresponds to a potential underestimate of the parameter errorbars by ∼ 10%. The Euclid Consortium presently requires for the likelihood estimation that approximations introduce a bias in parameter errorbars smaller than 10%, so as not to impact the first significant digit of the error. Because the list of potential systematics at the required precision level is long, one should avoid any oversimplification that alone induces such a sizeable effect. The full covariance is thus required to properly describe the sample variance effect at the Euclid level of accuracy.
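Isolating the block-diagonal case amounts to masking the redshift cross-terms of the full matrix; a minimal sketch, assuming the bins are ordered redshift-major (this ordering is an assumption, not stated in the paper):

```python
import numpy as np

def block_diagonal(cov, n_z, n_m):
    # Keep only the mass-mass blocks within each redshift bin,
    # zeroing all covariance between different redshift bins.
    out = np.zeros_like(cov)
    for a in range(n_z):
        s = slice(a * n_m, (a + 1) * n_m)
        out[s, s] = cov[s, s]
    return out
```

Feeding the output of `block_diagonal` to the Gaussian likelihood reproduces the "shot-noise plus mass-mass sample variance" case compared in Fig. 9.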

Cosmology dependence of covariance
We also investigate whether there are differences in using a cosmology-dependent covariance matrix instead of a cosmology-independent one. The use of a matrix evaluated at a fixed cosmology can represent an advantage, by reducing the computational cost, but may bias the results. In Fig. 10 we compare the parameters estimated with a cosmology-dependent covariance (black contours), i.e. recomputing the covariance at each step of the MCMC process, with the posteriors obtained by evaluating the matrix at the input cosmology (blue), or assuming slightly lower/higher values for Ω_m, log10 A_s, and σ_8 (red and orange contours, respectively), chosen so as to depart from the fiducial values by about 2σ according to Planck Collaboration et al. (2020). Specifically, we fix the parameter values at Ω_m = 0.295, log10 A_s = −8.685, and σ_8 = 0.776 for the lower case and Ω_m = 0.320, log10 A_s = −8.625, and σ_8 = 0.884 for the higher case. We notice, also from the FoM comparison in Fig. 11, that there is no appreciable difference between the first two cases. In contrast, when a wrong-cosmology covariance matrix is used we can find either tighter or wider contours, meaning that the effect of shot-noise and sample variance can be either under- or over-estimated. Thus, the use of a cosmology-independent covariance matrix in the analysis of real cluster abundance data might lead to under/overestimated parameter uncertainties at the level of statistics expected for Euclid.

Fig. 10. Contour plots at 68 and 95 per cent confidence level for the Gaussian likelihood evaluated with: a cosmology-dependent covariance matrix (black), a covariance matrix fixed at the input cosmology (blue), and covariance matrices computed at two wrong cosmologies, one with lower parameter values (Ω_m = 0.295, log10 A_s = −8.685, σ_8 = 0.776; red) and one with higher parameter values (Ω_m = 0.320, log10 A_s = −8.625, σ_8 = 0.884; orange). The grey dotted lines represent the input parameter values.

Fig. 11. Figure of merit for the models described in Fig. 10.
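Schematically, a cosmology-dependent covariance means rebuilding the matrix at every trial point of the chain. The sketch below uses a toy "shot noise plus sample variance" construction with placeholder functions; none of these names or functional forms come from the paper's pipeline:

```python
import numpy as np

def build_covariance(theta, counts_fn, bias_fn, sigma2_fn):
    # Toy covariance: shot noise on the diagonal plus a sample-variance
    # term ~ b_i N_i b_j N_j sigma^2, all evaluated at the trial
    # cosmology theta (counts_fn, bias_fn, sigma2_fn are placeholders).
    n = np.asarray(counts_fn(theta), float)
    b = np.asarray(bias_fn(theta), float)
    return np.diag(n) + np.outer(b * n, b * n) * sigma2_fn(theta)

# Illustrative call with fixed toy numbers (theta is ignored here).
cov = build_covariance(None,
                       lambda t: [100.0, 50.0],
                       lambda t: [2.0, 3.0],
                       lambda t: 0.01)
```

In an MCMC, `build_covariance(theta, ...)` would be called inside the log-posterior at every step; the fixed-cosmology alternative calls it once at the fiducial parameters and reuses the result.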

Discussion and conclusions
In this work we studied some of the theoretical systematics that can affect the derivation of cosmological constraints from the analysis of number counts of galaxy clusters in a survey having sky coverage and selection function similar to those expected for the photometric Euclid cluster survey. One of the aims of the paper was to understand whether the inclusion of sample variance, in addition to the shot-noise error, could influence the estimation of cosmological parameters at the level of statistics that will be reached by the future Euclid catalogs. Note that in this work we consider only uncertainties of theoretical origin, thus neglecting the systematics related to mass estimation; however, Köhlinger et al. (2015) argue that for Euclid the mass estimates from weak lensing will be under control and, although there will still be additional statistical and systematic uncertainties due to mass calibration, the analysis of real catalogs will approach the ideal case considered here.
To describe the contribution of shot-noise and sample variance, we computed an analytical model for the covariance matrix, representing the correlation between mass and redshift bins as a function of cosmological parameters. Once the model for the covariance had been properly validated, we moved to identifying the most appropriate likelihood function for analysing cluster abundance data. The likelihood analysis was performed with only two free parameters, Ω_m and log10 A_s (and thus σ_8), since the mass function is less affected by variations of the other cosmological parameters.
Both the validation of the analytical model for the covariance matrix and the comparison between posteriors from different likelihood definitions are based on the analysis of an extended set of 1000 Euclid-like past-light cones generated with the LPT-based PINOCCHIO code (Monaco et al. 2002;Munari et al. 2017).
The main results of our analysis can be summarized as follows.
- To include the sample variance effect in the likelihood analysis, we computed the covariance matrix from a large set of mock catalogs. Most of the sample variance signal is contained in the block-diagonal terms of the matrix, giving a contribution larger than the shot-noise term, at least in the low-mass/low-redshift regime. On the other hand, the anti-correlation between different redshift bins produces a minor effect with respect to the diagonal variance.
- We computed the covariance matrix by applying the analytical model by Hu & Kravtsov (2003), assuming the appropriate window function, and verified that it reproduces the matrix from simulations with deviations below the 10 per cent level; this difference can be ascribed mainly to the imperfect match between the T10 halo bias and the one measured from simulations. However, we verified that such a small difference does not significantly affect the inference of cosmological parameters at the level of statistics of the Euclid survey. Therefore we conclude that the analytical model of Hu & Kravtsov (2003) can be reliably applied to compute a cosmology-dependent, noise-free covariance matrix, without requiring a large number of simulations.
- We established the optimal binning scheme to extract the maximum information from the data, while limiting the computational cost of the likelihood estimation. We analyzed the halo mass function with a Poissonian and a Gaussian likelihood, for different redshift- and mass-bin widths, and then computed the figure of merit from the resulting contours in the Ω_m-σ_8 plane. The results show that, both for the Poissonian and the Gaussian likelihood, the optimal redshift bin width is ∆z = 0.1: for larger bins not all the information is extracted, while for smaller bins the Poissonian case saturates and the Gaussian approximation is no longer valid. The mass binning affects the results less, provided that the number of bins is not too small. We decided to use N_M = 300 for the Poissonian likelihood and N_M = 5 for the Gaussian case.
- We included the covariance matrix in the likelihood analysis and demonstrated that the contribution to the total error budget and the correlation induced by the sample variance term cannot be neglected. In fact, the Poissonian and the Gaussian shot-noise-only likelihood functions yield smaller errorbars than the Gaussian likelihood with full covariance, meaning that neglecting the sample covariance leads to an underestimation of the errors on the parameters at the Euclid level of accuracy. As shown in Appendix B, this result also holds for the eROSITA survey, while it does not apply to present surveys such as Planck and SPT.
- We verified that the anti-correlation between bins at different redshifts produces a minor, but non-negligible, effect on the posteriors of cosmological parameters at the level of statistics reached by the Euclid survey. We also established that a cosmology-dependent covariance matrix is more appropriate than the cosmology-independent case, which can lead to biased results due to an incorrect quantification of shot-noise and sample variance.
One of the main results of the analysis presented here is that, for next-generation surveys of galaxy clusters such as Euclid, sample variance effects need to be properly included, as they become one of the main sources of statistical uncertainty in the estimation of cosmological parameters. The correct description of sample variance is provided by the analytical model validated in this work.
This analysis represents the first step towards providing all the necessary ingredients for an unbiased estimation of cosmological parameters from the number counts of galaxy clusters. It has to be complemented with the characterization of the other theoretical systematics, e.g. related to the calibration of the halo mass function, and observational systematics, related to the mass-observable relation and to the cluster selection function.
To further improve the information extractable from galaxy clusters, the same analysis will be extended to the clustering of galaxy clusters, by analyzing the covariance of the power spectrum or of the two-point correlation function. Once all the systematics have been calibrated, so as to properly combine these two observables (Schuecker et al. 2003; Mana et al. 2013; Lacasa & Rosenfeld 2016), number counts and clustering of galaxy clusters will provide valuable observational constraints, complementary to those of the other two main Euclid probes, namely galaxy clustering and cosmic shear.

Appendix A: Covariance on spherical volumes
We test the Hu & Kravtsov (2003) model in the simple case of a spherically symmetric survey window function, to quantify the level of agreement between this analytical model and results from LPT-based simulations, before applying it to the more complex geometry of the light-cones. The analytical model is simpler than the one described in Sect. 4.1, as in this case we consider only the correlation between mass bins at the fixed redshift of a PINOCCHIO snapshot; for the sample covariance, Eq. (16) then becomes

C^SV_ij = ⟨N_i⟩ ⟨N_j⟩ b_i b_j σ²_R ,

where ⟨N_i⟩ and b_i are the expected number counts and the mean halo bias in the i-th mass bin, and the variance σ²_R is given by Eq.
(2), which contains the Fourier transform of the top-hat window function

W(kR) = 3 [sin(kR) − kR cos(kR)] / (kR)³ .

The matrix from simulations is obtained by computing spherical random volumes of fixed radius from 1000 periodic boxes of size L = 3870 h^−1 Mpc at a given redshift; the number of spheres was chosen so as to obtain a large number of (statistically) non-overlapping sampling volumes from each box, and thus depends on the radius of the spheres. The resulting covariance, computed by applying Eq. (10) to all sampling spheres, is compared with the one from the model, with filtering scale R equal to the radius of the spheres.
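The quantities entering this spherical-volume model can be evaluated numerically; the sketch below computes σ²_R for a toy power-law spectrum. The spectrum and its normalization are placeholders, not the fiducial cosmology of the paper:

```python
import numpy as np

def tophat_W(x):
    # Fourier transform of the spherical top-hat window.
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma2_R(R, k, Pk):
    # sigma^2_R = (1 / 2 pi^2) * int P(k) W^2(kR) k^2 dk,
    # integrated with the trapezoidal rule on the sampled k grid.
    y = Pk * tophat_W(k * R) ** 2 * k**2
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))
    return integral / (2.0 * np.pi**2)

# Toy power spectrum: P(k) ~ k^-2 over a finite k range (arbitrary units).
k = np.logspace(-4, 1, 4000)
Pk = 1.0e4 * k**-2.0
```

For any reasonable spectrum, σ²_R decreases with the filtering radius R, matching the trend of the sample variance with sphere size discussed in Fig. A.2.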
In Fig. A.1 we show the resulting normalized matrices computed for R = 200 h^−1 Mpc, with 10^3 sampling spheres for each box. The redshift is z = 0.506, and we used 5 log-equispaced mass bins in the range 10^14 ≤ M/M⊙ ≤ 10^15, plus one bin for 10^15 ≤ M/M⊙ ≤ 10^16. For a better comparison, in the lower panel we show the normalized difference between simulations and model, for the diagonal sample variance terms and for the shot-noise. The predicted variance agrees with the simulated one within a 2 per cent discrepancy. We also notice a slight underestimation of the covariance predicted by the model at low masses and a slight overestimation at high masses. We ascribe this to the modelling of the halo bias, whose accuracy is affected by scatter at the few per cent level (Tinker et al. 2010).
In Fig. A.2 we show the (maximum) sample variance contribution relative to the shot-noise level, as a function of the filtering scale, for different redshifts. The curves show that the level of sample variance is lower at high redshift, where the shot-noise dominates due to the small number of objects. Instead, at low redshift (z < 1) the sample variance level is even higher than the shot-noise one, and increases as the radius of the spheres decreases; this means that, at least at low redshift, where the volumes of the redshift slices in the light-cones are small, such a contribution cannot be neglected, in order not to introduce systematics or underestimate the errors on the parameter constraints.

Appendix B: Application to other surveys
We repeated the likelihood comparison by mimicking other surveys of galaxy clusters, which differ in the volume sampled and in their mass and redshift ranges. More specifically, we consider a Planck-like (Tauber et al. 2010) and an SPT-like (Carlstrom et al. 2011) cluster survey, both selected through the Sunyaev-Zeldovich effect, which represent two of the main currently available cluster surveys. We also analyse an eROSITA-like (Predehl 2014) X-ray cluster sample, an upcoming survey that, although not reaching the level of statistics that will be provided by Euclid, will produce a much larger sample than current surveys.
The light-cones have been extracted from our catalogs, by considering the properties of each survey (aperture, selection function, redshift range). The properties of the surveys are as follows.
SPT-like sample: we consider light-cones with an area of 2500 deg^2, containing halos with redshifts z > 0.25 and masses M_500c ≥ 3 × 10^14 M⊙. We obtain catalogs with ∼ 1100 objects. We analyze the redshift range 0.25 ≤ z ≤ 1.5 with bins of width ∆z = 0.2 and the mass range 3 × 10^14 ≤ M_500c/M⊙ ≤ 3 × 10^15, divided into 10 bins for the Poissonian case and 3 bins for the Gaussian case.
Planck-like sample: we use the redshift-dependent selection function shown in the reference paper. Since the aperture of the Planck survey is about twice that of Euclid, we stack two light-cones to obtain a Planck-like light-cone; each of the 500 resulting samples contains ∼ 650 objects. We consider the redshift range 0 ≤ z ≤ 0.8 with ∆z = 0.25 and the mass range 10^14 ≤ M_vir/M⊙ ≤ 10^16; the number of mass bins varies between redshift bins due to the redshift-dependent selection function, and is chosen so as to have non-empty bins at each redshift (at least 10 objects per bin).
eROSITA-like sample: we select halos according to the redshift-dependent selection function M_500c(z) ≥ 2.3 z × 10^14 M⊙, with a mass cut at 7 × 10^13 M⊙. We analyze the redshift range 0 ≤ z ≤ 2 with ∆z = 0.1 and the mass range 10^14 ≤ M_vir/M⊙ ≤ 10^16, with a binning defined so as to have non-empty redshift bins, as for the Planck case. Also in this case, we stack four PINOCCHIO light-cones to create a full-sky eROSITA light-cone, obtaining 250 samples containing ∼ 2 × 10^5 objects each. For the purpose of this analysis we did not include any sensitivity mask accounting for the different depths of the surveyed areas due to the eROSITA scanning strategy.
Masses in the paper are defined at the overdensity ∆ = 500 with respect to the critical density; the conversion to virial masses has been performed with the python package hydro_mc (https://github.com/aragagnin/hydro_mc).
In Fig. B.1 we show the distribution of cluster masses of the three samples with their selection functions, compared to the full Euclid-like catalog. For both SPT and Planck, despite the different selection functions that favour different mass and redshift ranges, the number of objects is low, so we expect shot-noise to be the main source of uncertainty. In contrast, the eROSITA sample contains a larger number of halos, which should lower the level of shot-noise and make the sample variance non-negligible.
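The selection cuts described above are simple threshold functions of mass and redshift; the sketch below applies them to a toy catalog, which is randomly generated for illustration and is not a PINOCCHIO light-cone:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy halo catalog: redshift and M500c (in units of M_sun).
z = rng.uniform(0.0, 2.0, 50_000)
m500 = 10.0 ** rng.uniform(13.5, 15.5, 50_000)

# eROSITA-like selection from the text: M500c(z) >= 2.3 z x 1e14 M_sun,
# plus a global mass cut at 7e13 M_sun.
sel_erosita = (m500 >= 2.3 * z * 1.0e14) & (m500 >= 7.0e13)

# SPT-like selection from the text: z > 0.25 and M500c >= 3e14 M_sun.
sel_spt = (z > 0.25) & (m500 >= 3.0e14)
```

The Planck-like case would replace these thresholds with the redshift-dependent selection function of the reference paper, which is not a simple analytic cut.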
In Fig. B.2 we show the resulting average contours for the Planck and SPT samples, obtained with the Poissonian and Gaussian (full covariance) likelihood functions. In both cases, the contours from the Gaussian case coincide with the Poissonian ones, confirming that for these survey properties, which yield a low number of objects, the shot-noise dominates over the sample variance. Thus, the Poissonian likelihood still represents a good approximation that does not introduce significant differences at the level of statistics of present surveys. Moreover, no systematic effects related to uncertainties in the relation between mass and observable (the integrated Compton-y parameter in this case) have been included in the analysis. Unlike Euclid, for these surveys such an uncertainty is expected to dominate the resulting uncertainty on the cosmological parameters (Bocquet et al. 2015), thus making the choice of the likelihood function conservative, since the posteriors would be larger and the effect of theoretical systematics less significant.
In Fig. B.3 we show the same result for the eROSITA case. We note a large difference between the Poissonian and the Gaussian case, due to the inclusion of the sample variance effect. This difference can be ascribed to the mass selection of the survey, which makes the Gaussian contours wider: for an X-ray selection, the statistics of counts is dominated by low-redshift/low-mass objects distributed within a relatively small volume, which makes the contribution of sample variance comparable to, or dominant over, the shot-noise.