Issue: A&A, Volume 519, September 2010
Article Number: A23
Number of page(s): 9
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/200912866
Published online: 08 September 2010
An analytic approach to number counts of weak-lensing peak detections
M. Maturi, C. Angrick, F. Pace, and M. Bartelmann
Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
Received 10 July 2009 / Accepted 15 March 2010
Abstract
We apply an analytic method to predict peak counts in weak-lensing surveys. It is based on the theory of Gaussian random fields and suitable to quantify the level of detections caused by chance projections of large-scale structures as well as the shape and shot noise contributed by the background galaxies. A simple analytical recipe is given to compute the signal-to-noise distribution of those detections. We compare our method to peak counts obtained from numerical ray-tracing simulations and find good agreement at the expected level. The number of peak detections depends substantially on the shape and size of the filter applied to the gravitational shear field. We confirm that weak-lensing peak counts are dominated by spurious detections up to signal-to-noise ratios of 3–5 and that most filters yield only a few detections per square degree above this level, while a filter optimised for suppressing large-scale structure noise returns up to an order of magnitude more. Galaxy shape noise and noise from large-scale structures cannot be treated as two independent components since the two contributions add in a non-trivial way.
Key words: cosmology: theory / large-scale structure of Universe / galaxies: clusters: general / gravitational lensing: weak
1 Introduction
Wide-area surveys for weak gravitational lensing can be and have been used for counting peaks in the shear signal, which are commonly interpreted as the signature of sufficiently massive dark-matter halos. However, such detections are clearly contaminated by spurious detections caused by the chance superposition of large-scale structures, and also by the shape and shot-noise contributions from the background galaxies used to sample the foreground shear field. As a function of the peak height, what is the contribution of genuine halos to these detections, and how much do the large-scale structure and the other sources of noise contribute? In addition, the number of peaks produced by the large-scale structure constitutes a cosmological signal which can be used as a cosmological probe together with cluster counts. Can we predict this number without expensive numerical simulations?
Given the power of lensing-peak number counts as a cosmological probe (Dietrich & Hartlap 2010; Marian et al. 2009; Kratochvil et al. 2010), we address this question here by applying a suitable analytic approach based on peak counts in Gaussian random fields as laid out by Bardeen et al. (1986). This extends van Waerbeke (2000), who studied the background-galaxy noise component alone. With respect to the latter work, we give a detection definition more suitable for comparison with observations and include the non-negligible contribution of large-scale structures. It is reasonable to do so even though at least the high peaks are caused by halos in the non-Gaussian tail of the density fluctuations, because the noise and large-scale structure contributions to the filtered weak-lensing maps remain Gaussian, and thus at least their contribution to the counts can be well described analytically. Peaks with the highest signal-to-noise ratios are expected to be more abundant than predicted based on Gaussian random fields.
Weak-lensing data are filtered to derive peak counts from them. Several linear filters have been proposed and used in the literature. They can all be seen as convolutions of the measured shear field with filter functions of different shapes. Many shapes have been proposed for different purposes (Schirmer et al. 2004; Maturi et al. 2005; Schneider et al. 1998). One filter function, called the optimal filter later on, was designed specifically to suppress the contribution from large-scale structures by maximising the signal-to-noise ratio of halo detections against the shear field of the large-scale structure.
We study three such filters here, with the optimal filter among them. Results will differ substantially, arguing for a careful filter choice if halo detections are the main goal of the application. We compare our analytic results to a numerical simulation and show that both agree at the expected level. We begin in Sect. 2 with a brief summary of gravitational lensing as needed here and describe filtering methods in Sect. 3. We present our analytic method in Sect. 4 and compare it to numerical simulations in Sect. 5, where we also show our main results. Conclusions are summarised in Sect. 6. In Appendix A, we show predictions of peak counts and the noise levels in them for several planned and ongoing weak-lensing surveys.
2 Gravitational lensing
Isolated lenses are characterised by their lensing potential
\[
\psi(\vec\theta) = \frac{2}{c^2}\,\frac{D_{\rm ls}}{D_{\rm l} D_{\rm s}} \int \Phi\, \mathrm{d}z, \tag{1}
\]
where \(\Phi\) is the Newtonian gravitational potential and \(D_{\rm s}\), \(D_{\rm l}\), and \(D_{\rm ls}\) are the angular-diameter distances between the observer and the source, the observer and the lens, and the lens and the source, respectively. The potential \(\psi\) relates the angular positions \(\vec\beta\) of the source and \(\vec\theta\) of its image on the observer's sky through the lens equation
\[
\vec\beta = \vec\theta - \vec\nabla\psi. \tag{2}
\]
Since sources such as distant background galaxies are much smaller than the typical scale on which the lens properties change and the angles involved are small, it is possible to linearise Eq. (2) such that the induced image distortion is expressed by the Jacobian
\[
\mathcal{A} = (1-\kappa) \begin{pmatrix} 1-g_1 & -g_2 \\ -g_2 & 1+g_1 \end{pmatrix}, \tag{3}
\]
where \(\kappa = \nabla^2\psi/2\) is the convergence responsible for the isotropic magnification of an image relative to its source, and \(g = \gamma/(1-\kappa)\) is the reduced shear quantifying its distortion. Here, \(\gamma_1\) and \(\gamma_2\) are the two components of the complex shear \(\gamma = \gamma_1 + \mathrm{i}\gamma_2\). Since the angular size of the source is unknown, only the reduced shear can be estimated starting from the observed ellipticity of the background sources,
\[
\epsilon = \frac{\epsilon_{\rm s} + g}{1 + g^*\epsilon_{\rm s}}, \tag{4}
\]
where \(\epsilon_{\rm s}\) is the intrinsic ellipticity of the source and the asterisk denotes complex conjugation.
3 Measuring weak gravitational lensing
3.1 Weak lensing estimates
In the absence of intrinsic alignments between background galaxies due to possible tidal interactions (Heavens & Peacock 1988; Schneider & Bridle 2010), the intrinsic source ellipticities in Eq. (4) average to zero in a sufficiently large source sample.
An appropriate and convenient measure for the lensing signal on circular apertures is the weighted average over the tangential component \(\gamma_{\rm t}\) of the shear relative to the position \(\vec\theta_0\) on the sky,
\[
A(\vec\theta_0) = \int \mathrm{d}^2\theta\; W(\vec\theta)\, Q(|\vec\theta - \vec\theta_0|)\, \gamma_{\rm t}(\vec\theta; \vec\theta_0). \tag{5}
\]
The filter function Q determines the statistical properties of the quantity \(A\) and W describes the survey geometry. We shall consider three filter functions here, which will be described in Sect. 3.2.
Data on gravitational lensing by a mass concentration can be modelled by a signal described by its amplitude \(A_0\) and its radial profile \(\tau\), and a noise component N with zero mean, i.e.
\[
\gamma_{\rm t}(\vec\theta) = A_0\,\tau(|\vec\theta|) + N(\vec\theta), \qquad \langle N \rangle = 0, \tag{6}
\]
for the tangential shear. The variance of \(A\) in Eq. (5) is
\[
\sigma^2 = \int \frac{\mathrm{d}^2 k}{(2\pi)^2}\, |\hat{Q}(k)|^2\, \hat{P}_{\rm N}(k), \tag{7}
\]
where \(\hat{Q}\) is the Fourier transform of the filter Q and \(\hat{P}_{\rm N}\) is the effective power spectrum of the noise component, i.e. the intrinsic noise power spectrum convolved with a window function representing the frequency response of the survey. Note that the contribution from cosmic variance is not included in this definition since it is negligibly small. In our application, the window function is a band-pass filter accounting for the finite field of view of the survey (high-pass component) and the average galaxy separation (low-pass component); see Sect. 5.2 for its explicit expression. For complex sky coverage, and especially for small fields of view, the adopted approximation would not hold and a general treatment accounting for the full geometry must be considered (see e.g. Hivon et al. 2002).
In practical applications, \(A(\vec\theta_0)\) is approximated by
\[
A(\vec\theta_0) \approx \frac{1}{n_{\rm g}} \sum_i \epsilon_{\rm t}(\vec\theta_i; \vec\theta_0)\, Q(|\vec\theta_i - \vec\theta_0|), \tag{8}
\]
where \(\epsilon_{\rm t}(\vec\theta_i; \vec\theta_0)\) is the tangential ellipticity with respect to \(\vec\theta_0\) of a galaxy located at the position \(\vec\theta_i\), which provides an estimate for the tangential shear, and \(n_{\rm g}\) is the number density of background galaxies. Note that in our application we consider linear structures only and therefore the weak-lensing approximation is always satisfied, i.e. \(g \approx \gamma\).
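As a concrete illustration of Eq. (8), the discrete estimator takes only a few lines of code. The following Python sketch (function and variable names are our own, not from the paper; numpy is assumed) computes the Q-weighted average of the tangential ellipticities around an aperture centre:

```python
import numpy as np

def tangential_ellipticity(e1, e2, dx, dy):
    """Tangential component of the complex ellipticity e1 + i*e2
    relative to an aperture centre offset by (dx, dy)."""
    phi = np.arctan2(dy, dx)                      # polar angle of each galaxy
    return -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))

def aperture_estimate(x, y, e1, e2, x0, y0, Q, ng):
    """Discrete estimator of Eq. (8): the Q-weighted sum of tangential
    ellipticities around the aperture centre (x0, y0), normalised by the
    galaxy number density ng. `Q` is any radial filter function."""
    dx, dy = x - x0, y - y0
    r = np.hypot(dx, dy)
    et = tangential_ellipticity(e1, e2, dx, dy)
    return np.sum(et * Q(r)) / ng
```

With Q set to one of the filters of Sect. 3.2 and the sum evaluated on a grid of aperture positions, this yields aperture-mass maps of the kind analysed later.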
3.2 Weak lensing filters
Figure 1: Overview of different weak-lensing filters. The left panel shows the three filters adopted here to be used on shear catalogues, while the central and right panels show the corresponding filters to be used on convergence fields in real and Fourier space, respectively. For illustration only, the spatial frequencies in the right panel are rescaled such that the main filter peaks coincide.

Different filter profiles have been proposed in the literature depending on their specific application in weak lensing. We adopt three of them here which have been used so far to identify halo candidates through weak lensing.
(1) The polynomial filter described by Schneider et al. (1998),
\[
Q(x) = \frac{6}{\pi\,\theta_{\rm s}^2}\, x^2 \left(1 - x^2\right) H(1 - x), \tag{9}
\]
where the projected angular distance from the filter centre, \(\vartheta\), is expressed in units of the filter scale radius, \(x = \vartheta/\theta_{\rm s}\), and H is the Heaviside step function. This filter was originally proposed for cosmic-shear analysis, but several authors have used it also for dark-matter halo searches (see e.g. Erben et al. 2000; Schirmer et al. 2004).

(2) A filter optimised for halos with NFW density profile, approximating their shear signal with a hyperbolic tangent (Schirmer et al. 2004),
\[
Q(x) = \left(1 + \mathrm{e}^{a - b x} + \mathrm{e}^{c x - d}\right)^{-1} \frac{\tanh(x/x_{\rm c})}{\pi\,\theta_{\rm s}^2\, x/x_{\rm c}}, \tag{10}
\]
where the two exponentials in parentheses are cutoffs imposed at small and large radii (a=6, b=150, c=50, and d=47) and \(x_{\rm c}\) is a parameter defining the filter-profile slope. A good choice for the latter was determined empirically by Hetterscheidt et al. (2005).

(3) The optimal linear filter introduced by Maturi et al. (2005) which, together with the optimisation with respect to the expected halo-lensing signal, optimally suppresses the contamination due to the line-of-sight projection of large-scale structures (LSS),
\[
\hat{\Psi}(k) \propto \frac{\hat{\tau}(k)}{\hat{P}_{\rm N}(k)}. \tag{11}
\]
Here, \(\hat\tau\) is the Fourier transform of the expected shear profile of the halo and \(\hat{P}_{\rm N}(k)\) is the complete noise power spectrum, including the linearly evolved LSS as well as the noise contributions from the intrinsic source ellipticities and the shot noise, given their angular number density \(n_{\rm g}\) and the intrinsic ellipticity dispersion \(\sigma_\epsilon\). Note that for the filter construction we use the linear LSS power spectrum instead of the non-linear one. This is an implicit definition of a halo, since we assume that the difference between the linear and the non-linear power spectrum is entirely due to halo formation. This filter depends on parameters determined by physical quantities such as the halo mass and redshift, the galaxy number density and the intrinsic ellipticity dispersion, and not on an arbitrarily chosen scale which has to be determined empirically through costly numerical simulations (e.g. Hennawi & Spergel 2005). An application of this filter to the GaBoDS survey (Schirmer et al. 2003) was presented in Maturi et al. (2007), while a detailed comparison of these three filters was performed by Pace et al. (2007) by means of numerical ray-tracing simulations. They found that the optimal linear filter given by Eq. (11) returns the halo sample with the largest completeness and the lowest number of spurious detections caused by the LSS (10% for their adopted signal-to-noise threshold).
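The two analytic filter shapes of this section are straightforward to tabulate. Below is a minimal Python sketch (numpy assumed; the naming and the overall normalisation of the hyperbolic-tangent profile are ours, chosen for illustration):

```python
import numpy as np

def q_poly(theta, theta_s):
    """Polynomial filter, Eq. (9) (Schneider et al. 1998)."""
    x = np.asarray(theta, dtype=float) / theta_s
    return 6.0 / (np.pi * theta_s**2) * x**2 * (1.0 - x**2) * (x <= 1.0)

def q_tanh(theta, theta_s, xc=0.1, a=6.0, b=150.0, c=50.0, d=47.0):
    """Hyperbolic-tangent filter, Eq. (10) (Schirmer et al. 2004), with
    the small- and large-radius cutoff parameters quoted in the text.
    The profile is finite for x -> 0; evaluate at x > 0 to avoid a
    numerical 0/0."""
    x = np.asarray(theta, dtype=float) / theta_s
    cutoff = 1.0 / (1.0 + np.exp(a - b * x) + np.exp(c * x - d))
    return cutoff * np.tanh(x / xc) / (np.pi * theta_s**2 * x / xc)
```

The large-radius cutoff makes `q_tanh` effectively vanish beyond roughly one scale radius, so both filters have compact support in practice.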
3.3 Weak lensing estimates and convergence
In order to simplify comparisons with numerical simulations, we convert the quantity \(A(\vec\theta_0)\) from Eq. (5) to a quantity involving the convergence,
\[
A(\vec\theta_0) = \int \mathrm{d}^2\theta\; U(|\vec\theta - \vec\theta_0|)\, \kappa(\vec\theta), \tag{12}
\]
where U is related to Q by
\[
Q(\vartheta) = \frac{2}{\vartheta^2} \int_0^{\vartheta} \mathrm{d}\vartheta'\, \vartheta'\, U(\vartheta') - U(\vartheta) \tag{13}
\]
(Schneider 1996) if the weight function U is defined to be compensated, i.e.
\[
\int_0^{\infty} \mathrm{d}\vartheta\, \vartheta\, U(\vartheta) = 0. \tag{14}
\]
Equation (13) has the form of a Volterra integral equation of the first kind, which can be solved for U once Q is specified. If Q remains finite for \(\vartheta \to 0\), the solution is
\[
U(\vartheta) = 2 \int_{\vartheta}^{\infty} \frac{\mathrm{d}\vartheta'}{\vartheta'}\, Q(\vartheta') - Q(\vartheta) \tag{15}
\]
(Polyanin & Manzhirov 1998), which can be solved analytically for the polynomial filter,
\[
U(x) = \frac{9}{\pi\,\theta_{\rm s}^2} \left(1 - x^2\right) \left(\frac{1}{3} - x^2\right) H(1 - x), \tag{16}
\]
and numerically for the hyperbolic-tangent filter of Eq. (10) with an efficient recursive scheme over the desired radii \(\vartheta\). If Q diverges for \(\vartheta \to 0\), as in the case of the optimal filter, Eq. (15) can be solved by introducing an exponential cutoff at small radii to avoid the divergence. The correct solution is obtained if the cutoff scale is close to the mean separation between the background galaxies, so that no information is lost. Alternatively, Eq. (13) can be solved iteratively for U by
\[
U_{n+1}(\vartheta) = \frac{2}{\vartheta^2} \int_0^{\vartheta} \mathrm{d}\vartheta'\, \vartheta'\, U_n(\vartheta') - Q(\vartheta). \tag{17}
\]
The iterative procedure is stopped once the difference \(|U_{n+1} - U_n|\) is sufficiently small. After U has been found, an appropriate constant c has to be added in order to satisfy the compensation requirement, Eq. (14). It is given by
\[
c = -\frac{2}{\vartheta_{\max}^2} \int_0^{\vartheta_{\max}} \mathrm{d}\vartheta\, \vartheta\, U(\vartheta). \tag{18}
\]
We show in Fig. 1 the resulting filter profiles to be used on shear catalogues through Eq. (5) and their corresponding variants to be used on convergence fields with Eq. (12), both in real and in Fourier space. All of them are band-pass filters, and the two designed for halo searches have larger amplitudes at higher frequencies, where the halo signal is most significant, compared to the polynomial filter by Schneider et al. (1998). This feature is particularly prominent for the optimal filter, which is additionally negative at low frequencies, where the LSS signal dominates. These two features ensure the minimisation of the LSS contamination in halo searches.
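The inversion of Eq. (13) can be checked numerically: for the polynomial filter, integrating Eq. (15) must reproduce the closed-form solution of Eq. (16). A short Python sketch, in units of the scale radius (\(\theta_{\rm s} = 1\)); names are ours:

```python
import numpy as np

def Q(x):
    # polynomial filter of Eq. (9) in units of the scale radius (theta_s = 1)
    return 6.0 / np.pi * x**2 * (1.0 - x**2) * (x <= 1.0)

def U_closed(x):
    # closed-form solution, Eq. (16)
    return 9.0 / np.pi * (1.0 - x**2) * (1.0 / 3.0 - x**2) * (x <= 1.0)

def U_numeric(x, n=20001):
    # Eq. (15): U(x) = 2 * int_x^inf Q(x')/x' dx' - Q(x); Q vanishes
    # beyond x = 1, so the upper limit is finite (avoid x = 0, where the
    # integrand is numerically 0/0).
    if x >= 1.0:
        return 0.0
    xp = np.linspace(x, 1.0, n)
    return 2.0 * np.trapz(Q(xp) / xp, xp) - Q(x)
```

The closed form also satisfies the compensation requirement of Eq. (14), which makes a second easy consistency check.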
4 Predicting weak lensing peak counts
Our analytic predictions for the number counts of weak-lensing detections as a function of their signal-to-noise ratio are based on modelling the analysed and filtered lensing data, resulting from Eq. (12), as an isotropic and homogeneous Gaussian random field. This is an extremely good approximation for the noise and the LSS components, but not necessarily for non-linear structures such as sufficiently massive halos, as we shall discuss in Sect. 5.3.
4.1 Statistics of Gaussian random fields
An n-dimensional random field \(F(\vec r)\) assigns a set of random numbers to each point \(\vec r\) in an n-dimensional space. A joint probability function can be declared for m arbitrary points \(\vec r_j\) as the probability to have field values between \(F(\vec r_j)\) and \(F(\vec r_j) + \mathrm{d}F(\vec r_j)\), with \(j = 1, \ldots, m\). For Gaussian random fields, the field itself, its derivatives, integrals and any linear combination thereof are Gaussian random variables, which we denote by \(y_i\) with mean values \(\langle y_i \rangle\) and central deviations \(\Delta y_i = y_i - \langle y_i \rangle\), with \(i = 1, \ldots, p\). Their joint probability function is a multivariate Gaussian,
\[
p(y_1, \ldots, y_p)\, \mathrm{d}y_1 \cdots \mathrm{d}y_p = \frac{\mathrm{e}^{-\mathcal{Q}}}{\sqrt{(2\pi)^p \det\mathcal{M}}}\, \mathrm{d}y_1 \cdots \mathrm{d}y_p, \tag{19}
\]
with the quadratic form
\[
\mathcal{Q} = \frac{1}{2} \sum_{i,j=1}^{p} \Delta y_i \left(\mathcal{M}^{-1}\right)_{ij} \Delta y_j, \tag{20}
\]
where \(\mathcal{M}\) is the covariance matrix with elements \(\mathcal{M}_{ij} = \langle \Delta y_i \Delta y_j \rangle\). All statistical properties of a homogeneous Gaussian random field with zero mean are fully characterised by the two-point correlation function or, equivalently, its Fourier transform, the power spectrum P(k). In our case, this is the sum of the power spectrum of the convergence due to linearly evolved structures and the observational noise caused by the galaxies.
Since we are interested in gravitational-lensing quantities such as the convergence \(\kappa\), we here consider two-dimensional Gaussian random fields only, i.e. n = 2. We adopt the formalism of Bardeen et al. (1986), where F, \(F_i = \partial F/\partial x_i\) and \(F_{ij} = \partial^2 F/\partial x_i \partial x_j\) denote the convergence field and its first and second derivatives, respectively.
4.2 Definition of detections: a new upcrossing criterion
We define as a detection any contiguous area of the field which exceeds a given threshold, \(F_{\rm th}\), determined by the required signal-to-noise ratio, S/N, and the variance of the quantity \(A\) (see Eq. (7)). This definition is widely used in surveys for galaxy clusters or peak counts in weak-lensing surveys and can easily be applied both to real data and to Gaussian random fields.
Each detection is delimited by its contour at the threshold level \(F_{\rm th}\). If this contour is convex, it has a single point, called the up-crossing point, where the field is rising along the x-axis direction only, i.e. where the field gradient has one vanishing and one positive component (see the sketch for type-0 detections in the lower panel of Fig. 2),
\[
F = F_{\rm th}, \qquad F_1 > 0, \qquad F_2 = 0. \tag{21}
\]
Since we assume F to be a homogeneous and isotropic random field, the orientation of the coordinate frame is arbitrary and irrelevant. The conditions expressed by Eq. (21) define the so-called up-crossing criterion, which allows us to identify the detections and to derive their statistical properties, such as their number counts, by associating their definition with the Gaussian random field variables F, \(F_i\) and \(F_{ij}\).
However, this criterion is prone to fail for low thresholds, where detections tend to merge and the isocontours tend to deviate from the assumed convex shape. This causes detection numbers to be overestimated at low cutoffs because each ``peninsula'' and ``bay'' of a detection's contour (see type-1 in Fig. 2) would be counted as one detection. We solve this problem by dividing the up-crossing points into those with negative curvature (red circles) and those with positive curvature (blue squares). In fact, for each detection, their difference is one (type-1), providing the correct number count. The only exception are those detections containing one or more ``lagoons'' (type-2), since each of them decreases the detection count by one. But since this is not a frequent case and occurs only at very low cutoff levels, we do not consider it here.
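The detection definition itself (contiguous areas above a threshold) and the merging that spoils the classical criterion at low cutoffs can be illustrated on a toy map. The following Python sketch (a hypothetical two-bump "S/N map"; numpy and scipy assumed, names ours) counts contiguous detections with a connected-component labelling:

```python
import numpy as np
from scipy import ndimage

def count_detections(field, threshold):
    """Number of contiguous regions of `field` above `threshold`,
    i.e. the detection definition of Sect. 4.2 on a pixelised map."""
    _, n = ndimage.label(field > threshold)
    return n

# Toy "signal-to-noise map": two nearby Gaussian bumps on a grid.
y, x = np.mgrid[0:200, 0:200]
bump = lambda x0, y0: np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * 15.0**2))
field = 3.0 * bump(80, 100) + 3.0 * bump(120, 100)
```

At a high threshold the two bumps are separate detections; at a low threshold they merge into a single contiguous region, which is exactly the regime where counting all up-crossing points overestimates the detection number.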
Figure 2: Weak-lensing detection maps. The top four panels show the segmentation of a realistic weak-lensing S/N map for increasing thresholds: 0.1, 0.5, 1, and 2, respectively. The bottom panel sketches the three discussed detection types together with the points identified by the standard and the modified up-crossing criteria. Red circles and blue squares correspond to up-crossing points with negative and positive second field derivative, respectively.

4.3 The number density of detections
Once the relation between the detections and the Gaussian random variables \((F, F_1, F_2, F_{22})\) and their constraints from Eq. (21), together with \(F_{22} < 0\) or \(F_{22} > 0\), are defined, we can describe their statistical properties through the multivariate Gaussian probability distribution given by Eq. (19) with the covariance matrix
\[
\mathcal{M} = \begin{pmatrix}
\sigma_0^2 & 0 & 0 & -\sigma_1^2/2 \\
0 & \sigma_1^2/2 & 0 & 0 \\
0 & 0 & \sigma_1^2/2 & 0 \\
-\sigma_1^2/2 & 0 & 0 & 3\sigma_2^2/8
\end{pmatrix}, \tag{22}
\]
as given by van Waerbeke (2000). Here, the \(\sigma_i\) are the moments of the power spectrum P(k),
\[
\sigma_i^2 = \int \frac{\mathrm{d}^2 k}{(2\pi)^2}\, k^{2i}\, P(k)\, \hat{W}^2(k)\, \hat{U}^2(k), \tag{23}
\]
where P(k) is the non-linear power spectrum of the matter fluctuations (Peacock & Dodds 1996) combined with the noise contribution by the background galaxies, \(\hat{W}\) is the survey frequency response (see Sect. 5.2 for its explicit expression), and \(\hat{U}\) is the Fourier transform of the filter adopted for the weak-lensing analysis (see Sect. 3.2). The determinant of \(\mathcal{M}\) is \(\det\mathcal{M} = \frac{\sigma_1^4}{4}\left(\frac{3}{8}\sigma_0^2\sigma_2^2 - \frac{1}{4}\sigma_1^4\right)\), and Eq. (20) can explicitly be written as
\[
\mathcal{Q} = \frac{F_1^2 + F_2^2}{\sigma_1^2} + \frac{\frac{3}{8}\sigma_2^2 F^2 + \sigma_1^2 F F_{22} + \sigma_0^2 F_{22}^2}{2\left(\frac{3}{8}\sigma_0^2\sigma_2^2 - \frac{1}{4}\sigma_1^4\right)}. \tag{24}
\]
Both F and \(F_2\) can be expanded into Taylor series around the points where the up-crossing conditions are fulfilled, so that the infinitesimal volume element can be written as \(\mathrm{d}F\, \mathrm{d}F_2 = |\det J|\, \mathrm{d}x\, \mathrm{d}y\), where J is the Jacobian matrix
\[
J = \begin{pmatrix} F_1 & F_2 \\ F_{12} & F_{22} \end{pmatrix},
\]
and \(|\det J| = |F_1 F_{22}|\), since \(F_2 = 0\) at the up-crossing points. The number density of up-crossing points at the threshold \(F_{\rm th}\) with \(F_{22} < 0\) and \(F_{22} > 0\), \(n^-\) and \(n^+\) respectively, can thus be evaluated as
\[
n^{\mp}(F_{\rm th}) = \int_0^{\infty} \mathrm{d}F_1 \int \mathrm{d}F_{22}\; |F_1 F_{22}|\; p(F_{\rm th}, F_1, F_2 = 0, F_{22}),
\]
where p is the multivariate Gaussian defined by Eq. (19) with p = 4, the covariance matrix (22), and the quadratic form (24), and the \(F_{22}\) integration runs over negative values for \(n^-\) and positive values for \(n^+\). Both expressions can be integrated analytically and their difference, \(n_{\rm det} = n^- - n^+\), as explained in Sect. 4.2, returns the number density of detections above the threshold \(F_{\rm th}\),
\[
n_{\rm det}(F_{\rm th}) = \frac{1}{(2\pi)^{3/2}}\, \frac{\sigma_1^2}{2\sigma_0^2}\, \frac{F_{\rm th}}{\sigma_0}\, \exp\left(-\frac{F_{\rm th}^2}{2\sigma_0^2}\right).
\]
Note how the dependence on \(\sigma_2\) drops out of the difference \(n^- - n^+\), leading to a very simple result. This equation is much less complex than Eqs. (41), (42) of van Waerbeke (2000). It returns the number of detection contours rather than the number of peaks.
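The spectral moments \(\sigma_i\) reduce to one-dimensional radial integrals for an isotropic spectrum. A Python sketch (numpy assumed, names ours; the arguments `power`, `window` and `filt` stand for P, \(\hat W\) and \(\hat U\)):

```python
import numpy as np

def sigma2(i, power, window, filt, kmax=20.0, n=100001):
    """Spectral moments of the filtered field:
    sigma_i^2 = (1 / 2 pi) * int k dk  k^(2 i) P(k) W(k)^2 U(k)^2,
    i.e. the 2D integral written as a radial one for an isotropic
    power spectrum."""
    k = np.linspace(1e-6, kmax, n)
    integrand = k**(2 * i + 1) * power(k) * window(k)**2 * filt(k)**2
    return np.trapz(integrand, k) / (2.0 * np.pi)
```

For a purely Gaussian effective spectrum, \(P\hat W^2\hat U^2 = \mathrm{e}^{-k^2}\), both \(\sigma_0^2\) and \(\sigma_1^2\) equal \(1/(4\pi)\), which provides a quick analytic cross-check of the quadrature.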
Figure 3: Top panels: probability density function (PDF) measured from the synthetic galaxy catalogue, covering 24.4 square degrees, analysed with all adopted filters and scales. The negative part of the PDF is well described by a Gaussian (solid lines). The 3σ error bars related to the Poissonian uncertainty are shown. This shows how weak-lensing signal-to-noise maps can be modelled as Gaussian random fields. Bottom panels: a similar comparison was performed between the measured power spectrum and the predicted one, based on the expected combined large-scale structure and noise power spectra convolved with the weak-lensing filter and the frequency response of the survey. For clarity, we only show the results for the intermediate scales.

For completeness, we report the number density estimate also for the classical up-crossing criterion, i.e. Eq. (21) alone, where the constraint on the second derivative of the field, \(F_{22}\), is not used, so that all up-crossing points, \(n^- + n^+\), are counted. This number density converges to the correct value for large thresholds, \(F_{\rm th}/\sigma_0 \gg 1\), because \(n^+ \to 0\) and \(n^- \to n_{\rm det}\) in this limit. This reflects the fact that, for large thresholds, the detection shapes become fully convex and any issues with more complex shapes disappear.
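The closed-form number density of detections, \(n_{\rm det} = n^- - n^+\), is trivial to evaluate. The Python sketch below is our reconstruction of that result from the Gaussian-field moments \(\sigma_0\) and \(\sigma_1\) defined in Sect. 4.3 (names ours); a useful property is that the differential counts peak at a threshold of one standard deviation, \(\nu = F_{\rm th}/\sigma_0 = 1\):

```python
import numpy as np

def n_detections(nu, sigma0, sigma1):
    """Number density of detection contours above the threshold
    F_th = nu * sigma0, i.e. the difference n^- - n^+ of the two
    up-crossing densities for a 2D Gaussian random field (our
    reconstruction of the closed-form result)."""
    return (1.0 / (2.0 * np.pi)**1.5) * (sigma1**2 / (2.0 * sigma0**2)) \
        * nu * np.exp(-nu**2 / 2.0)
```

Note that \(\sigma_2\) does not appear: it cancels in the difference of the two up-crossing densities.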
5 Analytic predictions vs. numerical simulations
We now compare the number counts of detections predicted by our analytic approach with those resulting from the analysis of synthetic galaxy catalogues produced with numerical ray-tracing simulations.
5.1 Numerical simulations
We use a hydrodynamical, numerical N-body simulation carried out with the code GADGET-2 (Springel 2005). We briefly summarise its main characteristics here and refer to Borgani et al. (2004) for a more detailed discussion. The simulation represents a concordance ΛCDM model; its dark-energy, dark-matter and baryon density parameters and the normalisation of the linear power spectrum of the matter-density fluctuations are those of Borgani et al. (2004). The Hubble constant is \(H_0 = 100\,h\) km s\(^{-1}\) Mpc\(^{-1}\) with h = 0.7. The simulated box is a cube with a side length of 192 h\(^{-1}\) Mpc, containing 480\(^3\) dark-matter particles and an equal number of gas particles, so that massive halos are resolved into several thousands of particles. The physics of the gas component includes radiative cooling, star formation and supernova feedback, assuming zero metallicity.
This simulation is used to construct backward light cones by stacking the output snapshots from z = 1 to z = 0. Since the snapshots contain the same cosmic structures at different evolutionary stages, they are randomly shifted and rotated to avoid repetitions of the same cosmic structures along one line of sight. The light cone is then sliced into thick planes, whose particles are subsequently projected with a triangular-shaped-cloud scheme (TSC, Hockney & Eastwood 1988) on lens planes perpendicular to the line of sight. We trace a bundle of light rays through one light cone, starting their propagation at the observer into directions on a regular grid of 4.9 degrees on each side. The effective resolution of this ray-tracing simulation is of the order of 1'. The effective convergence and shear maps obtained from the ray-tracing simulations are used to lens a background source population according to Eq. (4). Galaxies are randomly distributed on the lens plane at z = 1 with a fixed number density per arcmin\(^2\) and have intrinsic random ellipticities drawn from the distribution
\[
p(\epsilon_{\rm s}) = \frac{\exp\left[\left(1 - |\epsilon_{\rm s}|^2\right)/\sigma_\epsilon^2\right]}{\pi \sigma_\epsilon^2 \left[\exp\left(1/\sigma_\epsilon^2\right) - 1\right]}, \tag{30}
\]
where \(\sigma_\epsilon\) is the intrinsic ellipticity dispersion (for further detail, see Pace et al. 2007).
Synthetic galaxy catalogues produced in this way are finally analysed with the aperture mass (Eq. (5)) evaluated on a regular grid of positions covering the entire field of view of the light cone. All three filters presented in Sect. 3.2 were used with three different scales each: the polynomial filter with scales up to 11', the hyperbolic-tangent filter with scales up to 20', and the optimal filter with scale radii of the cluster model up to 4'. These scales were chosen to sample the angular scales typically used in the literature.
For a statistical analysis of the weak-lensing detections and their relation to the structures in the numerical simulation, see Pace et al. (2007).
5.2 Accounting for the geometry of surveys: the window function
Our analytic predictions for the number density of detections account for the survey frequency response \(\hat{W}(k)\) discussed in Sect. 3.1. As already stated, this is a simplified approach, and the full survey geometry should be considered (see e.g. Hivon et al. 2002) in case of complex sky masking, especially if small fields of view are involved. Thus, in our approach we consider only an effective power spectrum \(P(k)\hat{W}^2(k)\), where the frequency response \(\hat{W}(k)\) is the product of a high-pass filter suppressing the scales larger than the light cone's side length, with characteristic frequency \(k_{\rm f}\),
\[
\hat{W}_{\rm hp}(k) = \exp\left(-\frac{k_{\rm f}^2}{k^2}\right) \tag{31}
\]
(note that k is in the denominator here), a low-pass filter imposed by the average separation between the galaxies, with characteristic frequency \(k_{\rm g}\),
\[
\hat{W}_{\rm lp}(k) = \exp\left(-\frac{k^2}{k_{\rm g}^2}\right), \tag{32}
\]
and a low-pass filter related to the resolution \(\theta_{\rm pix}\) used to sample the sky with the quantity of Eq. (8),
\[
\hat{W}_{\rm pix}(k) = \frac{2\, J_1(k\theta_{\rm pix}/\sqrt{\pi})}{k\theta_{\rm pix}/\sqrt{\pi}}, \tag{33}
\]
where \(J_1(x)\) is the cylindrical Bessel function of order one. The latter function is the transform of a circular step function covering the same area as a square-shaped pixel of size \(\theta_{\rm pix}\). The square shapes of the field of view and the pixels could be better represented by the product of two step functions in both the x- and y-directions, but the low gain in accuracy does not justify the higher computational cost. Finally, for the comparison with our numerical ray-tracing simulation, we have to account for its resolution properties, which act on the convergence power spectrum only, by convolving with a low-pass filter
\[
\hat{W}_{\rm res}(k) = \exp\left(-\frac{k^2}{k_{\rm res}^2}\right), \tag{34}
\]
where \(k_{\rm res}\) corresponds to the effective resolution of the ray-tracing simulation discussed in Sect. 5.1.
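The combined frequency response can be sketched in a few lines of Python (scipy provides \(J_1\); the Gaussian forms of the high- and low-pass components are an assumption for illustration, and all names are ours):

```python
import numpy as np
from scipy.special import j1  # cylindrical Bessel function of order one

def w_pixel(k, theta_pix):
    """Pixel window: Fourier transform of a circular top-hat whose area
    equals that of a square pixel of side theta_pix."""
    r = theta_pix / np.sqrt(np.pi)          # matching-area radius
    x = np.asarray(k, dtype=float) * r
    out = np.ones_like(x)                   # limit 2 J1(x)/x -> 1 for x -> 0
    nz = x != 0
    out[nz] = 2.0 * j1(x[nz]) / x[nz]
    return out

def w_effective(k, k_f, k_g, theta_pix):
    """Product of a high-pass at the field scale, a low-pass at the
    galaxy-separation scale (both Gaussian forms assumed here), and the
    pixel window."""
    return np.exp(-k_f**2 / k**2) * np.exp(-k**2 / k_g**2) * w_pixel(k, theta_pix)
```

The effective power spectrum entering the moment integrals is then P(k) multiplied by the square of this response.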
The agreement of this simple recipe with the numerical simulation is shown in the bottom panels of Fig. 3, where we compare the expected effective power spectrum convolved with the filter with the one measured in the numerical simulation. Apart from noise at large scales, only small deviations at high frequencies are visible. Note that when relating the detection threshold to the signal-to-noise ratio S/N according to the variance given by Eq. (7), all window functions mentioned are used except for the sampling window of Eq. (33), which, of course, does not affect the variance.
5.3 Comparison with numerical simulations
Our analytic approach approximates the data as Gaussian random fields, which represents both the noise and the LSS contributions to the weak-lensing signal-to-noise maps very well. In fact, even if the shear and convergence of the LSS show non-Gaussianities (Jain et al. 2000), weak-lensing data are convolved with filters broad enough to make their signal Gaussian. On the other hand, this is not the case for non-linear objects such as galaxy clusters, whose non-Gaussianity survives the filtering process. Thus, particular care has to be taken when comparing the predicted number counts with real or simulated data: either by modelling the non-linear structures, which is difficult and uncertain, or by avoiding their contribution in the first place. We follow the latter approach by counting the negative instead of the positive peaks found in the convergence maps derived from the galaxy catalogues. In fact, massive halos contribute only positive detections, in contrast to the LSS and other sources of noise, which produce positive and negative detections equally and with the same statistical properties. Both negative and positive peak counts contain cosmologically relevant information. Apart from noise, the negative peak counts are caused by linearly evolved LSS, while the difference between positive and negative counts is due to non-linear structures. The mean density of negative peak counts can also be used to statistically correct positive peak counts for the level of spurious detections.
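The negative-peak strategy can be made concrete with a few lines of Python (numpy assumed, names ours): count local maxima above the threshold and, for the negative peaks, local minima below its negative by sign-flipping the map.

```python
import numpy as np

def count_peaks(snr_map, threshold):
    """Count local maxima of `snr_map` above `threshold` (positive peaks):
    interior pixels strictly larger than all eight neighbours."""
    f = snr_map
    interior = f[1:-1, 1:-1]
    is_max = np.ones_like(interior, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            is_max &= interior > f[1 + dy:f.shape[0] - 1 + dy,
                                   1 + dx:f.shape[1] - 1 + dx]
    return int(np.sum(is_max & (interior > threshold)))

def count_negative_peaks(snr_map, threshold):
    """Negative peaks: minima below -threshold, counted as the positive
    peaks of the sign-flipped map."""
    return count_peaks(-snr_map, threshold)
```

On a map containing one positive and one equally strong negative bump, the two counters return one detection each, reflecting the symmetry of the noise and LSS contributions exploited in the text.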
Figure 4: Number of negative peaks detected in the numerical simulation (shaded area) compared to the prediction obtained with the proposed method, both with the original up-crossing criterion (dashed line) and with the new blended up-crossing criterion (points with error bars). The standard up-crossing criterion is a good approximation for high signal-to-noise ratios but fails for lower S/N, which is well described by the new version. Error bars represent the Poissonian noise of the number counts of a one-square-degree survey, while the shaded area shows the Poisson noise in our numerical simulation covering 24.4 square degrees.

A comparison of the original up-crossing criterion with the new blended up-crossing criterion presented here is shown in Fig. 4, together with the number counts of negative peaks obtained from the numerical simulations. Only the result for the optimal filter is shown, for clarity. As expected, the two criteria agree very well for high signal-to-noise ratios, since the detections are mostly of type-0, i.e. with a convex contour, as shown in the lower left panel of Fig. 2, while the merging of detections at lower signal-to-noise ratios is correctly taken into account only by our new criterion.
Figure 5: Number of weak-lensing peaks, shown as a function of the signal-to-noise ratio, predicted with the analytic method presented here for the Schneider et al. (1998) (poly), the Schirmer et al. (2004) (tanh), and the Maturi et al. (2005) (opt) filters from top to bottom, and increasing filter radii from left to right, as labelled in each panel. The number counts generated by the intrinsic galaxy noise alone and by the LSS alone are also shown. Numbers refer to a survey of one square degree with given galaxy number density and intrinsic shear dispersion. The results are compared with the number counts of positive (labelled with +) as well as negative (labelled with −) peaks detected in the synthetic galaxy catalogues from the numerical simulation. Error bars and shaded areas refer to the Poissonian noise, i.e. the square root of the number of detections. Error bars have the same meaning as in Fig. 4.

To additionally confirm the assumption that the contributions from both the LSS and the noise from the background galaxies can be described by a Gaussian random field after the filtering process, we modelled the positive peak counts as a combination of the peak statistics described in this work (used for the negative peaks) and the halo mass function for the contribution of highly non-linearly evolved halos, which should be responsible for the high signal-to-noise part and are not taken into account by the Gaussian-field statistics. The analytical prediction in this case also shows good agreement with the results from the simulation. Detailed information on the method and results will be discussed in a future work.
We finally compare the contributions of the LSS and the noise to the total signal by treating them separately. Their number counts are plotted with dashed and dot-dashed lines in Fig. 5. All filters show an unsurprisingly large number of detections caused by the noise up to signal-to-noise ratios of 3, and a number of detections caused by the LSS which increases with the filter scale, except for the optimal filter, which always suppresses their contribution to a negligible level. Thus, the LSS contaminates halo catalogues selected by weak lensing up to signal-to-noise ratios of 4–5 if its contribution is ignored in the filter definition. Note that the total number of detections can be obtained only by counting the peaks in the total signal, i.e. LSS plus noise, and not by adding the peaks found in the two components separately, because the blending of peaks differs between the two cases.
6 Conclusion
We have applied an analytic method for predicting peak counts in weak-lensing surveys, based on the theory of Gaussian random fields (Bardeen et al. 1986). Peaks are typically detected in shear fields after convolving them with filters of different shapes and widths. We have taken these into account by first filtering the assumed Gaussian random field appropriately and then searching for suitably defined peaks. On the way, we have argued for a refinement of the up-crossing criterion for peak detection which avoids biased counts of detections with low signal-to-noise ratio, and implemented it in the analytic peak-count prediction. Peaks in the non-linear tail of the shear distribution are underrepresented in this approach because they are highly non-Gaussian, but our method is well applicable to the prediction of spurious counts, and therefore to the quantification of the background in attempts to measure number densities of dark-matter halos. We have compared our analytic prediction to peak counts in numerically simulated, synthetic shear catalogues and found agreement at the expected level.
Our main results can be summarised as follows:
The shape and size of the filter applied to the shear field have a large influence on the contamination by spurious detections. For the optimal filter, the contribution by large-scale structures is low on all filter scales, while it is typically substantial for the other filters. This confirms previous results obtained with a different approach (Pace et al. 2007; Dietrich et al. 2007; Maturi et al. 2005).
 Taken together, large-scale structure and galaxy noise contribute the majority of detections up to signal-to-noise ratios between 3 and 5. Only above this level do detections due to real dark-matter halos begin to dominate.
 Shape and shot noise due to the background galaxies cannot be predicted separately from the large-scale structure, since the two affect each other in a complex way.
 The optimal filter allows the detection of 30-40 halos per square degree at signal-to-noise ratios high enough to suppress all noise contributions. For the other filters, this number is lower by almost an order of magnitude.
This work was supported by the Transregional Collaborative Research Centre TRR 33 (M.M., M.B.), grant number BA 1369/12-1 of the Deutsche Forschungsgemeinschaft, the Heidelberg Graduate School of Fundamental Physics, and the IMPRS for Astronomy & Cosmic Physics at the University of Heidelberg (C.A.).
Appendix A: Forecast for different weak-lensing surveys
For convenience, we evaluate here the expected number density of peak counts for and for a collection of present and future weak-lensing surveys with different intrinsic ellipticity dispersion, , and galaxy number density, , per arcmin^{2}. To give typical values, we assumed for all of them a square-shaped field of view, a uniform galaxy number density, and no gaps, for two main reasons. First, their fields of view are typically very large and thus do not affect the frequencies relevant for our evaluation. Second, the masking of bright objects can be done in many different ways, which cannot be considered in this paper in any detail. Finally, we fixed the sampling scale, described by Eq. (33), to be 5 times smaller than the typical filter scale in order to avoid undersampling, i.e. such that the high-frequency cutoff is imposed by the filters themselves. For each filter, we used three different scales, namely : scale1 = , scale2 = , scale3 = 11'; : scale1 = 5', scale2 = 10', scale3 = 20'; : scale1 = and scale2 = ; (Gaussian FWHM): scale1 = 1', scale2 = 2', scale3 = 5'. The results are shown in Table A.1, together with the number counts obtained with a simple Gaussian filter, as usually used together with the Kaiser & Squires shear-inversion algorithm (Kaiser & Squires 1993).
Table A.1: Expected number counts of peak detections per square degree for different weak-lensing surveys, filters, and signal-to-noise ratio cutoffs.
References
 Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15
 Borgani, S., Murante, G., Springel, V., et al. 2004, MNRAS, 348, 1078
 Dietrich, J. P., & Hartlap, J. 2010, MNRAS, 402, 1049
 Dietrich, J. P., Erben, T., Lamer, G., et al. 2007, A&A, 470, 821
 Erben, T., van Waerbeke, L., Mellier, Y., et al. 2000, A&A, 355, 23
 Heavens, A., & Peacock, J. 1988, MNRAS, 232, 339
 Hennawi, J. F., & Spergel, D. N. 2005, ApJ, 624, 59
 Hetterscheidt, M., Erben, T., Schneider, P., et al. 2005, A&A, 442, 43
 Hivon, E., Górski, K. M., Netterfield, C. B., et al. 2002, ApJ, 567, 2
 Hockney, R., & Eastwood, J. 1988, Computer Simulation Using Particles (Bristol: Hilger)
 Jain, B., Seljak, U., & White, S. 2000, ApJ, 530, 547
 Kaiser, N., & Squires, G. 1993, ApJ, 404, 441
 Kratochvil, J. M., Haiman, Z., & May, M. 2010, Phys. Rev. D, 81, 043519
 Marian, L., Smith, R. E., & Bernstein, G. M. 2009, ApJ, 698, L33
 Maturi, M., Meneghetti, M., Bartelmann, M., Dolag, K., & Moscardini, L. 2005, A&A, 442, 851
 Maturi, M., Schirmer, M., Meneghetti, M., Bartelmann, M., & Moscardini, L. 2007, A&A, 462, 473
 Pace, F., Maturi, M., Meneghetti, M., et al. 2007, A&A, 471, 731
 Peacock, J., & Dodds, S. 1996, MNRAS, 280, L19
 Polyanin, A. D., & Manzhirov, A. V. 1998, Handbook of Integral Equations (Boca Raton: CRC Press)
 Schirmer, M., Erben, T., Schneider, P., et al. 2003, A&A, 407, 869
 Schirmer, M., Erben, T., Schneider, P., Wolf, C., & Meisenheimer, K. 2004, A&A, 420, 75
 Schneider, M. D., & Bridle, S. 2010, MNRAS, 402, 2127
 Schneider, P. 1996, MNRAS, 283, 837
 Schneider, P., van Waerbeke, L., Jain, B., & Kruse, G. 1998, MNRAS, 296, 873
 Springel, V. 2005, MNRAS, 364, 1105
 van Waerbeke, L. 2000, MNRAS, 313, 524
All Figures
Figure 1: Overview of different weak-lensing filters. The left panel shows the three filters adopted here for use on shear catalogues, while the central and right panels show the corresponding filters to be used on convergence fields in real and Fourier space, respectively. For illustration only, the spatial frequencies in the right panel are rescaled such that the main filter peaks coincide.
Figure 2: Weak-lensing detection maps. The top four panels show the segmentation of a realistic weak-lensing S/N map for increasing thresholds: 0.1, 0.5, 1, and 2, respectively. The bottom panel sketches the three discussed detection types together with the points identified by the standard and the modified up-crossing criteria. Red circles and blue squares correspond to up-crossing points for which the second field derivatives are and , respectively.
Figure 3: Top panels: probability density function (PDF) measured from the synthetic galaxy catalogue, covering 24.4 square degrees, analysed with all adopted filters and scales. The negative part of the PDF is well described by a Gaussian (solid lines). The 3σ error bars related to the Poissonian uncertainty are shown. This shows how weak-lensing signal-to-noise maps can be modelled as Gaussian random fields. Bottom panels: a similar comparison between the measured power spectrum and the predicted one, based on the expected combined large-scale structure and noise power spectra convolved with the weak-lensing filter and the frequency response of the survey. For clarity, we only show the results for the intermediate scales.
Figure 4: Number of negative peaks detected in the numerical simulation (shaded area) compared to the prediction obtained with the proposed method, both with the original up-crossing criterion (dashed line) and with the new blended up-crossing criterion (points with error bars). The standard up-crossing criterion is a good approximation for high signal-to-noise ratios but fails for lower S/N, which is well described by the new version. Error bars represent the Poissonian noise of the number counts of a one-square-degree survey, while the shaded area shows the Poisson noise in our numerical simulation covering 24.4 square degrees.
Figure 5: Number of weak-lensing peaks, shown as a function of the signal-to-noise ratio, predicted with the analytic method presented here for the Schneider et al. (1998) "poly", the Schirmer et al. (2004) "tanh", and the Maturi et al. (2005) "opt" filters from top to bottom, and increasing filter radii from left to right, as labeled in each panel. The number counts generated by the intrinsic galaxy noise alone, , and the LSS alone, , are also shown. Numbers refer to a survey of one square degree with a galaxy number density of and an intrinsic shear dispersion of . The results are compared with the number counts of positive (labeled +) as well as negative (labeled ) peaks detected in the synthetic galaxy catalogues from the numerical simulation. Error bars and shaded areas refer to the Poissonian noise, i.e. the square root of the number of detections. Error bars have the same meaning as in Fig. 4.
Copyright ESO 2010