A&A, Volume 519, September 2010
Article Number: A23
Number of pages: 9
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/200912866
Published online: 08 September 2010
An analytic approach to number counts of weak-lensing peak detections
M. Maturi - C. Angrick - F. Pace - M. Bartelmann
Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
Received 10 July 2009 / Accepted 15 March 2010
Abstract
We apply an analytic method to predict peak counts in
weak-lensing surveys. It is based on the theory of Gaussian random
fields and quantifies the level of detections caused by chance
projections of large-scale structures as well as by the shape and
shot noise contributed by the background galaxies. A simple
analytical recipe is given to compute the signal-to-noise
distribution of those detections. We compare our method to peak
counts obtained from numerical ray-tracing simulations and find good
agreement at the expected level. The number of peak detections
depends substantially on the shape and size of the filter applied to
the gravitational shear field. We confirm that weak-lensing
peak counts are dominated by spurious detections up to
signal-to-noise ratios of 3-5 and that most filters yield only a
few detections per square degree above this level, while a filter
optimised for suppressing large-scale structure noise returns up to
an order of magnitude more. Galaxy shape noise and noise from large-scale
structures cannot be treated as two independent components since
the two contributions add in a non-trivial way.
Key words: cosmology: theory - large-scale structure of Universe - galaxies: clusters: general - gravitational lensing: weak
1 Introduction
Wide-area surveys for weak gravitational lensing can be and have been used for counting peaks in the shear signal, which are commonly interpreted as the signature of sufficiently massive dark-matter halos. Such detections, however, are contaminated by spurious detections caused by the chance superposition of large-scale structures, and by the shape- and shot-noise contributions from the background galaxies used to sample the foreground shear field. As a function of the peak height, what is the contribution of genuine halos to these detections, and how much do the large-scale structure and the other sources of noise contribute? In addition, the number of peaks produced by the large-scale structure constitutes a cosmological signal which can be used as a cosmological probe together with cluster counts. Can we predict this number without expensive numerical simulations?
Given the power of lensing-peak number counts as a cosmological probe (Dietrich & Hartlap 2010; Marian et al. 2009; Kratochvil et al. 2010), we address this question here by applying a suitable analytic approach based on peak counts in Gaussian random fields, as laid out by Bardeen et al. (1986). This extends the work of van Waerbeke (2000), who studied the background-galaxy noise component alone. With respect to the latter work, we give a detection definition more suitable for comparison with observations and include the non-negligible contribution of large-scale structures. This is reasonable even though at least the highest peaks are caused by halos in the non-Gaussian tail of the density fluctuations, because the noise and large-scale structure contributions to the filtered weak-lensing maps remain Gaussian, and thus at least their contribution to the counts can be well described analytically. Peaks with the highest signal-to-noise ratios are expected to be more abundant than predicted based on Gaussian random fields.
Weak-lensing data are filtered to derive peak counts from them. Several linear filters have been proposed and used in the literature. They can all be seen as convolutions of the measured shear field with filter functions of different shapes. Many shapes have been proposed for different purposes (Schirmer et al. 2004; Maturi et al. 2005; Schneider et al. 1998). One filter function, called the optimal filter later on, was designed specifically to suppress the contribution from large-scale structures by maximising the signal-to-noise ratio of halo detections against the shear field of the large-scale structure.
We study three such filters here, with the optimal filter among them. Results will differ substantially, arguing for a careful filter choice if halo detections are the main goal of the application. We compare our analytic results to a numerical simulation and show that both agree at the expected level. We begin in Sect. 2 with a brief summary of gravitational lensing as needed here and describe filtering methods in Sect. 3. We present our analytic method in Sect. 4 and compare it to numerical simulations in Sect. 5, where we also show our main results. Conclusions are summarised in Sect. 6. In Appendix A, we show predictions of peak counts and the noise levels in them for several planned and ongoing weak-lensing surveys.
2 Gravitational lensing
Isolated lenses are characterised by their lensing potential

\psi(\vec\theta) = \frac{2}{c^2} \int \frac{D_{\rm ds}}{D_{\rm d} D_{\rm s}}\, \Phi(D_{\rm d}\vec\theta, z)\, {\rm d}z ,   (1)

where \Phi is the Newtonian gravitational potential of the lens and D_{\rm d}, D_{\rm s} and D_{\rm ds} are the angular-diameter distances from the observer to the lens, from the observer to the source, and from the lens to the source, respectively. The lens equation relating the angular position of the source, \vec\beta, to the observed image position, \vec\theta, is

\vec\beta = \vec\theta - \vec\nabla\psi(\vec\theta) .   (2)

Since sources such as distant background galaxies are much smaller than the typical scale on which the lens properties change and the angles involved are small, it is possible to linearise Eq. (2) such that the induced image distortion is expressed by the Jacobian

A = \frac{\partial\vec\beta}{\partial\vec\theta} = \left(\delta_{ij} - \frac{\partial^2\psi}{\partial\theta_i\,\partial\theta_j}\right) = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix} ,   (3)

where \kappa = \nabla^2\psi/2 is the convergence and \gamma = \gamma_1 + {\rm i}\gamma_2 is the complex shear. Since only the reduced shear $g(\vec{\theta})=\gamma(\vec{\theta})/[1-\kappa(\vec{\theta})]$ is observable, the measured ellipticity of a background galaxy is

\epsilon = \frac{\epsilon_{\rm s} + g}{1 + g^*\epsilon_{\rm s}} \approx \epsilon_{\rm s} + \gamma ,   (4)

where \epsilon_{\rm s} is the intrinsic source ellipticity and the approximation holds in the weak-lensing regime, \kappa \ll 1 and |\gamma| \ll 1.
3 Measuring weak gravitational lensing
3.1 Weak lensing estimates
In the absence of intrinsic alignments between background galaxies due to
possible tidal interactions (Heavens & Peacock 1988; Schneider & Bridle 2010), the intrinsic
source ellipticities in Eq. (4) average to zero in a
sufficiently large source sample.
An appropriate and convenient measure for the lensing signal on
circular apertures is the weighted average of the tangential
component of the shear, \gamma_{\rm t},
relative to the position \vec\theta_0 on the sky,

\Gamma(\vec\theta_0) = \int {\rm d}^2\theta\; Q(|\vec\theta - \vec\theta_0|)\, \gamma_{\rm t}(\vec\theta; \vec\theta_0) .   (5)

The filter function Q determines the statistical properties of the quantity \Gamma.

Data on gravitational lensing by a mass concentration can be modelled
by a signal described by its amplitude A and its radial profile \tau,
and a noise component n with zero mean, i.e.

\gamma_{\rm t}(\vec\theta) = A\,\tau(|\vec\theta|) + n(\vec\theta)   (6)

for the tangential shear. The variance of \Gamma is then

\sigma^2 = \int \frac{{\rm d}^2 k}{(2\pi)^2}\, P_{\rm N}(k)\, |\hat Q(k)|^2 ,   (7)

where P_{\rm N}(k) is the power spectrum of the noise component and \hat Q is the Fourier transform of the filter. In practical applications, the integral in Eq. (5) is approximated by a sum over the background galaxies inside the aperture,

\Gamma(\vec\theta_0) \approx \frac{1}{n_{\rm g}} \sum_i Q(|\vec\theta_i - \vec\theta_0|)\, \epsilon_{{\rm t},i} ,   (8)

where n_{\rm g} is the number density of background galaxies and \epsilon_{{\rm t},i} is the tangential component of the ellipticity of the galaxy at position \vec\theta_i. The discrete sampling by the galaxies and the finite size of the field act as a frequency response suppressing the smallest and largest scales, respectively; we return to this point in Sect. 5.2.
3.2 Weak lensing filters
Figure 1: Overview of different weak-lensing filters. The left panel shows the three filters adopted here to be used on shear catalogues, while the central and right panels show the corresponding filters to be used on convergence fields in real and Fourier space, respectively. For illustration only, the spatial frequencies in the right panel are rescaled such that the main filter peaks coincide.
Different filter profiles have been proposed in the literature depending on their specific application in weak lensing. We adopt three of them here which have been used so far to identify halo candidates through weak lensing.
- (1)
- The polynomial filter described by Schneider et al. (1998),

Q(\vartheta) = \frac{6}{\pi\theta_{\rm s}^2}\, x^2 \left(1 - x^2\right) {\rm H}(1 - x) ,   (9)

where the projected angular distance from the filter centre is expressed in units of the filter scale radius through x = \vartheta/\theta_{\rm s}, and H is the Heaviside step function. This filter was originally proposed for cosmic-shear analysis, but several authors have used it also for dark-matter halo searches (see e.g. Erben et al. 2000; Schirmer et al. 2004).
- (2)
- A filter optimised for halos with NFW density profile,
approximating their shear signal with a hyperbolic tangent
(Schirmer et al. 2004),

Q(x) = \left[1 + {\rm e}^{a - bx} + {\rm e}^{-c + dx}\right]^{-1} \frac{\tanh(x/x_{\rm c})}{x/x_{\rm c}} ,   (10)

where the two exponentials in parentheses are cut-offs imposed at small and large radii (a = 6, b = 150, c = 50, and d = 47) and x_{\rm c} is a parameter defining the filter-profile slope. A good choice for the latter is x_{\rm c} \approx 0.1, as empirically shown by Hetterscheidt et al. (2005).
- (3)
- The optimal linear filter introduced by Maturi et al. (2005) which,
together with the optimisation with respect to the expected
halo-lensing signal, optimally suppresses the contamination due to the
line-of-sight projection of large-scale structures (LSS),

\hat\Psi(k) \propto \frac{\hat\tau(k)}{P_{\rm N}(k)} .   (11)

Here, \hat\tau(k) is the Fourier transform of the expected shear profile of the halo, and P_{\rm N}(k) = P_{\rm LSS}(k) + \sigma_\epsilon^2/(2 n_{\rm g}) is the complete noise power spectrum, including the linearly evolved LSS through P_{\rm LSS}(k) as well as the noise contributions from the intrinsic source ellipticities and the shot noise, given their angular number density n_{\rm g} and the intrinsic ellipticity dispersion \sigma_\epsilon. Note that for the filter construction we use the linear LSS power spectrum instead of the non-linear one. This implicitly defines a halo, since we assume that the difference between the linear and the non-linear power spectrum is entirely due to halo formation. This filter depends on parameters determined by physical quantities such as the halo mass and redshift, the galaxy number density and the intrinsic ellipticity dispersion, and not on an arbitrarily chosen scale which would have to be determined empirically through costly numerical simulations (e.g. Hennawi & Spergel 2005). An application of this filter to the GaBoDS survey (Schirmer et al. 2003) was presented in Maturi et al. (2007), while a detailed comparison of the three filters was performed by Pace et al. (2007) by means of numerical ray-tracing simulations. They found that the optimal linear filter given by Eq. (11) returns the halo sample with the largest completeness and the lowest number of spurious detections caused by the LSS (about 10%).
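The two closed-form filter shapes above translate directly into code. The following is a minimal sketch, not the authors' implementation: function names are ours, the cut-off constants are those quoted in the text, and survey-specific normalisation is omitted.

```python
import math

def q_poly(theta, theta_s):
    """Polynomial filter of Schneider et al. (1998), Eq. (9); zero outside theta_s."""
    x = theta / theta_s
    if x >= 1.0:
        return 0.0
    return 6.0 / (math.pi * theta_s**2) * x**2 * (1.0 - x**2)

def q_tanh(theta, theta_s, xc=0.1, a=6.0, b=150.0, c=50.0, d=47.0):
    """Hyperbolic-tangent filter of Schirmer et al. (2004), Eq. (10), with the
    exponential cut-offs at small and large radii quoted in the text."""
    x = theta / theta_s
    if x <= 0.0:
        return 0.0
    cutoff = 1.0 / (1.0 + math.exp(a - b * x) + math.exp(-c + d * x))
    return cutoff * math.tanh(x / xc) / (x / xc)
```

The small-radius cut-off suppresses the filter near the centre, where the NFW shear profile would diverge, and the large-radius cut-off confines it to a finite aperture.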
3.3 Weak lensing estimates and convergence
In order to simplify comparisons with numerical simulations, we
convert the quantity \Gamma from Eq. (5) to a
quantity involving the convergence,

\Gamma(\vec\theta_0) = \int {\rm d}^2\theta\; U(|\vec\theta - \vec\theta_0|)\, \kappa(\vec\theta) ,   (12)

where U is related to Q by

Q(\vartheta) = \frac{2}{\vartheta^2} \int_0^{\vartheta} {\rm d}\vartheta'\, \vartheta'\, U(\vartheta') - U(\vartheta)   (13)

(Schneider 1996) if the weight function U is compensated, i.e. \int {\rm d}\vartheta\, \vartheta\, U(\vartheta) = 0.

Equation (13) has the form of a Volterra integral equation of the first kind, which can be solved for U once Q is specified (Polyanin & Manzhirov 1998). The solution can be obtained analytically for the polynomial filter,

U(\vartheta) = \frac{9}{\pi\theta_{\rm s}^2} \left(1 - x^2\right) \left(\frac{1}{3} - x^2\right) {\rm H}(1 - x) ,   (16)

with x = \vartheta/\theta_{\rm s}, and numerically for the hyperbolic-tangent filter of Eq. (10) with an efficient recursive scheme over the desired radii. The iterative procedure is stopped once the difference between successive iterations is sufficiently small.
We show in Fig. 1 the resulting filter profiles to be used on shear catalogues through Eq. (5) and their corresponding variants to be used on convergence fields with Eq. (12), both in real and in Fourier space. All of them are band-pass filters, and the two designed for halo searches have larger amplitudes at higher frequencies, where the halo signal is most significant, than the polynomial filter by Schneider et al. (1998). This feature is particularly prominent for the optimal filter, which is additionally negative at low frequencies, where the LSS signal dominates. These two features ensure the minimisation of the LSS contamination in halo searches.
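The relation between the shear filter Q and the convergence filter U can be checked numerically. The sketch below assumes the compensated-filter relation of Schneider (1996) and the polynomial pair; function names and the trapezoidal quadrature are our illustrative choices.

```python
import math

def u_poly(theta, theta_s=1.0):
    # Compensated convergence weight for the polynomial filter (Schneider et al. 1998 form)
    x = theta / theta_s
    if x >= 1.0:
        return 0.0
    return 9.0 / (math.pi * theta_s**2) * (1.0 - x**2) * (1.0 / 3.0 - x**2)

def q_poly(theta, theta_s=1.0):
    # Polynomial shear filter, for comparison
    x = theta / theta_s
    if x >= 1.0:
        return 0.0
    return 6.0 / (math.pi * theta_s**2) * x**2 * (1.0 - x**2)

def q_from_u(theta, theta_s=1.0, n=2000):
    # Q(t) = (2/t^2) * int_0^t t' U(t') dt' - U(t), via the trapezoidal rule
    h = theta / n
    integral = 0.0
    for i in range(n):
        t0, t1 = i * h, (i + 1) * h
        integral += 0.5 * h * (t0 * u_poly(t0, theta_s) + t1 * u_poly(t1, theta_s))
    return 2.0 / theta**2 * integral - u_poly(theta, theta_s)
```

For the polynomial pair the reconstruction is exact, so `q_from_u` agrees with `q_poly` to quadrature accuracy; for the hyperbolic-tangent filter the same relation would have to be inverted numerically, as described in the text.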
4 Predicting weak lensing peak counts
Our analytic predictions for the number counts of weak-lensing detections as a function of their signal-to-noise ratio are based on modelling the analysed and filtered lensing data, resulting from Eq. (12), as an isotropic and homogeneous Gaussian random field. This is an extremely good approximation for the noise and the LSS components, but not necessarily for the non-linear structures such as sufficiently massive halos, as we shall discuss in Sect. 5.3.
4.1 Statistics of Gaussian random fields
An n-dimensional random field F(\vec x) assigns a set of
random numbers to each point \vec x in an n-dimensional space. A
joint probability function can be declared for m arbitrary points
\vec x_j as the probability to have field values between F(\vec x_j)
and F(\vec x_j) + {\rm d}F(\vec x_j), with j = 1, \dots, m.
For Gaussian random fields, the field itself,
its derivatives, integrals and any linear combination thereof are
Gaussian random variables, which we denote by y_i with mean values
\langle y_i \rangle and central deviations \Delta y_i = y_i - \langle y_i \rangle,
with i = 1, \dots, m.
Their joint probability function is a multivariate Gaussian,

p(y_1, \dots, y_m)\, {\rm d}y_1 \cdots {\rm d}y_m = \frac{\exp(-\mathcal{Q})}{\left[(2\pi)^m \det C\right]^{1/2}}\, {\rm d}y_1 \cdots {\rm d}y_m ,   (19)

with the quadratic form

\mathcal{Q} = \frac{1}{2} \sum_{i,j} \Delta y_i \left(C^{-1}\right)_{ij} \Delta y_j ,   (20)

where C_{ij} \equiv \langle \Delta y_i\, \Delta y_j \rangle is the covariance matrix of the y_i.

Since we are interested in gravitational-lensing quantities such as
the convergence \kappa,
we here consider two-dimensional Gaussian
random fields only, with n = 2.
We adopt the
formalism of Bardeen et al. (1986), where K,
\eta_i = \partial_i K and
\zeta_{ij} = \partial_i \partial_j K denote the (filtered) convergence field
and its first and second derivatives, respectively.
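Realisations of such fields are easy to generate, which is useful for testing peak statistics. The following FFT-based sketch is illustrative only (the function name and the power-law spectrum are our choices, not from the paper): white Gaussian noise is coloured with the square root of a target power spectrum.

```python
import numpy as np

def gaussian_random_field(n=256, box=1.0, spectral_index=-2.0, seed=0):
    """Isotropic 2D Gaussian random field with P(k) ~ k**spectral_index,
    normalised to unit variance and zero mean (the k = 0 mode is removed)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n, d=box / n) * 2 * np.pi
    ky = np.fft.fftfreq(n, d=box / n) * 2 * np.pi
    k = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    with np.errstate(divide="ignore"):
        power = np.where(k > 0, k**spectral_index, 0.0)  # zero the mean mode
    noise = np.fft.fft2(rng.standard_normal((n, n)))      # white Gaussian noise
    field = np.fft.ifft2(noise * np.sqrt(power)).real     # colour it with sqrt(P)
    return field / field.std()
```

Because derivatives and convolutions are linear operations, any filtered version of such a field (e.g. multiplication by a filter \hat U(k) in Fourier space) remains Gaussian, which is the property the analytic method relies on.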
4.2 Definition of detections: a new up-crossing criterion
We define as detection any contiguous area of the field K
which exceeds a given threshold, K_{\rm th},
determined by the required signal-to-noise ratio, S/N, and the
variance \sigma^2 of the quantity \Gamma
(see Eq. (7)). This definition is widely used in surveys
for galaxy clusters or peak counts in weak-lensing surveys and can
easily be applied both to real data and Gaussian random fields.
Each detection is delimited by its contour at the threshold level
K_{\rm th}.
If this contour is convex, it has a single point,
called the up-crossing point, where the field K
is rising along the x-axis direction only, i.e. where the field
gradient has one vanishing and one positive component (see the sketch
for type-0 detections in the lower panel of Fig. 2),

K = K_{\rm th} , \qquad \eta_x > 0 , \qquad \eta_y = 0 .   (21)

Since we assume the field to be statistically homogeneous and isotropic, counting up-crossing points is then equivalent to counting detections.
However, this criterion is prone to fail for low thresholds, where
detections tend to merge and the isocontours tend to deviate from the
assumed convex shape. This causes detection numbers to be
overestimated at low cut-offs because each ``peninsula'' and ``bay''
of their contour (see type-1 in Fig. 2) would be
counted as one detection.
We solve this problem by dividing the up-crossing points into those
with negative (red circles in Fig. 2) and those with positive (blue squares)
curvature along the x-axis, \zeta_{xx} < 0 and \zeta_{xx} > 0,
respectively. For each detection, their difference is exactly one even
for irregular contours (type-1), providing the correct number
count. The only exception are detections containing one or more
``lagoons'' (type-2), since each of them decreases the detection count
by one. But since this is not a frequent case and occurs only at very
low cut-off levels, we do not consider it here.
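The failure of the naive per-row up-crossing count, and what the detection definition should return instead, can be illustrated with a toy example (this is a didactic sketch, not the paper's estimator): a concave "type-1" region is one detection, but a scan along the x direction crosses its boundary upwards more than once.

```python
def count_detections(field, threshold):
    """Count contiguous regions above threshold (4-connectivity flood fill)."""
    ny, nx = len(field), len(field[0])
    seen = [[False] * nx for _ in range(ny)]
    n = 0
    for i in range(ny):
        for j in range(nx):
            if field[i][j] > threshold and not seen[i][j]:
                n += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < ny and 0 <= b < nx and not seen[a][b] \
                            and field[a][b] > threshold:
                        seen[a][b] = True
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return n

def count_upcrossings_x(field, threshold):
    """Naive criterion: grid points where the field rises through the
    threshold along the x direction, counted row by row."""
    return sum(1 for row in field for j in range(1, len(row))
               if row[j - 1] <= threshold < row[j])

# A concave ("type-1") region: one detection, but three x-direction up-crossings.
concave = [[0, 1, 0, 1, 0],
           [0, 1, 1, 1, 0]]
```

Here `count_detections(concave, 0.5)` gives 1 while `count_upcrossings_x(concave, 0.5)` gives 3, mimicking the overcounting of "peninsulas" and "bays" that the curvature-split criterion corrects.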
Figure 2: Weak-lensing detection maps. The top four panels show the
segmentation of a realistic weak-lensing S/N map for increasing
thresholds: 0.1, 0.5, 1, and 2, respectively. The bottom panel
sketches the three discussed detection types together with the
points identified by the standard and the modified up-crossing
criteria. Red circles and blue squares correspond to up-crossing
points with negative and positive curvature along the x-axis,
respectively.
4.3 The number density of detections
Once the relation between the detections and the Gaussian random
variables (K, \eta_x, \eta_y, \zeta_{xx}), constrained by
Eq. (21) together with \zeta_{xx} < 0
or \zeta_{xx} > 0, is defined, we can describe their
statistical properties through the multivariate Gaussian probability
distribution given by Eq. (19), with the covariance matrix
as given by van Waerbeke (2000). Its entries are combinations of the
spectral moments of the filtered convergence field,

\sigma_j^2 = \int \frac{k\, {\rm d}k}{2\pi}\, k^{2j}\, P_{\rm eff}(k)\, |\hat U(k)|^2 ,

where P_{\rm eff}(k) is the effective power spectrum discussed in
Sect. 5.2. Integrating the constrained probability distribution over
the allowed ranges of \eta_x and \zeta_{xx}, and taking the difference
of the counts of negative- and positive-curvature up-crossing points,
yields the expected number density of detections above the threshold
\nu = K_{\rm th}/\sigma_0,

n_{\rm det}(\nu) = \frac{1}{(2\pi)^{3/2}}\, \frac{\sigma_1^2}{2\sigma_0^2}\, \nu\, \exp\left(-\frac{\nu^2}{2}\right) ,

which coincides with the expected Euler-characteristic density of the
excursion set above K_{\rm th}. Note how the dependence on the filter
shape and scale enters only through the spectral moments \sigma_0 and
\sigma_1.
Figure 3: Top panels: probability density function (PDF)
measured from the synthetic galaxy catalogue, covering 24.4 square
degrees, analysed with all adopted filters and scales. The
negative part of the PDF is well described by a Gaussian (solid
lines). Bottom panels: the expected effective power spectrum
convolved with the filters, compared with the one measured in the
numerical simulation.
For completeness, we also report the number density estimate for the
classical up-crossing criterion, i.e. imposing Eq. (21) alone, without
the constraint on the second derivative of the field, \zeta_{xx}. It
follows from the same constrained Gaussian integrals with the
curvature left unconstrained and, as Fig. 4 shows, agrees with the
blended criterion at high thresholds but overestimates the counts at
low ones.
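The prediction pipeline reduces to two numerical ingredients: the spectral moments of the filtered field and the closed-form detection density. The sketch below is a minimal illustration under stated assumptions: the Euler-characteristic form of the density for a 2D Gaussian field (normalisation conventions vary in the literature), our function names, and an arbitrary example spectrum.

```python
import math

def spectral_moment(pk, j, kmax=100.0, n=20000):
    """sigma_j^2 = int d^2k/(2 pi)^2 k^(2j) P(k) for an isotropic 2D spectrum P(k);
    here d^2k = 2 pi k dk, so one factor of 2 pi cancels."""
    h = kmax / n
    s = 0.0
    for i in range(1, n + 1):
        k = i * h
        s += k ** (2 * j) * pk(k) * k * h
    return s / (2.0 * math.pi)

def detection_density(nu, sigma0_sq, sigma1_sq):
    """Expected density of detections above nu = K_th / sigma_0 for a 2D
    Gaussian random field (Euler-characteristic form, hedged convention)."""
    return (sigma1_sq / (2.0 * sigma0_sq)) / (2.0 * math.pi) ** 1.5 \
        * nu * math.exp(-0.5 * nu * nu)
```

In practice `pk` would be the effective, filtered power spectrum; the exponential fall-off with the squared threshold is what makes high signal-to-noise spurious detections rare for every filter.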
5 Analytic predictions vs. numerical simulations
We now compare the number counts of detections predicted by our analytic approach with those resulting from the analysis of synthetic galaxy catalogues produced with numerical ray-tracing simulations.
5.1 Numerical simulations
We use a hydrodynamical, numerical N-body simulation carried out
with the code GADGET-2 (Springel 2005). We briefly summarise
its main characteristics here and refer to Borgani et al. (2004) for a more
detailed discussion. The simulation represents a concordance
\LambdaCDM model, with dark-energy, dark-matter and baryon density
parameters as specified in Borgani et al. (2004). The Hubble constant is
H_0 = 100\,h\;{\rm km\,s^{-1}\,Mpc^{-1}} with h = 0.7, and the
linear power spectrum of the matter-density fluctuations is normalised
as in Borgani et al. (2004).
The simulated box is a cube with a side length of
192 h^{-1} Mpc, containing 480^3 dark-matter particles and an equal
number of gas particles. Thus, halos of cluster-scale mass are
resolved into several thousands of particles. The physics of the gas
component includes radiative cooling, star formation and supernova
feedback, assuming zero metallicity.
This simulation is used to construct backward light cones by stacking
the output snapshots from z=1 to z=0. Since the snapshots contain
the same cosmic structures at different evolutionary stages, they are
randomly shifted and rotated to avoid repetitions of the same cosmic
structures along one line-of-sight. The light cone is then sliced into
thick planes, whose particles are subsequently projected with a
triangular-shaped-cloud scheme (TSC, Hockney & Eastwood 1988) onto lens planes
perpendicular to the line-of-sight. We trace a bundle of light rays
through one light cone, starting at the observer and propagating along
directions on a regular grid covering 4.9 degrees on each side. The
effective resolution of this ray-tracing simulation is of the order of
1'. The effective convergence and shear maps obtained from the
ray-tracing simulations are used to lens a background source
population according to Eq. (4). Galaxies are randomly distributed on
the lens plane at z=1 with a constant number density per arcmin^2 and
have intrinsic random ellipticities drawn from the distribution

p(\epsilon_{\rm s}) = \frac{\exp\left(-|\epsilon_{\rm s}|^2/\sigma_\epsilon^2\right)}{\pi \sigma_\epsilon^2 \left[1 - \exp\left(-1/\sigma_\epsilon^2\right)\right]} ,   (30)

where \sigma_\epsilon is the intrinsic ellipticity dispersion.
Synthetic galaxy catalogues produced in this way are finally analysed
with the aperture measure of Eq. (5), evaluated on a regular grid of
positions covering the entire field-of-view of the light cone. All
three filters presented in Sect. 3.2 were used, each with three
different scales: the polynomial filter with scales up to 11', the
hyperbolic-tangent filter with scales up to 20', and the optimal
filter with scale radii of the cluster model up to 4'. These scales
are chosen to sample the angular scales typically used in the
literature.
For a statistical analysis of the weak-lensing detections and their relation to the structures in the numerical simulation, see Pace et al. (2007).
5.2 Accounting for the geometry of surveys: the window function
Our analytic prediction for the number density of detections accounts
for the survey frequency response W(k) discussed in
Sect. 3.1. As already stated, this is a simplified
approach, and the full survey geometry should be
considered (see e.g. Hivon et al. 2002) in case of complex sky
masking, especially if small fields of view are involved. Thus, in our
approach we consider only an effective power spectrum
P_{\rm eff}(k) = P(k)\,|W(k)|^2,
where the frequency response, W(k), is
the product of a high-pass filter suppressing the scales larger than
the light cone's side length L_{\rm f},

W_{\rm hi}(k) = \exp\left(-\frac{k_{\rm f}^2}{k^2}\right) , \qquad k_{\rm f} = \frac{2\pi}{L_{\rm f}}   (31)

(note that k is in the denominator here), a low-pass filter imposed by the average separation d of the background galaxies,

W_{\rm lo}(k) = \exp\left(-k^2 d^2\right) ,   (32)

and a low-pass filter related to the resolution of the grid on which the maps are sampled,

W_{\rm pix}(k) = \frac{2\,J_1(k R)}{k R} ,   (33)

where J_1(x) is the cylindrical Bessel function of order one. The latter function describes a circular step function of radius R = \theta_{\rm pix}/\sqrt{\pi}, covering the same area as a square-shaped pixel of size \theta_{\rm pix}.
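The pixel window in particular needs only the standard library to evaluate. In the sketch below, J1 is computed from its integral representation, and the equal-area radius R = θ_pix/√π is our reading of the text's description of a circular step function covering the same area as a square pixel.

```python
import math

def j1(x):
    """Cylindrical Bessel function of order one via its integral representation,
    J1(x) = (1/pi) * int_0^pi cos(t - x sin t) dt, with the midpoint rule."""
    n = 2000
    h = math.pi / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def w_pixel(k, pix):
    """Low-pass window of a square pixel of side `pix`, modelled as the
    equal-area circular top-hat: W(k) = 2 J1(kR)/(kR) with R = pix/sqrt(pi)."""
    r = pix / math.sqrt(math.pi)
    if k * r == 0.0:
        return 1.0
    return 2.0 * j1(k * r) / (k * r)
```

As expected for a low-pass filter, W approaches unity for wavelengths much larger than the pixel and falls off at high spatial frequencies.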
The agreement of this simple recipe with the numerical simulation is
shown in the bottom panels of Fig. 3, where we
compare the expected effective power spectrum convolved with the
filter with the one measured in the numerical simulation. Apart from
noise at large scales, only small deviations at high frequencies are
visible. Note that when relating the detection threshold to the
signal-to-noise ratio S/N according to the variance given by
Eq. (7), all window functions mentioned are used except for
W_{\rm pix}, which, of course, does not affect the variance.
5.3 Comparison with numerical simulations
Our analytic approach approximates the data as a Gaussian random field, which represents both the noise and the LSS contributions to the weak-lensing signal-to-noise maps very well. In fact, even if the shear and convergence of the LSS show non-Gaussianities (Jain et al. 2000), weak-lensing data are convolved with filters broad enough to render the filtered LSS signal nearly Gaussian. This is not the case, however, for non-linear objects such as galaxy clusters, whose non-Gaussianity survives the filtering process. Thus, particular care has to be taken when comparing the predicted number counts with real or simulated data: either the non-linear structures must be modelled, which is difficult and uncertain, or their contribution must be avoided in the first place. We follow the latter approach by counting the negative instead of the positive peaks found in the convergence maps derived from the galaxy catalogues. In fact, massive halos contribute only positive detections, in contrast to the LSS and the other sources of noise, which produce positive and negative detections equally and with the same statistical properties. Both negative and positive peak counts contain cosmologically relevant information. Apart from noise, the negative peak counts are caused by the linearly evolved LSS, while the difference between positive and negative counts is due to non-linear structures. The mean density of negative peak counts can also be used to statistically correct positive peak counts for the level of spurious detections.
Figure 4: Number of negative peaks detected in the numerical simulation (shaded area) compared to the prediction obtained with the proposed method both with the original up-crossing criterion (dashed line) and with the new blended up-crossing criterion (points with error bars). The standard up-crossing criterion is a good approximation for high signal-to-noise ratios but fails for lower S/N, which are well described by the new version. Error bars represent the Poissonian noise of the number counts of a one square degree survey while the shaded area shows the Poisson noise in our numerical simulation covering 24.4 square degrees. |

A comparison of the original up-crossing criterion with the new
blended up-crossing criterion presented here is shown in
Fig. 4 together with the number counts of negative
peaks obtained from the numerical simulations. Only the result for the
optimal filter with
is shown for clarity. As expected,
the two criteria agree very well for high signal-to-noise ratios since
the detections are mostly of type-0, i.e. with a convex contour, as
shown in the lower left panel of Fig. 2, while the
merging of detections at lower signal-to-noise ratios is correctly
taken into account only by our new criterion.
Figure 5: Number of weak-lensing peaks, shown as a function of the
signal-to-noise ratio, predicted with the analytic method
presented here for the Schneider et al. (1998), poly, the Schirmer et al. (2004),
tanh, and the Maturi et al. (2005), opt, filters from top to bottom, and
increasing filter radii from left to right as labeled in each
panel. The number counts generated by the LSS alone and by the
intrinsic galaxy noise alone are also shown (dashed and dot-dashed
lines, respectively; see Sect. 5.3).
To additionally confirm the assumption that the contributions from both the LSS and the background-galaxy noise can be described by a Gaussian random field after the filtering process, we modelled the positive peak counts as a combination of the peak statistics described in this work (used for the negative peaks) and the halo mass function, the latter accounting for highly non-linear halos, which are responsible for the high signal-to-noise tail and are not captured by the Gaussian-field statistics. The analytic prediction in this case also shows good agreement with the results from the simulation. Detailed information on the method and results will be discussed in future work.
We finally compare the contribution of the LSS and the noise to the total signal by treating them separately. Their number counts are plotted with dashed and dot-dashed lines in Fig. 5. All filters show an unsurprisingly large number of detections caused by the noise up to signal-to-noise ratios of 3 and a number of detections caused by the LSS increasing with the filter scale except for the optimal filter, which always suppresses their contribution to a negligible level. Thus, the LSS contaminates halo catalogues selected by weak lensing up to signal-to-noise ratios of 4-5 if its contribution is ignored in the filter definition. Note that the total number of detections can be obtained only by counting the peaks from the total signal, i.e. LSS plus noise, and not by adding the peaks found in the two components separately, because the blending of peaks is different for the two cases.
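Why peak counts of the two components do not simply add can be seen already in one dimension. The toy sketch below (invented arrays, purely didactic) counts contiguous runs above a threshold: two components each contribute one peak, yet their sum still contains only one detection because the peaks blend.

```python
def count_segments(signal, threshold):
    """Number of contiguous runs above threshold (1D analogue of detections)."""
    n, above = 0, False
    for v in signal:
        if v > threshold and not above:
            n += 1
        above = v > threshold
    return n

# Two components whose separate peaks blend once they are summed:
lss   = [0.0, 2.0, 1.0, 0.0, 0.0]
noise = [0.0, 0.0, 1.0, 2.0, 0.0]
total = [a + b for a, b in zip(lss, noise)]
```

Here `count_segments(lss, 1.5)` and `count_segments(noise, 1.5)` are both 1, but `count_segments(total, 1.5)` is 1 rather than 2: the counts of the summed field are not the sum of the component counts, mirroring the blending effect described above.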
6 Conclusion
We have applied an analytic method for predicting peak counts in weak-lensing surveys, based on the theory of Gaussian random fields (Bardeen et al. 1986). Peaks are typically detected in shear fields after convolving them with filters of different shapes and widths. We have taken these into account by first filtering the assumed Gaussian random field appropriately and then searching for suitably defined peaks. On the way, we have argued for a refinement of the up-crossing criterion for peak detection which avoids biased counts of detections with low signal-to-noise ratio, and implemented it in the analytic peak-count prediction. Peaks in the non-linear tail of the shear distribution are underrepresented in this approach because they are highly non-Gaussian, but our method is well applicable to the prediction of spurious counts, and therefore to the quantification of the background in attempts to measure number densities of dark-matter halos. We have compared our analytic prediction to peak counts in numerically simulated, synthetic shear catalogues and found agreement at the expected level.
Our main results can be summarised as follows:
- The shape and size of the filter applied to the shear field have a large influence on the contamination by spurious detections. For the optimal filter, the contribution by large-scale structures is low on all filter scales, while they typically contribute substantially for other filters. This confirms previous results with a different approach (Pace et al. 2007; Dietrich et al. 2007; Maturi et al. 2005).
- Taken together, large-scale structure and galaxy noise contribute the majority of detections up to signal-to-noise ratios of 3-5. Only above this level do detections caused by real dark-matter halos begin to dominate.
- Shape and shot noise due to the background galaxies cannot be treated separately from the large-scale structure contribution, since the two affect each other in a non-trivial way.
- The optimal filter allows the detection of 30-40 halos per square degree at signal-to-noise ratios high enough to suppress almost all spurious detections. For the other filters, this number is lower by almost an order of magnitude.
This work was supported by the Transregional Collaborative Research Centre TRR 33 (M.M., M.B.) and grant number BA 1369/12-1 of the Deutsche Forschungsgemeinschaft, the Heidelberg Graduate School of Fundamental Physics and the IMPRS for Astronomy & Cosmic Physics at the University of Heidelberg (CA).
Appendix A: Forecast for different weak lensing surveys
For convenience, we evaluate here the expected number density of peak
counts for a collection of present and future weak-lensing surveys
with different intrinsic ellipticity dispersions, \sigma_\epsilon, and
galaxy number densities, n_{\rm g}, per arcmin^2. To give typical
values, we assumed for all of them a square-shaped field of view, a
uniform galaxy number density and no gaps, for two main
reasons. First, their fields-of-view are typically very large and thus
do not affect the frequencies relevant for our evaluation. Second, the
masking of bright objects can be done in many different ways which
cannot be considered in this paper in any detail. Finally, we fixed
the sampling scale, described by Eq. (33), to be 5 times smaller than
the typical filter scale in order to avoid undersampling, i.e. such
that the high-frequency cut-off is imposed by the filters
themselves. For each filter, we used several scales: the polynomial
filter with scale-3 = 11', the hyperbolic-tangent filter with
scale-1 = 5', scale-2 = 10' and scale-3 = 20', the optimal filter with
two scale radii of the cluster model, and a simple Gaussian filter
(Gaussian FWHM) with scale-1 = 1', scale-2 = 2' and scale-3 = 5'. The
results are shown in Table A.1 together with the number counts
obtained with the Gaussian filter, which is usually used together with
the Kaiser & Squires shear-inversion algorithm (Kaiser & Squires 1993).
Table A.1: Expected number counts of peak detections per square degree for different weak-lensing surveys, filters and signal-to-noise ratios cut-off.
References
- Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15
- Borgani, S., Murante, G., Springel, V., et al. 2004, MNRAS, 348, 1078
- Dietrich, J. P., & Hartlap, J. 2010, MNRAS, 402, 1049
- Dietrich, J. P., Erben, T., Lamer, G., et al. 2007, A&A, 470, 821
- Erben, T., van Waerbeke, L., Mellier, Y., et al. 2000, A&A, 355, 23
- Heavens, A., & Peacock, J. 1988, MNRAS, 232, 339
- Hennawi, J. F., & Spergel, D. N. 2005, ApJ, 624, 59
- Hetterscheidt, M., Erben, T., Schneider, P., et al. 2005, A&A, 442, 43
- Hivon, E., Górski, K. M., Netterfield, C. B., et al. 2002, ApJ, 567, 2
- Hockney, R., & Eastwood, J. 1988, Computer Simulation Using Particles (Bristol: Hilger)
- Jain, B., Seljak, U., & White, S. 2000, ApJ, 530, 547
- Kaiser, N., & Squires, G. 1993, ApJ, 404, 441
- Kratochvil, J. M., Haiman, Z., & May, M. 2010, Phys. Rev. D, 81, 043519
- Marian, L., Smith, R. E., & Bernstein, G. M. 2009, ApJ, 698, L33
- Maturi, M., Meneghetti, M., Bartelmann, M., Dolag, K., & Moscardini, L. 2005, A&A, 442, 851
- Maturi, M., Schirmer, M., Meneghetti, M., Bartelmann, M., & Moscardini, L. 2007, A&A, 462, 473
- Pace, F., Maturi, M., Meneghetti, M., et al. 2007, A&A, 471, 731
- Peacock, J., & Dodds, S. 1996, MNRAS, 280, L19
- Polyanin, A. D., & Manzhirov, A. V. 1998, Handbook of Integral Equations (Boca Raton: CRC Press)
- Schirmer, M., Erben, T., Schneider, P., et al. 2003, A&A, 407, 869
- Schirmer, M., Erben, T., Schneider, P., Wolf, C., & Meisenheimer, K. 2004, A&A, 420, 75
- Schneider, M. D., & Bridle, S. 2010, MNRAS, 402, 2127
- Schneider, P. 1996, MNRAS, 283, 837
- Schneider, P., van Waerbeke, L., Jain, B., & Kruse, G. 1998, MNRAS, 296, 873
- Springel, V. 2005, MNRAS, 364, 1105
- van Waerbeke, L. 2000, MNRAS, 313, 524
Copyright ESO 2010