A&A 519, A23 (2010)
DOI: https://doi.org/10.1051/0004-6361/200912866
Section: Cosmology (including clusters of galaxies)
Number of pages: 9
Published online: 08 September 2010

An analytic approach to number counts of weak-lensing peak detections

M. Maturi - C. Angrick - F. Pace - M. Bartelmann

Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany

Received 10 July 2009 / Accepted 15 March 2010

Abstract
We apply an analytic method to predict peak counts in weak-lensing surveys. It is based on the theory of Gaussian random fields and suited to quantifying the level of spurious detections caused by chance projections of large-scale structures, as well as by the shape and shot noise contributed by the background galaxies. A simple analytical recipe is given to compute the signal-to-noise distribution of these detections. We compare our method to peak counts obtained from numerical ray-tracing simulations and find good agreement at the expected level. The number of peak detections depends substantially on the shape and size of the filter applied to the gravitational shear field. We confirm that weak-lensing peak counts are dominated by spurious detections up to signal-to-noise ratios of 3-5, and that most filters yield only a few detections per square degree above this level, while a filter optimised for suppressing large-scale structure noise returns up to an order of magnitude more. Galaxy shape noise and noise from large-scale structures cannot be treated as two independent components, since the two contributions add in a non-trivial way.

Key words: cosmology: theory - large-scale structure of Universe - galaxies: clusters: general - gravitational lensing: weak

1 Introduction

Wide-area surveys for weak gravitational lensing can be and have been used for counting peaks in the shear signal, which are commonly interpreted as the signatures of sufficiently massive dark-matter halos. However, such catalogues are contaminated by spurious detections caused by the chance superposition of large-scale structures, and also by the shape- and shot-noise contributions from the background galaxies used to sample the foreground shear field. As a function of the peak height, what is the contribution of genuine halos to these detections, and how much do the large-scale structure and the other sources of noise contribute? In addition, the number of peaks produced by the large-scale structure constitutes a cosmological signal which can be used as a cosmological probe together with cluster counts. Can we predict this number without expensive numerical simulations?

Given the power of lensing-peak number counts as a cosmological probe (Dietrich & Hartlap 2010; Marian et al. 2009; Kratochvil et al. 2010), we address this question here by applying a suitable analytic approach based on peak counts in Gaussian random fields, as laid out by Bardeen et al. (1986). This extends the work of van Waerbeke (2000), who studied the background-galaxy noise component alone. With respect to the latter work, we give a detection definition more suitable for comparison with observations and include the non-negligible contribution of large-scale structures. This is reasonable even though at least the high peaks are caused by halos in the non-Gaussian tail of the density fluctuations, because the noise and large-scale structure contributions to the filtered weak-lensing maps remain Gaussian, and thus at least their contribution to the counts can be well described analytically. Peaks with the highest signal-to-noise ratios are expected to be more abundant than predicted based on Gaussian random fields.

Weak-lensing data are filtered to derive peak counts from them. Several linear filters have been proposed and used in the literature. They can all be seen as convolutions of the measured shear field with filter functions of different shapes. Many shapes have been proposed for different purposes (Schirmer et al. 2004; Maturi et al. 2005; Schneider et al. 1998). One filter function, called the optimal filter later on, was designed specifically to suppress the contribution from large-scale structures by maximising the signal-to-noise ratio of halo detections against the shear field of the large-scale structure.

We study three such filters here, with the optimal filter among them. Results will differ substantially, arguing for a careful filter choice if halo detections are the main goal of the application. We compare our analytic results to a numerical simulation and show that both agree at the expected level. We begin in Sect. 2 with a brief summary of gravitational lensing as needed here and describe filtering methods in Sect. 3. We present our analytic method in Sect. 4 and compare it to numerical simulations in Sect. 5, where we also show our main results. Conclusions are summarised in Sect. 6. In Appendix A, we show predictions of peak counts and the noise levels in them for several planned and ongoing weak-lensing surveys.

2 Gravitational lensing

Isolated lenses are characterised by their lensing potential

\begin{displaymath}\psi(\vec{\theta}) \equiv \frac{2}{c^2}
\frac{D_{\rm ds}}{D_{\rm d}D_{\rm s}}
\int \Phi(D_{\rm d}\vec{\theta}, z)~{\rm d}z,
\end{displaymath} (1)

where $\Phi$ is the Newtonian gravitational potential and $D_{\rm s}$, $D_{\rm d}$, $D_{\rm ds}$ are the angular-diameter distances between the observer and the source, the observer and the lens, and the lens and the source, respectively. The potential $\psi$ relates the angular positions $\vec\beta$ of the source and $\vec\theta$ of its image on the observer's sky through the lens equation

\begin{displaymath}\vec{\beta}=\vec{\theta}-\vec\nabla\psi.
\end{displaymath} (2)

Since sources such as distant background galaxies are much smaller than the typical scale on which the lens properties change and the angles involved are small, it is possible to linearise Eq. (2) such that the induced image distortion is expressed by the Jacobian

\begin{displaymath}A = (1-\kappa)
\left(
\begin{array}{cc}
1-g_1 & -g_2 \\
-g_2 & 1+g_1 \\
\end{array} \right),
\end{displaymath} (3)

where $\kappa=\nabla^2 \psi/2$ is the convergence responsible for the isotropic magnification of an image relative to its source, and $g(\vec{\theta})=\gamma(\vec{\theta})/[1-\kappa(\vec{\theta})]$ is the reduced shear quantifying its distortion. Here, $\gamma_1=\left(\psi_{,11}-\psi_{,22}\right)/2$ and $\gamma_2=\psi_{,12}$ are the two components of the complex shear. Since the angular size of the source is unknown, only the reduced shear can be estimated starting from the observed ellipticity of the background sources,

\begin{displaymath}\epsilon=\frac{\epsilon_{\rm s}+g}{1+g^*\epsilon_{\rm s}},
\end{displaymath} (4)

where $\epsilon_{\rm s}$ is the intrinsic ellipticity of the source and the asterisk denotes complex conjugation.
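The mapping of Eq. (4) is easy to explore numerically. The following sketch (function name and toy values are ours, not from the paper) treats ellipticity and reduced shear as complex numbers; it also illustrates the property exploited below, namely that averaging over randomly oriented intrinsic ellipticities recovers the reduced shear g.

```python
import numpy as np

def lensed_ellipticity(eps_s, g):
    """Eq. (4): observed ellipticity from the intrinsic ellipticity eps_s
    and the reduced shear g (both complex, |g| < 1)."""
    return (eps_s + g) / (1.0 + np.conjugate(g) * eps_s)

# An intrinsically round source shows the reduced shear directly.
print(lensed_ellipticity(0j, 0.1 + 0.05j))

# Averaging over isotropically oriented intrinsic ellipticities of fixed
# modulus recovers g, which is the basis of weak-lensing estimators.
rng = np.random.default_rng(42)
phi = rng.uniform(0.0, 2.0 * np.pi, 200000)
eps_s = 0.25 * np.exp(1j * phi)
print(np.mean(lensed_ellipticity(eps_s, 0.1 + 0.05j)))
```

The second average converges to g because all terms carrying a net phase of $\epsilon_{\rm s}$ vanish upon isotropic averaging.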

3 Measuring weak gravitational lensing

3.1 Weak lensing estimates

In the absence of intrinsic alignments between background galaxies due to possible tidal interactions (Heavens & Peacock 1988; Schneider & Bridle 2010), the intrinsic source ellipticities in Eq. (4) average to zero in a sufficiently large source sample. An appropriate and convenient measure for the lensing signal on circular apertures is the weighted average over the tangential component of the shear $\gamma_{\rm t}$ relative to the position $\vec\theta$ on the sky,

\begin{displaymath}\tilde \Gamma(\vec\theta)=\int{\rm d}^2\theta^\prime\,\gamma_{\rm t}(\vec{\theta}^\prime)\,
Q(\vert\vec{\theta}^\prime-\vec{\theta}\vert)\, W(\vec{\theta}^\prime).
\end{displaymath} (5)

The filter function Q determines the statistical properties of the quantity $\tilde \Gamma$ and W describes the survey geometry. We shall consider three filter functions here which will be described in Sect. 3.2.

Data on gravitational lensing by a mass concentration can be modelled by a signal $s(\vec \theta)=\Gamma\tau(\vec{\theta})$, described by its amplitude $\Gamma$ and its radial profile $\tau$, plus a noise component $n(\vec{\theta})$ with zero mean, i.e.

\begin{displaymath}\gamma_{\rm t}(\vec{\theta})=\Gamma\tau(\vec{\theta})+n(\vec{\theta})
\end{displaymath} (6)

for the tangential shear. The variance of $\tilde \Gamma$ in (5) is

\begin{displaymath}\sigma^2_{\tilde \Gamma}=\int\frac{k~{\rm d} k}{2\pi}\tilde{P_{\rm g}}(k)
\vert\hat Q(\vec{k})\vert^2,
\end{displaymath} (7)

where $\hat Q(\vec{k})$ is the Fourier transform of the filter Q and $\tilde P_{\rm g}(k)=\hat{W}^2(k)P_{\rm g}(k)$ is the effective power spectrum of the noise component, i.e. the intrinsic noise power spectrum convolved with a window function $\hat{W}(k)$ representing the frequency response of the survey. In our application, the latter is a band-pass filter accounting for the finite field of view of the survey (high-pass component) and the average galaxy separation (low-pass component); see Sect. 5.2 for its explicit expression. The contribution from cosmic variance is not included in this definition since it is negligibly small. For complex sky coverage, and especially for small fields of view, this approximation would not hold, and a general treatment accounting for the full geometry $\hat{W}(\vec{k})$ must be adopted (see e.g. Hivon et al. 2002).

In practical applications, $\tilde \Gamma$ is approximated by

\begin{displaymath}\tilde \Gamma(\vec\theta)=\frac{1}{n}\sum_i\epsilon_{{\rm t}i}(\vec{\theta})\,
Q(\vert\vec{\theta}_i-\vec{\theta}\vert),
\end{displaymath} (8)

where $\epsilon_{{\rm t}i}(\vec{\theta})$ is the tangential ellipticity with respect to $\vec\theta$ of a galaxy located at the position $\vec\theta_i$, which provides an estimate for  $\gamma_{\rm t}$. Note that in our application we consider linear structures only and therefore the weak lensing approximation is always satisfied, i.e. $g\approx \gamma$.
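For illustration, the discrete estimator of Eq. (8) can be sketched as follows. We use the polynomial filter of Eq. (9) with unit scale radius (it integrates to unity over the aperture) and a toy catalogue carrying a constant tangential ellipticity, which the estimator should then recover; the setup and all names are ours.

```python
import numpy as np

def q_poly(x):
    """Polynomial filter of Eq. (9), unit scale radius."""
    return np.where(x < 1.0, 6.0 * x**2 * (1.0 - x**2) / np.pi, 0.0)

def aperture_mass(theta, pos, eps, q, n_density):
    """Discrete estimator of Eq. (8) at position theta.
    pos: (N, 2) galaxy positions; eps: complex ellipticities;
    n_density: galaxy number density per unit area."""
    d = pos - theta
    r = np.hypot(d[:, 0], d[:, 1])
    phi = np.arctan2(d[:, 1], d[:, 0])
    eps_t = -np.real(eps * np.exp(-2j * phi))   # tangential component
    return np.sum(eps_t * q(r)) / n_density

# Toy check: a purely tangential pattern of strength 0.3 around the
# origin should be recovered, since q_poly integrates to unity.
rng = np.random.default_rng(1)
n_density = 5000.0                       # 20000 galaxies on area 4
pos = rng.uniform(-1.0, 1.0, size=(20000, 2))
phi = np.arctan2(pos[:, 1], pos[:, 0])
eps = -0.3 * np.exp(2j * phi)            # eps_t = 0.3 everywhere
gamma_est = aperture_mass(np.zeros(2), pos, eps, q_poly, n_density)
print(gamma_est)
```

Dividing the sum by the number density, as in Eq. (8), makes the sum approximate the integral of Eq. (5) for uniformly distributed galaxies.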

3.2 Weak lensing filters

Figure 1:

Overview of different weak-lensing filters. The left panel shows the three filters adopted here to be used on shear catalogues, while the central and right panels show the corresponding filters to be used on convergence fields in real and Fourier space, respectively. For illustration only, the spatial frequencies in the right panel are rescaled such that the main filter peaks coincide.


Different filter profiles have been proposed in the literature depending on their specific application in weak lensing. We adopt three of them here which have been used so far to identify halo candidates through weak lensing.

(1)
The polynomial filter described by Schneider et al. (1998),

\begin{displaymath}Q_{\rm poly}(x)=\frac{6x^2}{\pi \theta_{\rm s}^2}\left(1-x^2\right)
{\rm H}(1-x),
\end{displaymath} (9)

where the projected angular distance from the filter centre, $x=\theta/\theta_{\rm s}$, is expressed in units of the filter scale radius, $\theta_{\rm s}$, and H is the Heaviside step function. This filter was originally proposed for cosmic-shear analysis, but several authors have used it also for dark-matter halo searches (see e.g. Erben et al. 2000; Schirmer et al. 2004).

(2)
A filter optimised for halos with NFW density profile, approximating their shear signal with a hyperbolic tangent (Schirmer et al. 2004),

\begin{displaymath}Q_{\rm tanh}(x)=\left(1+{\rm e}^{a-bx}+{\rm e}^{cx-d}\right)^{-1}
\tanh (x/x_{\rm c}),
\end{displaymath} (10)

where the two exponentials in parentheses are cut-offs imposed at small and large radii (a=6, b=150, c=50, and d=47) and $x_{\rm c}$ is a parameter defining the filter-profile slope. A good choice for the latter is $x_{\rm c}=0.1$, as empirically shown by Hetterscheidt et al. (2005).

(3)
The optimal linear filter introduced by Maturi et al. (2005) which, together with the optimisation with respect to the expected halo-lensing signal, optimally suppresses the contamination due to the line-of-sight projection of large-scale structures (LSS),

\begin{displaymath}\hat Q_{\rm opt}(\vec k) =
\alpha\,\frac{\hat\tau(\vec{k})}{P_{\rm f}(k)},
\qquad
\alpha^{-1}=\int{\rm d}^2k\,
\frac{\left\vert\hat\tau (\vec{k})\right\vert^2}{P_{\rm f}(k)}\cdot
\end{displaymath} (11)

Here, $\hat\tau(\vec k)$ is the Fourier transform of the expected shear profile of the halo and $P_{\rm f}(k)=P_{\rm g}+P_{\rm lin}(k)$ is the complete noise power spectrum, including the linearly evolved LSS through $P_{\rm lin}$ as well as the noise contributions from the intrinsic source ellipticities and the shot noise through $P_{\rm g}=\sigma_\epsilon^2/(2n_{\rm g})$, given their angular number density $n_{\rm g}$ and the intrinsic ellipticity dispersion $\sigma_\epsilon$. Note that for the filter construction we use the linear LSS power spectrum instead of the non-linear one. This implicitly defines a halo, since we attribute the difference between the linear and non-linear power spectra entirely to halo formation. This filter depends on parameters determined by physical quantities such as the halo mass and redshift, the galaxy number density, and the intrinsic ellipticity dispersion, and not on an arbitrarily chosen scale which has to be determined empirically through costly numerical simulations (e.g. Hennawi & Spergel 2005). An application of this filter to the GaBoDS survey (Schirmer et al. 2003) was presented in Maturi et al. (2007), while a detailed comparison of these three filters was performed by Pace et al. (2007) by means of numerical ray-tracing simulations. They found that the optimal linear filter given by Eq. (11) returns the halo sample with the largest completeness ($100\%$ for masses $M\geq 3\times 10^{14}~h^{-1}~M_\odot$ and $\sim$50% for masses $M\sim 2\times 10^{14}~h^{-1}~M_\odot$ for sources at $z_{\rm s}=1$) and the lowest number of spurious detections caused by the LSS ($\leq$10% for a signal-to-noise threshold of ${\it S/N}\sim 5$).
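To make the comparison of the three filters concrete, they can be sketched numerically as below. The spectra are toy stand-ins of our own (not the paper's), and the last part verifies the defining property of the optimal filter of Eq. (11): among linear filters, $\hat Q\propto\hat\tau/P_{\rm f}$ maximises the signal-to-noise ratio, here written with the isotropic measure $k\,{\rm d}k$.

```python
import numpy as np

def q_poly(x):
    """Polynomial filter, Eq. (9), unit scale radius."""
    return np.where(x < 1.0, 6.0 * x**2 * (1.0 - x**2) / np.pi, 0.0)

def q_tanh(x, xc=0.1, a=6.0, b=150.0, c=50.0, d=47.0):
    """Hyperbolic-tangent filter, Eq. (10), with the quoted cut-offs."""
    return np.tanh(x / xc) / (1.0 + np.exp(a - b * x) + np.exp(c * x - d))

# Optimal filter, Eq. (11), on a toy 1-D grid: tau_hat mimics a smooth
# halo signal, p_f a shot-noise plus LSS-like spectrum (illustrative).
k = np.linspace(0.1, 10.0, 500)
dk = k[1] - k[0]
tau_hat = np.exp(-k)
p_f = 0.05 + 1.0 / (1.0 + k**3)

def snr(q_hat):
    """S/N of a linear filter: S = int q tau k dk, N^2 = int q^2 P_f k dk."""
    signal = np.sum(q_hat * tau_hat * k) * dk
    noise2 = np.sum(q_hat**2 * p_f * k) * dk
    return signal / np.sqrt(noise2)

q_opt = tau_hat / p_f            # Eq. (11), up to the normalisation alpha
print(snr(q_opt), snr(tau_hat), snr(np.ones_like(k)))
```

By the Cauchy-Schwarz inequality, `q_opt` yields the largest S/N of any filter shape on this grid, and the S/N is invariant under rescaling of the filter, which is why the normalisation $\alpha$ does not affect the detection significance.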

3.3 Weak lensing estimates and convergence

In order to simplify comparisons with numerical simulations, we convert the quantity $\tilde \Gamma$ from Eq. (5) to a quantity involving the convergence,

$\displaystyle \tilde \Gamma(\vec\theta) = \int{\rm d}^2\theta'\,\kappa(\vec\theta')\,
U(\vert\vec\theta'-\vec\theta\vert),$   (12)

where U is related to Q by

\begin{displaymath}
Q(\theta) =
\frac{2}{\theta^2}\int_0^{\theta}{\rm d}\theta'\theta'U(\theta')-U(\theta)
\end{displaymath} (13)

(Schneider 1996) if the weight function $U(\theta)$ is defined to be compensated, i.e.

\begin{displaymath}\int{\rm d}\theta'\theta'U(\theta')=0.
\end{displaymath} (14)

Equation (13) has the form of a Volterra integral equation of the first kind, which can be solved for U once Q is specified. If $\lim_{x\rightarrow 0} Q(x)/x$ is finite, the solution is

\begin{displaymath}
U(\theta)=-Q(\theta)-\int_0^\theta {\rm d}\theta^\prime
\frac{2}{\theta^\prime}Q(\theta^\prime),
\end{displaymath} (15)

(Polyanin & Manzhirov 1998), which can be evaluated analytically for the polynomial filter,

\begin{displaymath}U_{\rm poly}(x)=\frac{9}{\pi \theta_{\rm s}^2}\left(1-x^2\right)
\left(\frac{1}{3}-x^2\right)
{\rm H}\left(1-x\right),
\end{displaymath} (16)

and numerically for the hyperbolic-tangent filter of Eq. (10) with an efficient recursive scheme over the desired radii $\theta$. If $\lim_{x\rightarrow 0} Q(x)/x=\infty$ as in the case of the optimal filter, Eq. (15) can be solved by introducing an exponential cut-off at small radii to avoid the divergence. The correct solution is obtained if the cut-off scale is close to the mean separation between the background galaxies, so that no information is lost. Alternatively, Eq. (13) can be solved iteratively with respect to Q by

\begin{displaymath}U_0(\theta)=-Q(\theta),
\quad
U_n(\theta)=-Q(\theta)+\frac{2}{\theta^2}
\int_0^\theta {\rm d}\theta^\prime\,\theta^\prime\, U_{n-1}(\theta^\prime).
\end{displaymath} (17)

The iterative procedure is stopped once the difference $U_n(\theta)-U_{n-1}(\theta)$ is sufficiently small. After $U(\theta)$ has been found, an appropriate constant c has to be added to satisfy the compensation requirement, Eq. (14). It is given by

\begin{displaymath}
c=-\frac{2}{\theta_{\rm max}^2}\int\limits_0^{\theta_{\rm max}}{\rm d}\theta^\prime
\theta^\prime U(\theta^\prime).
\end{displaymath} (18)
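As a consistency check of Eqs. (15), (16) and (18), the chain from Q to U can be carried out numerically for the polynomial filter with unit scale radius; the quadrature below is our own sketch. Equation (15) applies here since $Q_{\rm poly}(x)/x\rightarrow 0$ for $x\rightarrow 0$.

```python
import numpy as np

def q_poly(x):
    """Eq. (9), unit scale radius."""
    return np.where(x < 1.0, 6.0 * x**2 * (1.0 - x**2) / np.pi, 0.0)

def u_poly(x):
    """Analytic solution, Eq. (16), unit scale radius."""
    return np.where(x < 1.0,
                    9.0 * (1.0 - x**2) * (1.0 / 3.0 - x**2) / np.pi, 0.0)

# Eq. (15): U = -Q - int_0^theta (2/theta') Q(theta') dtheta',
# evaluated with a cumulative trapezoidal rule on a fine grid.
x = np.linspace(1e-6, 1.0, 20001)
integrand = 2.0 * q_poly(x) / x
cumint = np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
u = -q_poly(x) - cumint

# Eq. (18): compensation constant for theta_max = 1.
c = -2.0 * np.sum(0.5 * ((x * u)[1:] + (x * u)[:-1]) * np.diff(x))
print(c, np.max(np.abs(u + c - u_poly(x))))
```

The numerically compensated solution u + c agrees with the closed form of Eq. (16) to the accuracy of the quadrature, and c approaches $3/\pi$.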

We show in Fig. 1 the resulting filter profiles to be used on shear catalogues through Eq. (5), together with their corresponding variants to be used on convergence fields with Eq. (12), both in real and in Fourier space. All of them are band-pass filters, and the two designed for halo searches have larger amplitudes than the polynomial filter of Schneider et al. (1998) at the high frequencies where the halo signal is most significant. This feature is particularly prominent for the optimal filter, which is additionally negative at low frequencies, where the LSS signal dominates. These two features ensure the minimisation of the LSS contamination in halo searches.

4 Predicting weak lensing peak counts

Our analytic predictions for the number counts of weak-lensing detections as a function of their signal-to-noise ratio are based on modelling the analysed and filtered lensing data, resulting from Eq. (12), as an isotropic and homogeneous Gaussian random field. This is an extremely good approximation for the noise and the LSS components, but not necessarily for the non-linear structures such as sufficiently massive halos, as we shall discuss in Sect. 5.3.

4.1 Statistics of Gaussian random fields

An n-dimensional random field $F(\vec{r})$ assigns a set of random numbers to each point $\vec r$ in an n-dimensional space. A joint probability function can be defined for m arbitrary points $\vec{r}_j$ as the probability to have field values between $F(\vec{r}_j)$ and $F(\vec{r}_j)+{\rm d}F(\vec{r}_j)$, with $j=1,\ldots,m$. For Gaussian random fields, the field itself, its derivatives, integrals and any linear combination thereof are Gaussian random variables, which we denote by $y_i$ with mean values $\langle y_i\rangle$ and central deviations $\Delta y_i:=y_i-\langle y_i\rangle$, with $i=1,\ldots,p$. Their joint probability function is a multivariate Gaussian,

\begin{displaymath}\mathcal{P}(y_1,\ldots,y_p)~{\rm d}y_1\ldots{\rm d}y_p=
\frac{1}{(2\pi)^{p/2}\sqrt{\det\tens{\mathcal{M}}}}~
{\rm e}^{-\mathcal{Q}}~{\rm d}y_1\ldots{\rm d}y_p
\end{displaymath} (19)

with the quadratic form

\begin{displaymath}\mathcal{Q}:=\frac{1}{2}\sum_{i,j=1}^{p}\Delta y_i
\left(\tens{\mathcal{M}}^{-1}\right)_{ij}
\Delta y_j,
\end{displaymath} (20)

where $\tens{\mathcal{M}}$ is the covariance matrix with elements $\mathcal{M}_{ij}:=\langle\Delta y_i\Delta y_j\rangle$. All statistical properties of a homogeneous Gaussian random field with zero mean are fully characterised by the two-point correlation function $\xi(\vec{r}_1,\vec{r}_2)=\xi(\vert\vec{r}_1-\vec{r}_2\vert):=\langle F(\vec{r}_1)F(\vec{r}_2)\rangle$ or, equivalently, by its Fourier transform, the power spectrum P(k). In our case, this is the sum of the power spectrum of the convergence due to linearly evolved structures, $P_{\rm LSS}(k)$, and of the observational noise, $P_{\rm g}(k)$, caused by the galaxies.

Since we are interested in gravitational-lensing quantities such as the convergence $\kappa$, we here consider two-dimensional Gaussian random fields only, with $\vec{r} \coloneqq \vec{\theta}$. We adopt the formalism of Bardeen et al. (1986), where $F=\kappa$, $\eta_i=\partial_iF$ and $\zeta_{ij}=\partial_i\partial_jF$ denote the convergence field and its first and second derivatives, respectively.

4.2 Definition of detections: a new up-crossing criterion

We define as a detection any contiguous area of the field $\kappa$ which exceeds a given threshold, $\kappa_{\rm th}={\it S/N}\cdot\sigma_{\tilde{\Gamma}}$, determined by the required signal-to-noise ratio, S/N, and the rms $\sigma_{\tilde{\Gamma}}$ of the quantity $\tilde \Gamma$ (see Eq. (7)). This definition is widely used in surveys for galaxy clusters or peak counts in weak-lensing surveys and can easily be applied both to real data and to Gaussian random fields.

Each detection is delimited by its contour at the threshold level $\kappa_{\rm th}$. If this contour is convex, it has a single point $\vec{\theta}_{\rm up}$, called up-crossing point, where the field is rising along the x-axis direction only, i.e. where the field gradient has one vanishing and one positive component (see the sketch for type-0 detections in the lower panel of Fig. 2),

\begin{displaymath}F(\vec{\theta}_{\rm up})=\kappa_{\rm th},
\quad
\eta_1(\vec{\theta}_{\rm up})>0,
\quad
\eta_2(\vec{\theta}_{\rm up})=0.
\end{displaymath} (21)

Since we assume $\kappa$ to be a homogeneous and isotropic random field, the orientation of the coordinate frame is arbitrary and irrelevant. The conditions expressed by Eq. (21) define the so-called up-crossing criterion, which allows us to identify the detections and to derive their statistical properties, such as their number counts, by associating their definition with the Gaussian random field variables F, $\eta_1$ and $\eta_2$.

However, this criterion is prone to fail for low thresholds, where detections tend to merge and the isocontours tend to deviate from the assumed convex shape. This causes detection numbers to be overestimated at low cut-offs, because each ``peninsula'' and ``bay'' of their contour (see type-1 in Fig. 2) would be counted as a separate detection. We solve this problem by dividing the up-crossing points into those with negative (red circles) and those with positive (blue squares) curvature, $\zeta _{22}<0$ and $\zeta _{22}>0$ respectively. For each detection, the difference between the two is exactly one (type-1), providing the correct number count. The only exception are detections containing one or more ``lagoons'' (type-2), since each of them decreases the detection count by one. Since this case is rare and occurs only at very low cut-off levels, we do not consider it here.
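The counting problem can be visualised with a toy convergence map. In the sketch below (ours, using scipy), a detection is a connected component above the threshold, mimicking the contiguous-area definition of Sect. 4.2; lowering the threshold below the saddle between two peaks merges them into a single detection.

```python
import numpy as np
from scipy import ndimage

# Toy convergence map: two Gaussian "halos" 60 pixels apart.
y, x = np.mgrid[0:200, 0:200]

def bump(x0, y0, width=15.0):
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * width**2))

kappa = bump(70, 100) + bump(130, 100)

def n_detections(field, kappa_th):
    """Count contiguous regions above the threshold (Sect. 4.2)."""
    _, num = ndimage.label(field > kappa_th)
    return num

print(n_detections(kappa, 0.8))   # high threshold: the peaks are separate
print(n_detections(kappa, 0.1))   # below the saddle: they blend into one
```

The saddle between the two bumps lies at roughly $2e^{-2}\approx0.27$ here, so any threshold below that value counts a single merged detection.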

Figure 2:

Weak lensing detection maps. The top four panels show the segmentation of a realistic weak-lensing S/N map for increasing thresholds: 0.1, 0.5, 1, and 2, respectively. The bottom panel sketches the three discussed detection types together with the points identified by the standard and the modified up-crossing criteria. Red circles and blue squares correspond to up-crossing points for which the second field derivatives are $\zeta _{22}<0$ and $\zeta _{22}>0$, respectively.


4.3 The number density of detections

Once the relation between the detections and the Gaussian random variables $\vec{y}=(\kappa,\eta_1,\eta_2,\zeta_{22})$, with the constraints from Eq. (21) together with $\zeta _{22}<0$ or $\zeta _{22}>0$, is defined, we can describe their statistical properties through the multivariate Gaussian probability distribution given by Eq. (19) with the covariance matrix

\begin{displaymath}\tens{\mathcal{M}}=\left(\begin{array}{cccc}
\sigma_0^2 & 0 & 0 & -\sigma_1^2/2 \\
0 & \sigma_1^2/2 & 0 & 0 \\
0 & 0 & \sigma_1^2/2 & 0 \\
-\sigma_1^2/2 & 0 & 0 & 3\sigma_2^2/8 \\
\end{array} \right),
\end{displaymath} (22)

as given by van Waerbeke (2000). Here, the $\sigma_j$ are the moments of the power spectrum P(k),

\begin{displaymath}\sigma_j^2=\int\frac{k^{2j+1}~{\rm d}k}{2\pi}P(k)\hat{W}^2(k)\vert\hat{Q}(k)\vert^2,
\end{displaymath} (23)

where $P(k)=P_{{\rm LSS}}+P_{\rm g}$ is the non-linear power spectrum of the matter fluctuations (Peacock & Dodds 1996) combined with the noise contribution by the background galaxies, $\hat{W}(k)$ is the survey frequency response (see Sect. 5.2 for its explicit expression), and $\hat Q(k)$ is the Fourier transform of the filter adopted for the weak lensing analysis (see Sect. 3.2). The determinant of $\tens{\mathcal{M}}$ is $(3\sigma_0^2\sigma_1^4\sigma_2^2-2\sigma_1^8)/32$ and Eq. (20) can explicitly be written as

\begin{displaymath}\mathcal{Q}=\frac{1}{2}
\left(
\frac{2\vec{\eta}^2}{\sigma_1^2}+
\frac{8\sigma_0^2\zeta_{22}^2+8\sigma_1^2\kappa\zeta_{22}+3\kappa^2\sigma_2^2}{3\sigma_0^2\sigma_2^2-2\sigma_1^4}
\right)\cdot
\end{displaymath} (24)
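The algebra connecting Eqs. (20), (22) and (24) is easy to verify numerically; the $\sigma_j$ values below are arbitrary but chosen such that $3\sigma_0^2\sigma_2^2>2\sigma_1^4$, which keeps the matrix positive definite.

```python
import numpy as np

s0, s1, s2 = 0.8, 1.3, 2.9    # arbitrary moments with 3*s0^2*s2^2 > 2*s1^4

# Covariance matrix of (kappa, eta1, eta2, zeta22), Eq. (22):
M = np.array([[ s0**2,     0.0,       0.0,      -s1**2 / 2  ],
              [ 0.0,       s1**2 / 2, 0.0,       0.0        ],
              [ 0.0,       0.0,       s1**2 / 2, 0.0        ],
              [-s1**2 / 2, 0.0,       0.0,       3 * s2**2 / 8]])

# Determinant quoted in the text:
det_analytic = (3 * s0**2 * s1**4 * s2**2 - 2 * s1**8) / 32.0
print(np.linalg.det(M), det_analytic)

# Quadratic form: Eq. (24) versus the direct evaluation of Eq. (20).
kappa, eta1, eta2, z22 = 0.5, -0.2, 0.3, -0.7
yv = np.array([kappa, eta1, eta2, z22])
Q_direct = 0.5 * yv @ np.linalg.inv(M) @ yv
Q_eq24 = 0.5 * (2 * (eta1**2 + eta2**2) / s1**2
                + (8 * s0**2 * z22**2 + 8 * s1**2 * kappa * z22
                   + 3 * kappa**2 * s2**2) / (3 * s0**2 * s2**2 - 2 * s1**4))
print(Q_direct, Q_eq24)
```

Both the determinant and the quadratic form agree with the direct linear-algebra evaluation to machine precision.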

Both $\kappa$ and $\eta_2$ can be expanded into Taylor series around the points $\vec{\theta}_{\rm up}$ where the up-crossing conditions are fulfilled,

\begin{displaymath}\kappa(\vec{\theta})\approx
\kappa_{\rm th}+
\sum_{i=1}^2 \eta_i(\vec{\theta}-\vec{\theta}_{\rm up})_i,
\qquad
\eta_2(\vec{\theta})\approx
\sum_{i=1}^2\zeta_{2i}(\vec{\theta}-\vec{\theta}_{\rm up})_i,
\end{displaymath} (25)

so that the infinitesimal volume element ${\rm d}\kappa\,{\rm d}\eta_2$ can be written as ${\rm d}\kappa\,{\rm d}\eta_2=\vert\det \tens{J}\vert\,{\rm d}^2r$, where $\tens{J}$ is the Jacobian matrix,

\begin{displaymath}\tens{J}=\left(\begin{array}{cc}
\partial\kappa/\partial x_1 & \partial\kappa/\partial x_2 \\
\partial\eta_2/\partial x_1 & \partial\eta_2/\partial x_2 \\
\end{array}\right)
=\left(\begin{array}{cc}
\eta_1 & \eta_2 \\
\zeta_{21} & \zeta_{22} \\
\end{array}\right)
\end{displaymath} (26)

and $\vert\det\tens{J}\vert=\vert\eta_1\zeta_{22}\vert$ since $\eta_2=0$. The number densities of up-crossing points at the threshold $\kappa_{\rm th}$ with $\zeta _{22}<0$ and $\zeta _{22}>0$, n- and n+ respectively, can thus be evaluated as

\begin{displaymath}n^{\mp}(\kappa_{\rm th})=
\mp
\int\limits_0^\infty {\rm d}\eta_1
\int\limits_{\zeta_{22}\lessgtr 0} {\rm d}\zeta_{22}~
\eta_1\zeta_{22}~
\mathcal{P}\left(
\kappa=\kappa_{\rm th},\eta_2=0,\eta_1,\zeta_{22}
\right)\!,
\end{displaymath} (27)

where $\mathcal{P}(\kappa,\eta_1,\eta_2,\zeta_{22})$ is the multivariate Gaussian defined by Eq. (19) with p=4, the covariance matrix (22), and the quadratic form (24). Both expressions can be integrated analytically and their difference, $n_{\rm det}(\kappa_{\rm th})=n^{-}(\kappa_{\rm th})-n^{+}(\kappa_{\rm th})$, as explained in Sect. 4.2, returns the number density of detections $n_{\rm det}$ above the threshold $\kappa_{\rm th}$,

\begin{displaymath}n_{\rm det}(\kappa_{\rm th})=
\frac{1}{4\sqrt{2}\pi^{3/2}}
\left(\frac{\sigma_1}{\sigma_0}\right)^2
\frac{\kappa_{\rm th}}{\sigma_0}
\exp
\left(
-\frac{\kappa_{\rm th}^2}{2\sigma_0^2}
\right)\cdot
\end{displaymath} (28)

Note how the dependence on $\sigma_2$ drops out of the difference n--n+, leading to a very simple result. This equation is much less complex than Eqs. (41), (42) by van Waerbeke (2000). It returns the number of detection contours rather than the number of peaks.
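Equation (28) is straightforward to evaluate. The $\sigma_0$, $\sigma_1$ values below are illustrative only and not taken from the paper; $\sigma_1/\sigma_0$ plays the role of an inverse coherence scale of the filtered map, so the counts come out per corresponding solid-angle unit.

```python
import numpy as np

def n_det(kappa_th, s0, s1):
    """Eq. (28): number density of detections above kappa_th."""
    return ((s1 / s0)**2 * (kappa_th / s0)
            * np.exp(-kappa_th**2 / (2.0 * s0**2))
            / (4.0 * np.sqrt(2.0) * np.pi**1.5))

s0 = 0.02            # rms of the filtered map (illustrative)
s1 = 600.0 * s0      # sigma_1, an inverse coherence scale (illustrative)
for nu in (1.0, 3.0, 5.0):
    print(nu, n_det(nu * s0, s0, s1))
```

As expected from the factor $\kappa_{\rm th}\exp(-\kappa_{\rm th}^2/2\sigma_0^2)$, the counts peak at $\kappa_{\rm th}=\sigma_0$ (S/N = 1) and fall off steeply at high thresholds.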

Figure 3:

Top panels: probability density function (PDF) measured from the synthetic galaxy catalogue, covering 24.4 square degrees, analysed with all adopted filters and scales. The negative part of the PDF is well described by a Gaussian (solid lines). The 3-$\sigma $ error bars related to the Poissonian uncertainty are shown. This shows how weak-lensing signal-to-noise maps can be modelled as Gaussian random fields. Bottom panels: a similar comparison between the measured power spectrum and the predicted one, based on the expected combined large-scale structure and noise power spectra convolved with the weak-lensing filter and the frequency response of the survey. For clarity, we only show the results for the intermediate scales.


For completeness, we also report the number density estimate for the classical up-crossing criterion, i.e. Eq. (21) alone, where the constraint on the second derivative of the field, $\zeta_{22}$, is not used:

$\displaystyle n_{\rm up}(\kappa_{\rm th}) = \frac{1}{4\sqrt{2}\pi^{3/2}}
\left(\frac{\sigma_1}{\sigma_0}\right)^2
\frac{\kappa_{\rm th}}{\sigma_0}
\exp\left(-\frac{\kappa_{\rm th}^2}{2\sigma_0^2}\right)
\times\frac{1}{2}\left[
1+{\rm erf}\left(\frac{\kappa_{\rm th}\sigma_1^2}{\sigma_0\gamma}\right)
+\frac{\sigma_0\gamma}{\sqrt{\pi}\,\sigma_1^2\kappa_{\rm th}}
\exp\left(-\frac{\kappa_{\rm th}^2\sigma_1^4}{\sigma_0^2\gamma^2}\right)
\right],$   (29)

with $\gamma:=\sqrt{3\sigma_0^2\sigma_2^2-2\sigma_1^4}$. This number density converges to the correct value $n_{\rm det}$ for $\kappa_{\rm th}\rightarrow\infty$, i.e. large thresholds, because ${\rm erf}(x)\rightarrow1$ and $\exp(-x^2)/x\rightarrow0$ for $x\rightarrow\infty$. This reflects the fact that, for large thresholds, the detection contours become fully convex and any issues with more complex shapes disappear.
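The stated convergence of the classical up-crossing density, Eq. (29), to $n_{\rm det}$ of Eq. (28) can be checked directly; the spectral moments below are arbitrary values of our own choosing (satisfying $3\sigma_0^2\sigma_2^2>2\sigma_1^4$).

```python
from math import erf, exp, pi, sqrt

def n_det(k, s0, s1):
    """Eq. (28)."""
    return ((s1 / s0)**2 * (k / s0) * exp(-k**2 / (2 * s0**2))
            / (4 * sqrt(2) * pi**1.5))

def n_up(k, s0, s1, s2):
    """Eq. (29): the classical up-crossing density."""
    g = sqrt(3 * s0**2 * s2**2 - 2 * s1**4)
    bracket = 0.5 * (1.0 + erf(k * s1**2 / (s0 * g))
                     + s0 * g / (sqrt(pi) * s1**2 * k)
                       * exp(-k**2 * s1**4 / (s0**2 * g**2)))
    return n_det(k, s0, s1) * bracket

s0, s1, s2 = 1.0, 1.2, 2.0    # arbitrary spectral moments
for k in (1.0, 3.0, 6.0):
    print(k, n_up(k, s0, s1, s2) / n_det(k, s0, s1))
```

At low thresholds the ratio exceeds unity, reflecting the over-counting by the classical criterion, while for large thresholds it approaches one.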

5 Analytic predictions vs. numerical simulations

We now compare the number counts of detections predicted by our analytic approach with those resulting from the analysis of synthetic galaxy catalogues produced with numerical ray-tracing simulations.

5.1 Numerical simulations

We use a hydrodynamical, numerical N-body simulation carried out with the code GADGET-2 (Springel 2005). We briefly summarise its main characteristics here and refer to Borgani et al. (2004) for a more detailed discussion. The simulation represents a concordance $\Lambda$CDM model, with dark-energy, dark-matter and baryon density parameters $\Omega_{\Lambda}=0.7$, $\Omega_{\rm m}=0.3$ and $\Omega_{\rm b}=0.04$, respectively. The Hubble constant is $H_0=100~h~{\rm km~s^{-1}~Mpc^{-1}}$ with h=0.7, and the linear power spectrum of the matter-density fluctuations is normalised to $\sigma_8=0.8$. The simulated box is a cube with a side length of $192~h^{-1}$ Mpc, containing $480^3$ dark-matter particles with a mass of $6.6\times10^9~h^{-1}~M_{\odot}$ each and an equal number of gas particles with $8.9\times10^8~h^{-1}~M_{\odot}$ each. Thus, halos of mass $10^{13}~h^{-1}~M_\odot$ are resolved into several thousand particles. The physics of the gas component includes radiative cooling, star formation and supernova feedback, assuming zero metallicity.

This simulation is used to construct backward light cones by stacking the output snapshots from z=1 to z=0. Since the snapshots contain the same cosmic structures at different evolutionary stages, they are randomly shifted and rotated to avoid repetitions of the same cosmic structures along one line-of-sight. The light cone is then sliced into thick planes, whose particles are subsequently projected with a triangular-shaped-cloud scheme (TSC, Hockney & Eastwood 1988) on lens planes perpendicular to the line-of-sight. We trace a bundle of $2048\times2048$ light rays through one light cone; the rays start propagating at the observer into directions on a regular grid of 4.9 degrees on each side. The effective resolution of this ray-tracing simulation is of the order of 1'. The effective convergence and shear maps obtained from the ray-tracing simulations are used to lens a background source population according to Eq. (4). Galaxies are randomly distributed on the lens plane at z=1 with a number density of $n_{\rm g}=30~{\rm arcmin}^{-2}$ and have intrinsic random ellipticities drawn from the distribution

\begin{displaymath}p(\epsilon_{\rm s})=
\frac{\exp[(1-\epsilon_{\rm s}^2)/\sigma_{\epsilon}^2]}
{\pi\sigma_{\epsilon}^2[\exp(1/\sigma_{\epsilon}^2)-1]},
\end{displaymath} (30)

where $\sigma _\epsilon =0.25$ (for further details, see Pace et al. 2007).
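Equation (30) is normalised as a density on the unit disc of complex intrinsic ellipticities, which is easy to confirm numerically; the quadrature below is our own sketch.

```python
import numpy as np

def p_eps(eps, sigma=0.25):
    """Modulus distribution of Eq. (30) for the intrinsic ellipticity."""
    norm = np.pi * sigma**2 * (np.exp(1.0 / sigma**2) - 1.0)
    return np.exp((1.0 - eps**2) / sigma**2) / norm

# p is a density on the unit disc of complex eps_s, so its integral with
# the 2-D measure 2*pi*eps*d(eps) over [0, 1] must equal one.
eps = np.linspace(0.0, 1.0, 100001)
deps = eps[1] - eps[0]
total = np.sum(2.0 * np.pi * eps * p_eps(eps)) * deps
print(total)
```

For $\sigma_\epsilon=0.25$ the distribution is strongly concentrated at small moduli, with the 2-D density peaking near $\epsilon_{\rm s}\approx\sigma_\epsilon/\sqrt{2}$.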

Synthetic galaxy catalogues produced in this way are finally analysed with the aperture mass (Eq. (5)) evaluated on a regular grid of $512\times 512$ positions covering the entire field-of-view of the light cone. All three filters presented in Sect. 3.2 were used with three different scales: the polynomial filter with $r_{\rm s}=2\hbox{$.\mkern-4mu^\prime$ }75$, $5\hbox{$.\mkern-4mu^\prime$ }5$, and 11', the hyperbolic-tangent filter with $r_{\rm s}=5',\ 10'$, and 20', and the optimal filter with scale radii of the cluster model set to $r_{\rm s}=1',\ 2'$, and 4'. These scales were chosen to sample the angular scales typically used in the literature.

For a statistical analysis of the weak-lensing detections and their relation to the numerical simulations structures, see Pace et al. (2007).

5.2 Accounting for the geometry of surveys: the window function

Our analytic predictions for the number density of detections account for the survey frequency response $\hat{W}(k)$ discussed in Sect. 3.1. As already stated, this is a simplified approach, and the full geometry $\hat{W}(\vec{k})$ should be considered (see e.g. Hivon et al. 2002) in the case of complex sky masking, especially if small fields of view are involved. Thus, in our approach we consider only an effective power spectrum $\tilde P(k)=P(k)\hat{W}^2(k)$, where the frequency response, $\hat{W}(k)$, is the product of a high-pass filter suppressing the scales larger than the light cone's side length $L_{\rm f}=2\pi/k_{\rm f}=4.9\;{\rm deg}$,

\begin{displaymath}\hat{W}^2_{\rm f}(k)=
\exp\left(-\frac{k_{\rm f}^2}{k^2}\right)
\end{displaymath} (31)

(note that k is in the denominator here), a low-pass filter imposed by the average separation $d=2\pi/k_{\rm g}=n_{\rm g}^{-1/2}=0.18'$ between the galaxies,

\begin{displaymath}\hat{W}_{\rm g}^2(k)=
\exp\left(-\frac{k^2}{k_{\rm g}^2}\right),
\end{displaymath} (32)

and a low-pass filter related to the resolution $d_{\rm pix}=0.57'$ used to sample the sky with the quantity $\tilde \Gamma$ of Eq. (8),

\begin{displaymath}\hat{W}_{\rm pix}(k)=
\frac{2~\sqrt{\pi}}{kd_{\rm pix}}~
{J}_1\left(\frac{kd_{\rm pix}}{\sqrt{\pi}}\right),
\end{displaymath} (33)

where $J_1(x)$ is the cylindrical Bessel function of order one. This window is the Fourier transform of a circular top-hat covering the same area as a square-shaped pixel of size $d_{\rm pix}$. The square shapes of the field-of-view and the pixels could be represented more accurately by the product of two step functions in the x- and y-directions, but the low gain in accuracy does not justify the higher computational cost. Finally, for the comparison with our numerical ray-tracing simulation, we have to account for its resolution properties, which act on the convergence power spectrum only, by multiplying $P_{\rm LSS}$ with a low-pass filter

\begin{displaymath}\hat{W}_{\rm s}^2(k)=
\exp\left(-\frac{k^2}{k_{\rm s}^2}\right),
\end{displaymath} (34)

where $k_{\rm s}=2\pi/(1\;{\rm arcmin})$ as discussed in Sect. 5.1.

The agreement of this simple recipe with the numerical simulation is shown in the bottom panels of Fig. 3, where we compare the expected effective power spectrum convolved with the filter, $\tilde{P}(k)\hat{U}^2(k)=P(k)\hat{W}^2(k)\hat{U}^2(k)$, with the one measured in the numerical simulation. Apart from noise at large scales, only small deviations at high frequencies are visible. Note that when relating the detection threshold to the signal-to-noise ratio S/N according to the variance given by Eq. (7) and $\kappa_{\rm th}={\it S/N}\cdot\sigma_{\tilde{\Gamma}}$, all window functions mentioned are used except for $\hat W_{\rm pix}$, which, of course, does not affect the variance.
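The window functions of Eqs. (31)-(34) combine multiplicatively into the effective power spectrum. A sketch with the survey values quoted above (angular units in arcmin; `scipy` supplies the Bessel function $J_1$; all function names are ours):

```python
import numpy as np
from scipy.special import j1

# illustrative survey parameters from the text (angular units: arcmin)
L_f   = 4.9 * 60.0          # field side length
d_gal = 0.18                # mean galaxy separation, n_g^(-1/2)
d_pix = 0.57                # sampling pixel size
k_f, k_g, k_s = 2 * np.pi / L_f, 2 * np.pi / d_gal, 2 * np.pi / 1.0

def W2_field(k):   # Eq. (31): high-pass; note k in the denominator
    return np.exp(-(k_f / k)**2)

def W2_gal(k):     # Eq. (32): low-pass from the galaxy sampling
    return np.exp(-(k / k_g)**2)

def W_pix(k):      # Eq. (33): circular top-hat matching a square pixel
    x = k * d_pix / np.sqrt(np.pi)
    # the (k == 0) guard avoids 0/0; the limit of W_pix at k -> 0 is 1
    return np.where(x > 0,
                    2 * np.sqrt(np.pi) / (k * d_pix + (k == 0)) * j1(x),
                    1.0)

def W2_sim(k):     # Eq. (34): ray-tracing resolution (simulation only)
    return np.exp(-(k / k_s)**2)

def effective_power(k, P):
    """P_tilde(k) = P(k) W_f^2 W_g^2 W_pix^2 W_s^2."""
    return P * W2_field(k) * W2_gal(k) * W_pix(k)**2 * W2_sim(k)
```

As in the text, $\hat W_{\rm pix}$ would be dropped from this product when computing the variance $\sigma_{\tilde\Gamma}$ that sets the detection threshold.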

5.3 Comparison with numerical simulations

Our analytic approach approximates the data as Gaussian random fields, which represent both the noise and the LSS contributions to the weak-lensing signal-to-noise maps very well. Even though the shear and convergence of the LSS show non-Gaussianities (Jain et al. 2000), weak-lensing data are convolved with filters broad enough to render their signal Gaussian. This is not the case for non-linear objects such as galaxy clusters, whose non-Gaussianity survives the filtering process. Thus, particular care has to be taken when comparing the predicted number counts with real or simulated data, either by modelling the non-linear structures, which is difficult and uncertain, or by avoiding their contribution in the first place. We follow the latter approach by counting the negative instead of the positive peaks found in the convergence maps derived from the galaxy catalogues. Massive halos contribute only positive detections, in contrast to the LSS and the other sources of noise, which produce positive and negative detections with equal statistical properties. Both negative and positive peak counts contain cosmologically relevant information. Apart from noise, the negative peak counts are caused by linearly evolved LSS, while the difference between positive and negative counts is due to non-linear structures. The mean density of negative peaks can also be used to statistically correct the positive peak counts for the level of spurious detections.
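Counting negative peaks amounts to locating local minima below a given threshold in the filtered map. A minimal pure-numpy sketch (8-neighbour comparison, boundary pixels ignored; not the authors' pipeline):

```python
import numpy as np

def count_negative_peaks(m, threshold):
    """Count local minima of the 2-D map `m` with value below -|threshold|,
    i.e. the negative counterparts of the usual peak detections.
    A pixel counts if it is strictly smaller than all 8 neighbours."""
    c = m[1:-1, 1:-1]
    is_min = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # shifted view of the same interior region
            is_min &= c < m[1 + dy:m.shape[0] - 1 + dy,
                            1 + dx:m.shape[1] - 1 + dx]
    return int(np.sum(is_min & (c < -abs(threshold))))
```

In practice the threshold would be a multiple of $\sigma_{\tilde\Gamma}$, so that counts are binned by signal-to-noise ratio.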

Figure 4:

Number of negative peaks detected in the numerical simulation (shaded area) compared to the prediction obtained with the proposed method both with the original up-crossing criterion (dashed line) and with the new blended up-crossing criterion (points with error bars). The standard up-crossing criterion is a good approximation for high signal-to-noise ratios but fails for lower S/N, which are well described by the new version. Error bars represent the Poissonian noise of the number counts of a one square degree survey while the shaded area shows the Poisson noise in our numerical simulation covering 24.4 square degrees.

To verify these considerations, we tested whether the resulting weak-lensing maps below the zero level behave as Gaussian random fields, i.e. whether the negative wing of their probability density function (hereafter PDF) is compatible with a Gaussian. The result is shown in the top panels of Fig. 3 for all adopted filters and scales. The left side of the PDF is well fitted by a Gaussian whose mean is compatible with zero, while the largest PDF values show a slightly extended tail caused by the non-linear objects present in the numerical simulation. For illustrative purposes, we also compare in the bottom panels of Fig. 3 the expected filtered power spectra, $\tilde P=P \vert\hat{W}\hat{U}\vert^2$, assumed in Eq. (23), with those measured from the synthetic galaxy catalogues convolved with the three adopted filters. For clarity, we show the results for the intermediate filter scales only, since the others are equivalent. All main features are well reproduced. Only at high frequencies do the assumed power spectra drop slightly more steeply than measured in the numerical simulations. This might be one reason for the small deviations between the numerical measurements and the analytic predictions; the other is sample variance.
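The Gaussian description of the negative wing can be exploited directly: for a zero-mean Gaussian, the conditional second moment of the negative samples alone equals $\sigma^2$, so the width can be estimated without touching the non-Gaussian positive tail. A sketch of this idea (our own simplification of the fit described above):

```python
import numpy as np

def sigma_from_negative_wing(values):
    """Estimate the Gaussian width from the negative wing only.
    For a zero-mean Gaussian, E[x^2 | x < 0] = sigma^2, so the negative
    samples alone fix sigma; a non-Gaussian positive tail is ignored."""
    neg = values[values < 0]
    return np.sqrt(np.mean(neg**2))
```

Applied to a map contaminated by positive cluster peaks, this recovers the width of the underlying Gaussian component.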

A comparison of the original up-crossing criterion with the new blended up-crossing criterion presented here is shown in Fig. 4 together with the number counts of negative peaks obtained from the numerical simulations. Only the result for the optimal filter with $r_{\rm s}=1'$ is shown for clarity. As expected, the two criteria agree very well for high signal-to-noise ratios since the detections are mostly of type-0, i.e. with a convex contour, as shown in the lower left panel of Fig. 2, while the merging of detections at lower signal-to-noise ratios is correctly taken into account only by our new criterion.

Figure 5:

Number of weak-lensing peaks, shown as a function of the signal-to-noise ratio, predicted with the analytic method presented here for the Schneider et al. (1998), poly, the Schirmer et al. (2004), tanh, and the Maturi et al. (2005), opt, filters from top to bottom, and increasing filter radii from left to right, as labeled in each panel. The number counts generated by the intrinsic galaxy noise alone, $P_{\rm g}$, and by the LSS alone, $P_{\rm LSS}$, are also shown. Numbers refer to a survey of one square degree with a galaxy number density of $n_{\rm g}=30~{\rm arcmin}^{-2}$ and an intrinsic shear dispersion of $\sigma _\epsilon =0.25$. The results are compared with the number counts of positive (labeled with +) as well as negative (labeled with -) peaks detected in the synthetic galaxy catalogues from the numerical simulation. Error bars and shaded areas refer to the Poissonian noise, i.e. the square root of the number of detections. Error bars have the same meaning as in Fig. 4.

Our analytic predictions of the number counts for all filters, together with both the positive and negative detection counts obtained from the synthetic galaxy catalogues of the numerical simulation, are shown in Fig. 5. The high signal-to-noise tail caused by the non-linear structures is present only in the positive detection counts, as expected. The agreement with the negative detections is within the 1-$\sigma $ error bars (representing the Poissonian uncertainties for a one square degree survey), except for the Schirmer et al. (2004) filter (tanh) and the Maturi et al. (2005) filter (opt) with scales of 5' and 4', respectively, which are compatible only at the 2-$\sigma $ level for ${\it S/N}\sim 1$. It is plausible that these small deviations are caused by the small amount of non-Gaussianity still present in the data and by the small differences between the adopted and the actual signal power spectra (see Fig. 3).

To further confirm the assumption that the contributions from both the LSS and the noise from the background galaxies can be described by a Gaussian random field after the filtering process, we modelled the positive peak counts as a combination of the peak statistics described in this work (used for the negative peaks) and the halo mass function accounting for the highly non-linearly evolved halos, which should be responsible for the high signal-to-noise part and are not captured by the Gaussian field statistics. The analytic prediction in this case also shows good agreement with the results from the simulation. Detailed information on the method and results will be discussed in future work.

We finally compare the contributions of the LSS and of the noise to the total signal by treating them separately. Their number counts are plotted with dashed and dot-dashed lines in Fig. 5. All filters show an unsurprisingly large number of detections caused by the noise up to signal-to-noise ratios of 3, and a number of detections caused by the LSS that increases with the filter scale, except for the optimal filter, which always suppresses the LSS contribution to a negligible level. Thus, the LSS contaminates halo catalogues selected by weak lensing up to signal-to-noise ratios of 4-5 if its contribution is ignored in the filter definition. Note that the total number of detections can be obtained only by counting the peaks of the total signal, i.e. LSS plus noise, and not by adding the peaks found in the two components separately, because the blending of peaks differs in the two cases.
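The ± symmetry invoked throughout this section is easy to illustrate: the positive peaks of a sign-flipped map are exactly its negative peaks, so a pure Gaussian field yields statistically identical counts on both sides, and any excess of positive over negative peaks traces the non-Gaussian structures. A toy demonstration on filtered white noise (our own construction, not the paper's pipeline):

```python
import numpy as np

def local_maxima(m):
    """Boolean mask of strict 8-neighbour local maxima (interior pixels)."""
    c = m[1:-1, 1:-1]
    is_max = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                is_max &= c > m[1 + dy:m.shape[0] - 1 + dy,
                                1 + dx:m.shape[1] - 1 + dx]
    return is_max

def peak_counts(m, nu):
    """Counts of positive peaks above +nu*sigma and negative peaks below
    -nu*sigma (a negative peak is a positive peak of -m)."""
    s = m.std()
    n_pos = int(np.sum(local_maxima(m) & (m[1:-1, 1:-1] > nu * s)))
    n_neg = int(np.sum(local_maxima(-m) & (m[1:-1, 1:-1] < -nu * s)))
    return n_pos, n_neg

# Gaussian random field: white noise smoothed in Fourier space
# (a crude stand-in for the filtered LSS+noise maps of the text)
rng = np.random.default_rng(42)
field = rng.normal(size=(256, 256))
k2 = np.fft.fftfreq(256)[:, None]**2 + np.fft.fftfreq(256)[None, :]**2
field = np.fft.ifft2(np.fft.fft2(field) * np.exp(-k2 / 2e-3)).real
```

For such a field, `peak_counts` returns positive and negative counts that agree within Poisson scatter; injecting a few positive "halo" bumps would skew only the positive side.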

6 Conclusion

We have applied an analytic method for predicting peak counts in weak-lensing surveys, based on the theory of Gaussian random fields (Bardeen et al. 1986). Peaks are typically detected in shear fields after convolving them with filters of different shapes and widths. We have taken these into account by first filtering the assumed Gaussian random field appropriately and then searching for suitably defined peaks. On the way, we have argued for a refinement of the up-crossing criterion for peak detection which avoids biased counts of detections with low signal-to-noise ratio, and implemented it in the analytic peak-count prediction. Peaks in the non-linear tail of the shear distribution are underrepresented in this approach because they are highly non-Gaussian, but our method is well applicable to the prediction of spurious counts, and therefore to the quantification of the background in attempts to measure number densities of dark-matter halos. We have compared our analytic prediction to peak counts in numerically simulated, synthetic shear catalogues and found agreement at the expected level.

Our main results can be summarised as follows:

  • The shape and size of the filter applied to the shear field have a large influence on the contamination by spurious detections. For the optimal filter, the contribution by large-scale structures is low on all filter scales, while it is typically substantial for the other filters. This confirms previous results obtained with different approaches (Pace et al. 2007; Dietrich et al. 2007; Maturi et al. 2005).
  • Taken together, large-scale structure and galaxy noise contribute the majority of detections up to signal-to-noise ratios of 3-5. Only above this level do detections due to real dark-matter halos begin to dominate.
  • Shape and shot noise due to the background galaxies cannot be treated separately from the large-scale structure since the two contributions affect each other in a complex way.
  • The optimal filter allows the detection of $\sim$30-40 halos per square degree at signal-to-noise ratios high enough to suppress all noise contributions. For the other filters, this number is lower by almost an order of magnitude.
Our conclusions are thus surprisingly drastic: peak counts in weak-lensing surveys are almost exclusively caused by chance projections in the large-scale structure and by galaxy shape and shot noise unless only peaks with high signal-to-noise ratios are counted. With typical filters, only a few detections per square degree can be expected at that level, while the optimal filter returns up to an order of magnitude more. Nevertheless, the contamination level of the cluster number counts can be predicted and, after all, it is a quantity containing valuable cosmological information which can be used to tighten cosmological constraints as well.

Acknowledgements
This work was supported by the Transregional Collaborative Research Centre TRR 33 (M.M., M.B.) and grant number BA 1369/12-1 of the Deutsche Forschungsgemeinschaft, the Heidelberg Graduate School of Fundamental Physics and the IMPRS for Astronomy & Cosmic Physics at the University of Heidelberg (CA).

Appendix A: Forecast for different weak lensing surveys

For convenience, we evaluate here the expected number density of peak counts for ${\it S/N}=1,3,5$ and for a collection of present and future weak-lensing surveys with different intrinsic ellipticity dispersions, $\sigma_\epsilon$, and galaxy number densities, $n_{\rm g}$, per arcmin$^2$. To give typical values, we assumed for all of them a square-shaped field of view, a uniform galaxy number density, and no gaps, for two main reasons. First, their fields-of-view are typically very large and thus do not affect the frequencies relevant for our evaluation. Second, the masking of bright objects can be done in many different ways, which cannot be considered here in any detail. Finally, we fixed the sampling scale, described by Eq. (33), to be 5 times smaller than the typical filter scale in order to avoid undersampling, i.e. such that the high-frequency cut-off is imposed by the filters themselves. For each filter, we used the following scales: $Q_{\rm poly}$: scale-1 = 2.75', scale-2 = 5.5', scale-3 = 11'; $Q_{\rm tanh}$: scale-1 = 5', scale-2 = 10', scale-3 = 20'; $Q_{\rm opt}$: scale-1 = $10^{14}~M_\odot$ and scale-2 = $5\times10^{14}~M_\odot$; $Q_{\rm gauss}$ (Gaussian FWHM): scale-1 = 1', scale-2 = 2', scale-3 = 5'. The results are shown in Table A.1 together with the number counts obtained with a simple Gaussian filter, as usually used with the Kaiser & Squires shear-inversion algorithm (Kaiser & Squires 1993).

Table A.1:   Expected number counts of peak detections per square degree for different weak-lensing surveys, filters, and signal-to-noise ratio cut-offs.

References

  1. Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15
  2. Borgani, S., Murante, G., Springel, V., et al. 2004, MNRAS, 348, 1078
  3. Dietrich, J. P., & Hartlap, J. 2010, MNRAS, 402, 1049
  4. Dietrich, J. P., Erben, T., Lamer, G., et al. 2007, A&A, 470, 821
  5. Erben, T., van Waerbeke, L., Mellier, Y., et al. 2000, A&A, 355, 23
  6. Heavens, A., & Peacock, J. 1988, MNRAS, 232, 339
  7. Hennawi, J. F., & Spergel, D. N. 2005, ApJ, 624, 59
  8. Hetterscheidt, M., Erben, T., Schneider, P., et al. 2005, A&A, 442, 43
  9. Hivon, E., Górski, K. M., Netterfield, C. B., et al. 2002, ApJ, 567, 2
  10. Hockney, R., & Eastwood, J. 1988, Computer Simulation Using Particles (Bristol: Hilger)
  11. Jain, B., Seljak, U., & White, S. 2000, ApJ, 530, 547
  12. Kaiser, N., & Squires, G. 1993, ApJ, 404, 441
  13. Kratochvil, J. M., Haiman, Z., & May, M. 2010, Phys. Rev. D, 81, 043519
  14. Marian, L., Smith, R. E., & Bernstein, G. M. 2009, ApJ, 698, L33
  15. Maturi, M., Meneghetti, M., Bartelmann, M., Dolag, K., & Moscardini, L. 2005, A&A, 442, 851
  16. Maturi, M., Schirmer, M., Meneghetti, M., Bartelmann, M., & Moscardini, L. 2007, A&A, 462, 473
  17. Pace, F., Maturi, M., Meneghetti, M., et al. 2007, A&A, 471, 731
  18. Peacock, J., & Dodds, S. 1996, MNRAS, 280, L19
  19. Polyanin, A. D., & Manzhirov, A. V. 1998, Handbook of Integral Equations (Boca Raton: CRC Press)
  20. Schirmer, M., Erben, T., Schneider, P., et al. 2003, A&A, 407, 869
  21. Schirmer, M., Erben, T., Schneider, P., Wolf, C., & Meisenheimer, K. 2004, A&A, 420, 75
  22. Schneider, M. D., & Bridle, S. 2010, MNRAS, 402, 2127
  23. Schneider, P. 1996, MNRAS, 283, 837
  24. Schneider, P., van Waerbeke, L., Jain, B., & Kruse, G. 1998, MNRAS, 296, 873
  25. Springel, V. 2005, MNRAS, 364, 1105
  26. van Waerbeke, L. 2000, MNRAS, 313, 524


All Figures

Figure 1:

Overview of different weak-lensing filters. The left panel shows the three filters adopted here for use on shear catalogues, while the central and right panels show the corresponding filters to be used on convergence fields in real and Fourier space, respectively. For illustration only, the spatial frequencies in the right panel are rescaled such that the main filter peaks coincide.


Figure 2:

Weak lensing detection maps. The top four panels show the segmentation of a realistic weak-lensing S/N map for increasing thresholds: 0.1, 0.5, 1, and 2, respectively. The bottom panel sketches the three discussed detection types together with the points identified by the standard and the modified up-crossing criteria. Red circles and blue squares correspond to up-crossing points for which the second field derivatives are $\zeta _{22}<0$ and $\zeta _{22}>0$, respectively.


Figure 3:

Top panels: probability density function (PDF) measured from the synthetic galaxy catalogue, covering 24.4 square degrees, analysed with all adopted filters and scales. The negative part of the PDF is well described by a Gaussian (solid lines). The 3-$\sigma $ error bars related to the Poissonian uncertainty are shown. This shows that weak-lensing signal-to-noise maps can be modelled as Gaussian random fields. Bottom panels: a similar comparison for the measured power spectrum and the predicted one, based on the expected combined large-scale structure and noise power spectra convolved with the weak-lensing filter and the frequency response of the survey. For clarity, we only show the results for the intermediate scales.




Copyright ESO 2010
