A&A 462, 875-887 (2007)
DOI: 10.1051/0004-6361:20065955

GaBoDS: the Garching-Bonn Deep Survey

IX. A sample of 158 shear-selected mass concentration candidates[*],[*]

M. Schirmer1 - T. Erben2 - M. Hetterscheidt2 - P. Schneider2


1 - Isaac Newton Group of Telescopes, Calle Alvarez Abreu 70, 38700 Santa Cruz de La Palma, Spain
2 - Argelander-Institut für Astronomie (AIfA), Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany

Received 3 July 2006 / Accepted 6 November 2006

Abstract
Aims. The aim of the present work is the construction of a mass-selected galaxy cluster sample based on weak gravitational lensing methods. This sample will be subject to spectroscopic follow-up observations.
Methods. We apply the aperture mass statistics (S-statistics) and a new derivative of it (the P-statistics) to 19 square degrees of high-quality, single-colour wide-field imaging data obtained with the WFI@MPG/ESO 2.2 m telescope. For the statistics a family of filter functions is used that approximates the expected tangential shear profile and thus allows for the efficient detection of mass concentrations. The exact performance of the P-statistics still needs to be evaluated by means of simulations.
Results. We find that the two samples of mass concentrations found with the P- and S-statistics have very similar properties. The overlap between them increases with the S/N of the detections. In total, we present a combined list of 158 possible mass concentrations; this is the first time that such a large, blindly selected sample has been published. 72 of the detections are associated with concentrations of bright galaxies. For 22 of those we found spectra in the literature, indicating or proving that the galaxies seen are indeed spatially concentrated. 16 of those were previously known to be clusters or have meanwhile been secured as such. We are currently following up a larger number of them spectroscopically to obtain deeper insight into their physical properties. The remaining 55% of the possible mass concentrations found are not associated with any optical light. We show that these "dark'' detections are mostly due to noise, and that they appear preferentially in shallow data.

Key words: cosmology: dark matter - galaxies: clusters: general - gravitational lensing

1 Introduction

After the recent determination of the fundamental cosmological parameters (Spergel et al. 2006) a profound understanding of the dark and luminous matter distribution in the Universe is one of the key problems in modern cosmology. Galaxy clusters are at the centre of attention in this context since they indicate the largest density peaks of the cosmic matter distribution. Their masses can be predicted by theory, and gravitation dominates their evolution. Thus clusters are prime targets for the comparison of observation against theory. For this purpose they should preferentially be selected by their mass instead of their luminosity in order to avoid a selection bias against underluminous members. Weak gravitational lensing methods select cluster candidates based solely on their mass properties, but this method has not yet been used systematically on a very large scale (some 100 or 1000 square degrees) due to a lack of suitable data. Only a few dozen shear-selected peaks were reported so far (see Wittman et al. 2001; Erben et al. 2000; Miyazaki et al. 2002; Wittman et al. 2003; Umetsu & Futamase 2000; Dahle et al. 2003; Schirmer et al. 2004; Hetterscheidt et al. 2005; von der Linden et al. 2006; Schirmer et al. 2003; Wittman et al. 2006, for some examples), a very small number compared to the more than 10000 clusters known to date (see Lopes et al. 2004, for 9900 cluster candidates on the northern sky within $z=0.1\dots 0.5$).

The selection of mass concentrations using the shear caused by their weak gravitational lensing effect suffers from a number of disadvantages. The most important one is the large amount of noise contributed by the intrinsic ellipticities of the lensed galaxies. This largely blurs the view of the cosmic density distribution, making real peaks disappear and creating fake peaks where there is actually no overdensity. It can only be suppressed to some degree by deep observations in good seeing. The other disadvantage is that any mass along the line of sight contributes to the signal, giving rise to false peaks. Such projection effects or cosmic shear can only be eliminated, or at least recognised, if redshift information for either the lensed galaxies or the matter distribution in the field is available. Cosmic shear can act as a source of noise (see for example Maturi et al. 2005), but it has recently been shown by Maturi et al. (2006) that this can partly be filtered out.


Figure 1: Left: sky distribution of the GaBoDS fields. The size of the symbols indicates the covered sky area (not to scale), with one WFI shot covering 0.32 square degrees. All fields are at high galactic latitude. Right: image seeing of the 58 coadded WFI@2.2 mosaics used for the lensing analysis. The average seeing is 0.86 arcsec.

In the present work we use the aperture mass statistics ( $M_{{\rm ap}}$) (Schneider 1996, hereafter S96) and a derivative of it for the shear-selection of density peaks, based on 19 square degrees of sky coverage. The purpose of the work is to establish a suitable filter function for $M_{{\rm ap}}$, and then apply it to an (inhomogeneous) set of data. The sample returned is currently the largest sample of shear-selected cluster candidates, yet is dwarfed by the total number of galaxy clusters known.

The outline of this paper is as follows. In Sect. 2 we give an overview of the data used, concentrating on its quality and showing the usefulness for this analysis. Section 3 contains a discussion of our implemented version of $M_{{\rm ap}}$, particularly with regard to the chosen spatial filter functions. We also introduce a new statistics, deduced from $M_{{\rm ap}}$. In Sect. 4 we present and discuss our detections, and we conclude in Sect. 5.

Throughout the rest of this paper we use common weak lensing notations, and refer the reader to Bartelmann & Schneider (2001) and Schneider et al. (2006) for more details and technical coverage.

2 Observations, data reduction, data quality

  
2.1 The survey data

The GaBoDS was conducted with the Wide Field Imager (hereafter WFI@2.2) of the 2.2 m MPG/ESO telescope in the R-band. It is to about 80% a virtual survey, meaning that most of the data were not taken by us but collected from the ESO archive. There are five main data sources: our own observations, and archival data retrieved through the ASTROVIRTEL project, the ESO Imaging Survey (EIS), the ESO Distant Cluster Survey (EDisCS), and COMBO-17 (see also Sect. 4.8 and Table B.5). The final sky coverage is 18.6 square degrees, spread over 58 fields which are suitable for our weak gravitational lensing analysis. The overall sky distribution of the fields is shown in Fig. 1, and further characteristics are summarised in Table B.1.

2.2 Data reduction

For the reduction of this specific data set we developed a stand-alone, fully automated pipeline (THELI), which we have made freely available to the community. It is capable of reducing almost any kind of optical, near-IR and mid-IR imaging data. A detailed description of this software package can be found in Erben et al. (2005), in which we investigate its performance on optical data. One of the main assets of this package is a very accurate astrometric solution that does not introduce any artificial PSF distortions into the coadded images.

The only difference between the current version of THELI and the one we used for the reduction of the survey data over the last years is that the image coaddition was performed with EISdrizzle, which has meanwhile been replaced by SWarp. The latter method leads to a 4% smaller PSF in the final image in the case of superb intrinsic image seeing, as in our survey. The PSF anisotropy patterns themselves are indistinguishable between the two coaddition methods. Since the natural seeing variations (Fig. 1) in our images are much larger than those 4%, our analysis remains unaffected.


Figure 2: Left: the average $\varepsilon _{1,2}$ ellipticity components for the PSF of each mosaic, before PSF correction. The $\varepsilon _2$ component is symmetric around zero, whereas the $\varepsilon _1$ component scatters more broadly around a value of 1.4%. We do not show the corresponding plot after PSF correction, since all $\vert\varepsilon _{1,2}\vert < 0.001$. Middle: the scatter of the two components around their average values. This indicates the deviation from a constant PSF anisotropy across the field and the noise level. Right: same as in the middle, but after applying our PSF correction scheme. As a result, not only is $\langle\varepsilon_{1,2}\rangle$ largely removed, but their rms is also significantly lowered.

2.3 Image seeing and PSF properties

The image seeing and the PSF anisotropy are critical for weak lensing measurements. They dilute and distort the signal and determine how well the shape of the lensed galaxies can be recovered. As can be seen from Fig. 1, 80% of our coadded images have sub-arcsecond image seeing and 20% are around 1.0 arcsec, with an average of 0.86 arcsec. Thus we are well within the sub-arcsecond seeing regime, which is commonly regarded as mandatory for weak lensing purposes.

PSF anisotropies are rather small and usually well-behaved with WFI@2.2, which we have demonstrated several times (Schirmer et al. 2004; Erben et al. 2005; Schirmer et al. 2003, for example). With a well-focussed telescope, 1% of anisotropy in sub-arcsecond seeing conditions can be achieved, with a long-term statistical mode of around 2% (see Fig. 3). Discontinuities in the PSF are largely absent across chip borders. Slightly defocused exposures exhibit anisotropies of $3{-}5\%$. We rejected individual exposures from the coaddition if one or more CCDs had an anisotropy larger than 6% in either of the ellipticity components $\varepsilon _1$ or $\varepsilon _2$. These anisotropies arise from astigmatism, and they flip by 90 degrees if one passes from an intrafocal to an extrafocal exposure (see Schirmer et al. 2003, for an example of this, and Fig. B.2 for a typical PSF anisotropy pattern in our data).

Since intra- and extrafocal exposures occur in roughly equal numbers in a larger set of exposures, anisotropies due to defocusing average out in the coadded images. The coadded images have on average an anisotropy of $1{-}2\%$, with a similar amount in the rms (variation of the PSF across the field). This is illustrated in Fig. 2, where we show the combined PSF anisotropy properties for all coadded mosaic images. The left panel shows the uncorrected mean ellipticity components, with $\langle\varepsilon_1\rangle=0.0138$ and $\langle\varepsilon_2\rangle=-0.0002$. These anisotropies are small and become $\vert\langle\varepsilon_{1,2}\rangle\vert < 0.001$ after PSF correction for all mosaics.


Figure 3: Characteristic PSF anisotropies $\left <\vert\varepsilon ^{*}\vert\right >$ (modulus of the ellipticity of stars, averaged over the entire CCD) of 700 randomly chosen exposures for CCDs 1 and 8 of the WFI@2.2 detector mosaic. The remaining six CCDs have somewhat better properties. The exposure times were in the range of 300-500 s, and the zenith distances were smaller than 40 degrees. The differences between the distributions arise from slightly different spatial orientations with respect to the focal plane (the CCDs are confined within $\pm 20$ microns), thus responding differently to the focus. Anisotropies larger than $1{-}2\%$ are mostly due to an increasingly defocused instrument. The exposures used for this statistics were taken by a dozen different observers spread over more than a year, and give an idea of the quality of the archival data.

The middle plot of Fig. 2 shows the uncorrected rms values for $\langle\varepsilon_{1,2}\rangle$, i.e. the deviations of the PSF from a constant anisotropy across the field. The rms of both components peaks around 1% and is reduced by a factor of 2 after PSF correction (right panel). The tail of the distribution seen in the middle plot essentially vanishes.

To evaluate the remaining residuals from PSF correction more quantitatively, we calculated the correlation function between stellar ellipticity and shear before and after PSF correction (Fig. 4), separately for the various survey data sources and over all galaxy positions,

\begin{displaymath}
\langle \varepsilon_{1,2}^{*,{\rm pol}}\;\gamma_{1,2} \rangle =
\frac{1}{N}\sum_{i=1}^{N}\varepsilon_{1,2}^{*,{\rm pol}}(\mbox{\boldmath$\theta$}_i)\;
\gamma_{i\,1,2}(\mbox{\boldmath$\theta$}_i).
\end{displaymath} (1)

Here $\gamma_{1,2}$ are the PSF-corrected ellipticities of the galaxies, which serve as unbiased estimators of the shear, and $\varepsilon_{1,2}^{*,{\rm pol}}$ are the components of the PSF correction polynomial at the position $\mbox{\boldmath$\theta$}$ of a galaxy. We find that the correlation between stellar PSF and shear is greatly reduced by the PSF correction, yet, as expected, some non-zero residuals remain. Another test for residual systematics from PSF correction in the final lensing catalogues is the $M_{{\rm ap}}$ (defined in Eq. (3)) two-point cross-correlation function of uncorrected ellipticities of stars and corrected ellipticities of galaxies (see Fig. 5),

\begin{displaymath}
\langle M_{{\rm ap}}^* M_{{\rm ap}}^{\rm gal} \rangle =
\sum_{i=1}^{N_{\rm fields}}\langle M_{{\rm ap}}^* M_{{\rm ap}}^{\rm gal} \rangle_i\;.
\end{displaymath} (2)

We find small non-zero residuals with different behaviour for the various survey parts. In particular, these residuals become increasingly non-zero with growing aperture size for our own observations and part of the ASTROVIRTEL fields. We will come back to these two points later during our analysis in Sect. 4, concluding that they do not affect our cluster detection method in a noticeable way.

To summarise, our PSF correction effectively removes spurious coherent shear patterns from the data, yet small residuals remain. These are much smaller than the coherent shear signal (a few percent) we expect from a typical cluster at intermediate redshifts ($z=0.1\dots0.4$).


Figure 4: Shown is the correlation between stellar ellipticity and measured shear before (left panel) and after (right panel) PSF correction for the 58 survey fields. The median improvement is a factor of 3, but residuals are still present.


Figure 5: $M_{{\rm ap}}$ two-point cross-correlation function of uncorrected ellipticities of stars and corrected ellipticities of galaxies, binned for the various survey data sources. The analysis of the ASTROVIRTEL fields has been split up into the B8- and C0-fields, since these exhibit different properties.

2.4 Catalogue creation: object detection and calculation of lensing parameters

Our catalogue production can be split into three parts: the object detection, the calculation of the basic lensing quantities, and a final filtering of the catalogue obtained. An absolute calibration of the magnitudes with high accuracy is not needed for this work, since we are mostly interested in the shapes of the lensed galaxies and not in their fluxes. We adopted standard photometric zeropoints which, conservatively estimated, bring us to within 0.05-0.1 mag of the true photometric zeropoint.

For the detection process we use SExtractor (Bertin & Arnouts 1996) to create a primary source catalogue. The weight map created during the coaddition (see Erben et al. 2005) is used in this first step, guaranteeing that the highly varying noise properties in the mosaic images are correctly taken into account. This leads to a very clean source catalogue that is free from spurious detections. The number of connected pixels (DETECT_MINAREA) used in this work for the object detection was 5, and we set the detection threshold (DETECT_THRESH) to 2.5. These thresholds are rather generous: $10{-}25\%$ of the objects detected are rejected again in the later filtering process because they were too small or too faint for a reliable shear measurement, or because other problems appeared during the measurement of their shapes or positions.
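For illustration, the detection step described above corresponds to a SExtractor call along the following lines. This is only a minimal sketch assuming the standard SExtractor command-line interface; the image, weight and catalogue file names are placeholders and not those of the actual survey data, while the parameter values are the ones quoted in the text.

import subprocess

# Sketch of the primary object detection (Sect. 2.4). Only DETECT_MINAREA,
# DETECT_THRESH and the use of the coaddition weight map are taken from the
# text; the file names are hypothetical. The background settings of
# Sect. 2.5 (BACK_TYPE, BACK_VALUE) are omitted here.
cmd = [
    "sex", "coadd.fits",
    "-c", "default.sex",
    "-WEIGHT_TYPE", "MAP_WEIGHT",
    "-WEIGHT_IMAGE", "coadd_weight.fits",
    "-DETECT_MINAREA", "5",
    "-DETECT_THRESH", "2.5",
    "-CATALOG_NAME", "primary_sources.cat",
]
subprocess.run(cmd, check=True)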

In the second step we calculate the basic lensing quantities using KSB (Kaiser et al. 1995). This includes PSF anisotropy correction, and the recalculation of the objects' first brightness moments since SExtractor yielded positions with insufficient accuracy[*] for the purpose of our analysis. Details of our implementation of KSB are given in Erben et al. (2001), and a mathematical description of the PSF anisotropy correction process itself can be found in Bartelmann & Schneider (2001).

2.5 Catalogue creation: filtering

The third step consists in filtering the catalogue for various unwanted effects. To avoid objects in the catalogue that are in the immediate vicinity of bright stars, we explicitly set BACK_TYPE = MANUAL and BACK_VALUE = 0.0 (our coadded images are sky-subtracted) in the SExtractor configuration file. Thus SExtractor is forced to assume a zero sky background and does not model the haloes around bright stars as sky background. This proved to be a very efficient way of automatically masking brighter stars and the regions immediately surrounding them (see e.g. the lower right panel of Fig. B.4). Further filtering on the SExtractor level is done by excluding all objects that are flagged with FLAG >4 and those with negative half-light radii.

On the KSB level we filter such that only galaxies for which no problems occurred in the determination of the centroids remain in the catalogue. Galaxies whose half-light radii ($r_{\rm h}$) are more than 0.1-0.2 pixels smaller than the left ridge of the stellar branch in an $r_{\rm h}$-magnitude diagram are rejected from the lensing catalogue, as are those with exceedingly bright magnitudes or a low detection significance ( $\nu_{\rm max}<10$). See the left panel of Fig. 6 for an illustration of these cuts. From the same panel it can be seen that a significant number of galaxies have half-light radii comparable to or a bit smaller than the PSF, which makes their shape measurement noisier. Yet their number is large enough that the shear selection of galaxy clusters profits significantly when these objects are included in the calculation. By including these objects, we gain $10{-}25\%$ in terms of the number density of galaxies, and $3{-}10\%$ in terms of signal-to-noise of the detections.

Furthermore, all galaxies with a PSF corrected modulus of the ellipticity larger than 1.5 are removed from the catalogue (the ellipticity can become larger than 1 due to the PSF correction factors, but is then downweighted), as are those for which the correction factor $({\rm Tr}~P^{\rm g})^{-1}>5$ (see Erben et al. 2001). The fraction of rejected galaxies due to the cut-off in $P^{\rm g}$ is relatively small, as can be seen from the right panel in Fig. 6.
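The cuts of this and the previous subsection can be summarised in a short filtering routine. The sketch below is illustrative only: it assumes the catalogue is available as a NumPy structured array, and the field names as well as the exact position of the stellar branch are hypothetical placeholders rather than the actual column names of our catalogues.

import numpy as np

def filter_lensing_catalogue(cat, rh_star_edge, rh_margin=0.15,
                             nu_min=10.0, e_max=1.5, inv_trpg_max=5.0):
    """Apply the catalogue cuts of Sects. 2.4 and 2.5.

    cat          : structured array with (hypothetical) fields
                   'flag', 'r_h', 'nu_max', 'e1', 'e2', 'tr_pg'
    rh_star_edge : left ridge of the stellar branch in the r_h-mag diagram
    rh_margin    : objects more than 0.1-0.2 pixels smaller are rejected
    """
    e_mod = np.hypot(cat["e1"], cat["e2"])           # PSF-corrected |epsilon|
    keep = (
        (cat["flag"] <= 4)                           # SExtractor extraction flag
        & (cat["r_h"] > 0.0)                         # no negative half-light radii
        & (cat["r_h"] > rh_star_edge - rh_margin)    # size cut near the stellar branch
        & (cat["nu_max"] >= nu_min)                  # detection significance cut
        & (e_mod <= e_max)                           # |epsilon| <= 1.5 after correction
        & (1.0 / cat["tr_pg"] <= inv_trpg_max)       # (Tr P^g)^-1 <= 5
    )
    # an additional cut on exceedingly bright magnitudes (not shown) is applied as well
    return cat[keep]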


Figure 6: Left panel: stars appear as a vertical branch in an $r_{\rm h}$-mag plot. Those brighter than R=16.5 saturate the detector and thus increase in size. The solid line encircles the galaxies which are used for the lensing analysis. The upper curved line indicates a cut in detection significance ( $\nu _{\rm max}>10$), which has proven to work significantly better than a constant cut on the faint end of the magnitudes. Right panel: $\nu _{\rm max}$ against $P^{\rm g}$. Objects left of the indicated threshold are rejected from the lensing catalogue. Typically 1% of the galaxies are removed during this step.

An overall impression of the remaining objects in the final catalogues is given in Fig. B.1. In total, typically 10-25% of the objects are rejected from the initial catalogue due to the KSB filtering steps. The remaining average number density of galaxies per field is $n\sim 11\;{{\rm arcmin}}^{-2}$ (min: 6, max: 28), not corrected for the SExtractor-masked areas (as described at the beginning of this section; on the order of 5% per field). For the width of the ellipticity distribution we measure $\sigma_\varepsilon=0.34$ for each of the two ellipticity components, averaged over all survey fields. Both n and $\sigma_\varepsilon$ determine the signal-to-noise of the various mass concentrations detected.

  
3 Detection methods

3.1 S-statistics and an optimal filter Q

We base our detection method on the aperture mass statistics ( $M_{{\rm ap}}$) as introduced by S96. $M_{{\rm ap}}$ can be written as a filtered integral of the tangential shear $\gamma_{\rm t}$,

\begin{displaymath}
M_{{\rm ap}}(\mbox{\boldmath$\theta$}_0) = \int {\rm d}^2\varphi\;
\gamma_{\rm t}(\mbox{\boldmath$\varphi$};\mbox{\boldmath$\theta$}_0)\;Q(\varphi).
\end{displaymath} (3)

Originally, the idea of $M_{{\rm ap}}$ is to obtain a measure of the mass inside an aperture that is independent of the mass sheet degeneracy in the weak lensing case. Written in the form (3) we can also simply interpret it as the filtered amount of tangential shear around a fiducial point $\mbox{\boldmath$\theta$}_0$ on the sky, where $\gamma_{\rm t}(\mbox{\boldmath$\varphi$};\mbox{\boldmath$\theta$}_0)$ is the tangential shear at position $\mbox{\boldmath$\varphi$}$ relative to $\mbox{\boldmath$\theta$}_0$, and Q is some radially symmetric spatial filter function.

The variance of $M_{{\rm ap}}$ in the unlensed case, and likewise in the weak lensing regime, is given by

\begin{displaymath}\sigma^2_{M_{{\rm ap}}} = \frac{\pi\sigma_{\varepsilon}^2}n \int_0^{\theta}{\rm d}
\vartheta~ \vartheta~ Q^2(\vartheta)~,
\end{displaymath} (4)

where $\sigma_\varepsilon$ is the dispersion of intrinsic source ellipticities, and n the number density of background galaxies. The integration is performed over a finite interval since we will have $Q\sim 0$ for $\vartheta>\theta$, i.e. for radii larger than the aperture size chosen. For the application to real data we will replace this integral in Sect. 3.2 by a sum over individual galaxies.

We then define the S-statistics, i.e. the S/N of $M_{{\rm ap}}$ or, equivalently, of the measured amount of tangential shear, as

\begin{displaymath}
S(\theta;\mbox{\boldmath$\theta$}_0) =
\sqrt{\frac{n}{\pi~\sigma_{\varepsilon}^2}}\;
\frac{\int {\rm d}^2\varphi\;\gamma_{\rm t}(\mbox{\boldmath$\varphi$};\mbox{\boldmath$\theta$}_0)\;Q(\varphi)}
{\sqrt{\int_0^{\theta}{\rm d}\vartheta~\vartheta~ Q^2(\vartheta)}}\cdot
\end{displaymath} (5)

Here $\theta$ is the aperture radius, and $\vartheta $ measures the distance inside this aperture from its centre at $\mbox{\boldmath$\theta$}_0$. This expression simplifies somewhat under the assumed radial symmetry (see Appendix A). Hereafter, we will call the 2-dimensional graphical representation of the S-statistics the S-map. If we plot the S-statistics for a given mass concentration as a function of aperture size, then we refer to this curve as the S-profile.

The filter function Q that maximizes S for a given density (or shear) profile of the lens can be derived using either a variational principle (Schirmer 2004), or the Cauchy-Schwarz inequality (S96, Weinberg & Kamionkowski 2002). It is obtained for

 \begin{displaymath}Q(\vartheta)\propto\gamma_{\rm {t}}(\vartheta) \;,
\end{displaymath} (6)

where $\gamma_{\rm t}(\vartheta)$ is the tangential shear of the lens, averaged over a circle of angular radius $\vartheta $. This is intuitively clear, since a signal (i.e., $\gamma_{\rm t}$) with a certain shape is best picked up by a similar filter function (Q). We present in Sect. 3.4 a mathematically simple family of filter functions that effectively fulfills this criterion for the NFW mass profile.

  
3.2 Formulation for discrete data fields

For the application to real data, the previously introduced continuous formulation has to be discretised. First, we replace the tangential shear $\gamma_{\rm t}$ with the tangential ellipticity $\varepsilon_{\rm {t}}$, which is in the weak lensing case an unbiased estimator of $\gamma_{\rm t}$. Thus we have

 \begin{displaymath}M_{{\rm ap}}= \frac{1}{n}\sum_{i=1}^{N}\varepsilon_{{\rm t}i}\;Q_i~,
\end{displaymath} (7)

where n is the number density of galaxies, and N the total number of galaxies in the aperture. Introducing individual galaxy weights wi as proposed by Erben et al. (2001), this becomes

 \begin{displaymath}M_{{\rm ap}}= \frac{A}{\sum_i w_i}\;\sum_i\varepsilon_{{\rm t}i}\;
w_i\;Q_i~,
\end{displaymath} (8)

where A is the aperture area, previously absorbed in the number density n.

The noise of $M_{{\rm ap}}$ can be estimated from $M_{{\rm ap}}$ itself as was shown by Kruse & Schneider (1999) and S96. In the weak lensing case its variance evaluates as

\begin{displaymath}\sigma^2(M_{{\rm ap}}) = \langle{M_{{\rm ap}}}^2\rangle - \langle M_{{\rm ap}}\rangle^2\ =
\langle{M_{{\rm ap}}}^2\rangle\;,
\end{displaymath} (9)

since one expects $\langle M_{{\rm ap}}\rangle=0$. Substituting with equation (7) yields
\begin{displaymath}
\sigma^2(M_{{\rm ap}}) = \frac{1}{n^2}\;\sum_{i,j}\langle\varepsilon_{{\rm t}i}~
\varepsilon_{{\rm t}j}\rangle~Q_i~Q_j =
\frac{1}{n^2}\;\sum_{i}\langle{\varepsilon_{{\rm t}i}}^2\rangle~{Q_i}^2
= \frac{1}{2n^2}\;\sum_{i} \vert\varepsilon_i\vert^2~{Q_i}^2~,
\end{displaymath} (10)

where we used

\begin{displaymath}\langle{\varepsilon_{{\rm t}}}^2\rangle = \frac{1}{2}~\vert\varepsilon\vert^2\;,
\end{displaymath} (11)

and the fact that $\varepsilon_{{\rm t}i}$ and $\varepsilon_{{\rm t}j}$ are mutually independent and thus average to zero for $i \ne j$. The Qi are constant for each galaxy and thus can be taken out of the averaging process. Again, in the case of individual galaxy weights this becomes

\begin{displaymath}\sigma^2(M_{{\rm ap}}) =
\frac{A^2}{2\;\left(\sum_i w_i\right)^2}\;
\sum_i\vert\varepsilon_i\vert^2\; w_i^2\;Q_i^2~,
\end{displaymath} (12)

using the fact that the wi are constant for each galaxy like the Qi. Therefore, we obtain for the S-statistics

 \begin{displaymath}S = \frac{\sqrt{2}\; \sum_i \varepsilon_{{\rm t}i}~w_i~Q_i}
{\sqrt{\sum_i \vert\varepsilon_i\vert^2~w_i^2~{Q_i}^2}}\;,
\end{displaymath} (13)

which grows like $\sqrt{N}$.
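For reference, the estimator in Eqs. (8), (12) and (13) translates directly into code. The following minimal NumPy sketch evaluates the S-statistics at a single aperture position; it assumes flat-sky coordinates and accepts any radial filter Q, such as the $Q_{\rm TANH}$ of Sect. 3.4, as an argument. It is an illustration of the formulae above, not our actual implementation.

import numpy as np

def s_statistics(x, y, e1, e2, w, x0, y0, theta, Q):
    """Weighted S/N of the aperture mass (Eq. 13) at the aperture centre (x0, y0).

    x, y   : galaxy positions (same angular units as theta)
    e1, e2 : PSF-corrected ellipticity components (shear estimators)
    w      : individual galaxy weights
    theta  : aperture radius; the filter Q is evaluated at r/theta
    """
    dx, dy = x - x0, y - y0
    r = np.hypot(dx, dy)
    sel = (r > 0) & (r < theta)                  # galaxies inside the aperture
    phi = np.arctan2(dy[sel], dx[sel])
    # tangential ellipticity relative to the aperture centre
    e_t = -(e1[sel] * np.cos(2.0 * phi) + e2[sel] * np.sin(2.0 * phi))
    q = Q(r[sel] / theta)
    ww = w[sel]
    num = np.sqrt(2.0) * np.sum(e_t * ww * q)
    den = np.sqrt(np.sum((e1[sel]**2 + e2[sel]**2) * ww**2 * q**2))
    return num / den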

3.3 Validity of the $M_{{\rm ap}}$ concept for our data

If $M_{{\rm ap}}$ is evaluated close to the border of an image, or on a data field with Swiss-cheese topology due to the masking of bright stars, then the aperture covers an "incomplete'' data field. The returned value of $M_{{\rm ap}}$ therefore no longer gives a result in the sense of its original definition in S96, which was a measure related to the filtered surface mass density inside the aperture. Yet it is still a valid measure of the tangential shear inside the aperture, including the S/N estimate in Eq. (13), and can thus be used for the detection of mass concentrations.

Since the number density of background galaxies inside an aperture is not a constant over the field due to the masking of brighter stars (and the presence of the field border), we have to check for possible unwanted effects. As long as the holes in the galaxy distribution are small compared to the aperture size, and as long as their number density is small enough so that no significant overlapping of holes takes place, the effects on the S-statistics are negligible (see Fig. B.4). In fact, the decreased number density just leads to a lowered significance of the peaks detected in such areas, without introducing systematic effects.

If the size of the holes becomes comparable to the aperture, spurious peaks appear in the S-map at the position of the holes. This is because the underlying galaxy population changes significantly when the aperture is moved to a neighbouring grid point. When such affected areas were present in our data, we excluded them from the statistics and masked them in the S-maps, even though these spurious peaks are typically not very significant ( ${\sim}2\sigma$). Our threshold for not evaluating the S-statistics at a given grid point is reached if the effective number density of galaxies in the affected aperture is reduced by more than 50% due to the presence of holes (or the image border). Spurious peaks become very noticeable if the holes cover about 80% of the aperture. This is rarely the case for our data unless the aperture size is rather small (2'), or a particular star is very bright. We conclude that our final statistics is free from any such effects.
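In practice this criterion amounts to comparing the galaxy count inside an aperture with the expectation from the field-averaged number density, e.g. along the following lines (a sketch with hypothetical variable names, not our actual implementation):

import numpy as np

def aperture_is_valid(n_in_aperture, theta, n_field, min_fraction=0.5):
    """Skip a grid point if masking or the field border removes more than
    half of the effective galaxy number density inside the aperture.

    n_in_aperture : galaxies actually found inside the aperture
    theta         : aperture radius (arcmin)
    n_field       : mean galaxy number density of the field (arcmin^-2)
    """
    expected = n_field * np.pi * theta**2
    return n_in_aperture >= min_fraction * expected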

  
3.4 Filter functions

As expressed in (6), an optimal filter function should resemble the tangential shear profile. In the following such a filter Q(x) is constructed, assuming that the azimuthally averaged shear dependence is caused by an NFW density profile (Navarro et al. 1997) of the lensing mass concentration. We set $x:=\vartheta/\theta$, with x being the projected angular separation $\vartheta $ on the sky from the aperture centre, in units of the aperture radius $\theta$. By varying $\theta$, shear patterns, and thus mass concentrations, of different angular extent can be detected.

Wright & Brainerd (2000) and Bartelmann (1996) derived an expression for the tangential shear of the universal NFW profile. Based on their finding we can construct a new filter function $Q_{\rm NFW}$ over the interval $x\in [0,1]$, having the shape

 \begin{displaymath}Q_{\rm NFW}(x) =
\left\{
\begin{array}{ll}
\frac{4(3y^2-2)}...
...frac{y}{2}+\frac{2}{1-y^2}
& \;\;(y > 1).
\end{array}\right.
\end{displaymath} (14)

Here we defined $y:=x/x_{\rm c}$, with $x_{\rm c}$ being a dimensionless parameter changing the width (and thus the sharpness) of the filter over the interval $x\in [0,1]$, in the sense that more weight is put to smaller radii for smaller values of $x_{\rm c}$[*]. This expression is smooth and continuous for y=1, and approaches zero as ${\rm ln}~(y)/y^2$ for $y\gg 1$.


Figure 7: Left panel: $Q_{\rm NFW}$ (solid line), shown with some representations of $Q_{\rm TANH}$. The exponential cut-off E(x) for small and large radii is not introduced in this plot in order to show the differences between the two filter types better. As can be seen, the $Q_{\rm TANH}$ filter is a very good approximation for $Q_{\rm NFW}$, giving slightly more weight to smaller radii. Right panel: $Q_{\rm TANH}$ with the cut-off introduced by E(x) at both ends, plotted against the radial coordinate $\vartheta $. The dashed line ($\theta =1$) can be compared directly to the dash-dotted line in the left panel.

Due to the mathematical complexity of $Q_{\rm NFW}$, the calculation of the S-statistics is rather time consuming for a field with ${\sim}10^4$ galaxies. We therefore introduce an approximating filter function with a simpler mathematical form that produces results as good as those of $Q_{\rm NFW}$. It is given by

 \begin{displaymath}Q_{\rm TANH}(x) = E(x)\;\frac{{\rm tanh}~(x/x_{\rm c})}{x/x_{\rm c}},
\end{displaymath} (15)

having the 1/x dependence of a singular isothermal sphere for $x\gg x_{\rm c}$. The hyperbolic function, having a $\propto x$ dependence for small x, absorbs the singularity at x=0 and approaches 1 for growing values of x. The pre-factor E(x) is a box filter, independent of $x_{\rm c}$, with exponentially smoothed edges. It reads

\begin{displaymath}E(x) = \frac{1}{1+{\rm e}^{6~-~150 x}+{\rm e}^{-47~+~50 x}}
\end{displaymath} (16)

and lets Q drop to zero for the innermost and outermost 10% of the aperture, while leaving the rest largely unaffected (see right panel of Fig. 7). It is introduced because both $Q_{\rm NFW}$ and $Q_{\rm TANH}$ (without this pre-factor) are zero neither at the centre nor at the edge of the aperture. This cut-off is very similar to that introduced in S96 for a different radial filter profile. It suppresses stronger fluctuations that would arise when galaxies carrying significant non-zero weight enter (or leave) the aperture. It also avoids assigning a large weight to a few galaxies at the aperture centre, which can likewise lead to significant fluctuations in the S-statistics when evaluated on a grid. The effect of E(x) is rather mild, though, since one aperture usually covers several hundred galaxies, unless it is of very small size, in which case E(x) becomes important.

We thus have a filter function based upon the two-dimensional parameter space $(\theta~,x_{\rm c})$. The differences between $Q_{\rm TANH}$ and $Q_{\rm NFW}$ are indistinguishable in the noise once applied to real data, so that we do not consider $Q_{\rm NFW}$ henceforth.
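Equations (15) and (16) can be coded up directly. The sketch below simply evaluates $Q_{\rm TANH}$ as written above; the default value of $x_{\rm c}$ is an arbitrary illustrative choice, not a survey setting. Such a function can be passed to the discrete estimator of Sect. 3.2, e.g. as Q = lambda x: Q_tanh(x, x_c).

import numpy as np

def E(x):
    """Exponential box filter of Eq. (16); suppresses the innermost and
    outermost ~10% of the aperture."""
    return 1.0 / (1.0 + np.exp(6.0 - 150.0 * x) + np.exp(-47.0 + 50.0 * x))

def Q_tanh(x, x_c=0.15):
    """Filter function of Eq. (15); x is the distance from the aperture
    centre in units of the aperture radius, and x_c sets the filter width."""
    x = np.asarray(x, dtype=float)
    y = x / x_c
    # tanh(y)/y is regular at y = 0 (limit 1), so guard the division
    q = np.where(y > 0.0, np.tanh(y) / np.where(y > 0.0, y, 1.0), 1.0)
    return E(x) * q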

This is not the first time that $M_{{\rm ap}}$ filters following the tangential shear profile have been proposed or used. We have already utilised the filter in Eq. (15) to confirm a series of luminosity-selected galaxy clusters (Schirmer et al. 2004). Before that, Padmanabhan et al. (2003) approximated $Q_{\rm NFW}$ with

 \begin{displaymath}Q_{\rm PAD}(x) = \frac{2\;{\rm ln}~(1+x)}{x^2}- \frac{2}{x(1+x)}-1/(1+x)^2,
\end{displaymath} (17)

which was later on modified by Hennawi & Spergel (2005). They multiplied (17) with a Gaussian of a certain scale radius in order to suppress the effects of cosmic shear, which become dominant at larger radii. Even though the mathematical descriptions are different, the latter two filters are in effect very similar to (15), and we could not find any of them superior to the others based on our rather inhomogeneous data set. Hence, we do not consider them for the rest of this work. The validity of our approach has recently been confirmed by Maturi et al. (2006), who also use a filter following the tangential shear profile, individually adapted for each field to minimise the effect of cosmic shear. Based on the same data as we use in the present paper, they find that our filter defined in Eq. (15) yields results very similar to their optimised filter, which means that the lensing effects of the large scale structure in our rather shallow survey are not very dominant.

Differences in the efficiency of such "tangential'' filters are thus expected to arise only for very deep surveys, and/or in the case of high redshift clusters (z=0.6 and higher, to which our survey is not sensitive). In all other cases they are hardly distinguishable from each other, since the noise in the images and the deviations from the assumed radial symmetry of the shear field and the NFW profile are dominant. Thus we consider the $Q_{\rm TANH}$ filter to be optimally suited for our survey. For a comparison with other filters that do not follow the tangential profile, see the example shown in Fig. B.3.


Figure 8: Expected optimal S/N ratios for NFW dark matter haloes for four different cluster masses (M200) and two different image depths. The mathematical derivation of the S/N for a particular cluster at a given redshift is given in Appendix A. Note that the filter scale is not constant along each of the curves since the shear fields get smaller in angular size with increasing lens redshift. Note also that we used a lower integration limit of $z_{\rm d}=0.2$ for the lensed background galaxies (see Sect. 3.5 for both aspects).

  
3.5 Sensitivity

Figure 8 gives an idea of the sensitivity of our selection method for the $Q_{\rm TANH}$ filter (see Appendix A for mathematical details). This plot was calculated taking into account various characteristics of our survey and analysis. First, we introduced a maximum aperture radius of 20 arcmin to reflect the finite field of view of our fields. This affects (lowers) the S/N of very low redshift (z<0.09) clusters, whose shear fields can be very extended, and leads to a more distinct maximum of the curves. Second, the aperture size is not constant along these curves but limited by the angular distance at which the strength of the tangential shear drops below 1%, merging into the cosmic shear. Lastly, we used a lower redshift cut-off of $z_{\rm d}=0.2$ in order to roughly reflect our selection criteria for the background galaxies (see Fig. 6). The effect of the latter cut is minor: it increases the S/N by about 5% for the low redshift range we probe, as compared to no cut.


Figure 9: Left panel: the normalised PDF for the peaks based on the entire survey area, averaged over all scales and $x_{\rm c}$. Whereas the peak PDF for the S-statistics can be well approximated by a Gaussian, the PDF for the P-statistics is highly non-Gaussian with a broad tail. The $\nu _{\rm p}$ should therefore not be directly interpreted as S/N. The middle (S-statistics) and right panels (P-statistics) show the difference between the observed peak PDF and the PDF obtained from 10 randomisations. The middle plot is magnified by $\times 100$ for $\vert\nu _{\rm s}\vert\geq 4$ for better visualisation. Both these plots also contain the minimum (and mostly negative) peaks, i.e. underdense regions or voids, which accounts for the symmetric appearance. A significant excess of peaks and voids exists in the observed data (as compared to the randomised data) for $\vert\nu _{\rm s}\vert>2$ and $\vert\nu _{\rm p}\vert>3$. The error bars are taken from the randomisations.

As a result, our S-statistics is insensitive to structures with masses of $M_{200}=5\times10^{13}$ $M_\odot$ or less, at all redshifts. In data of average depth ( $n=12\;{{\rm arcmin}}^{-2}$) we can detect mass concentrations of 1, 2 and $4\times10^{14}$ $M_\odot$ out to z=0.10, 0.22 and 0.32, respectively. The same objects would still be seen at z=0.22, 0.34 and 0.46 in the deeper exposures with twice the number density of usable background galaxies.

  
3.6 Introducing the P-statistics

In addition to the S-statistics defined above, we introduce a new measure which we call the peak position probability statistics, hereafter simply referred to as the P-statistics. It tells us whether, at a particular position on the sky, the S-statistics makes significantly more detections across the various filter scales than one would expect in the absence of a lensing signal.

The main idea behind the P-statistics is that a real peak has an extended shear field, i.e. it will be picked up by $M_{{\rm ap}}$ for a larger number of different filter scales. In other words, as the aperture size changes, different samples of galaxies are used, and all of them will yield a signal above the detection threshold (provided the lensing strength is sufficient). On the contrary, a spurious peak mimicked by the noise of the intrinsic galaxy ellipticities is not expected to show such behaviour, thus the P-statistics will prefer a true peak over a false peak. In order to distinguish between individual S/N measurements made with the P- and S-statistics, we will use the terms $\nu_{{\rm p}}$ and $\nu_{{\rm s}}$ henceforth.

We calculate the P-statistics as follows: at every grid point we count in how many of the probed filter configurations $(\theta,x_{\rm c})$ the S-statistics exceeds a threshold of $2.5\sigma$, smooth the resulting map with a length of 40 arcsec, and convert it into a significance $\nu_{\rm p}$ by comparison with the noise level expected in the absence of a lensing signal (see Sect. 3.7).

The P-statistics is still on an experimental basis and its performance has not yet been characterised by means of numerical simulations. Due to its strongly non-Gaussian PDF (Fig. 9), simulations are also needed to calibrate the signal-to-noise values further; so far these cannot be compared directly to those from the S-statistics. As we show in Sect. 4, the sample obtained by means of the P-statistics (the P-sample) is very similar in its properties to the one obtained with the S-statistics (the S-sample). Yet the total overlap between the two is just $\sim$30%, which we address in Sect. 4.7.

There is some arbitrariness in the way we implemented the P-statistics for this work. A further optimisation can be performed based on future simulations. For example, the lower threshold of $2.5\sigma$ for the peaks considered can be decreased or increased. The former would make the P-statistics smoother since more peaks are included, but does not yield any further advantage since it picks up too much noise. Increasing the threshold beyond $3.5\sigma$ significantly reduces the number of peaks entering the statistics. This makes the determination of the noise level unstable, and one starts losing less significant peaks. Instead of choosing a constant cut at 2.5$\sigma$, a dynamic threshold as a function of filter scale would be more appropriate. This is motivated by the fact that the contamination of the S-statistics with noise depends on the aperture size chosen. Maturi et al. (2006) have implemented such a dynamic threshold for their analysis.

Further improvements could be gained by not feeding the P-statistics from the entire $(\theta~,x_{\rm c})$ parameter space. Concentrating on a smaller set of filter scales could yield greater discriminating power. In addition, the chosen smoothing length of 40 arcsec yields the best compromise between smoothing out positional variations in the lensing detections and maintaining the spatial resolving power of the P-statistics. This value appears optimal for our survey, but may well be different for other data sets.
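The sketch below reflects our reading of the procedure outlined in this section: at every grid point, count in how many filter configurations the S-statistics exceeds the $2.5\sigma$ threshold, smooth the counts with a 40 arcsec length, and calibrate them against maps obtained from catalogues with randomised galaxy orientations. It is an illustrative reconstruction only; in particular the smoothing and normalisation details are assumptions and not the exact implementation used for this work.

import numpy as np
from scipy.ndimage import gaussian_filter

def p_statistics(s_maps, s_maps_random, pixel_scale, nu_thresh=2.5,
                 smoothing=40.0 / 60.0):
    """Illustrative P-statistics map (cf. Sect. 3.6).

    s_maps        : array (n_filters, ny, nx) of S-maps over all (theta, x_c)
    s_maps_random : same, but from catalogues with randomised orientations
    pixel_scale   : grid spacing in arcmin; the smoothing length is 40 arcsec
    """
    sigma_pix = smoothing / pixel_scale
    counts = gaussian_filter(
        (s_maps > nu_thresh).sum(axis=0).astype(float), sigma_pix)
    counts_rnd = gaussian_filter(
        (s_maps_random > nu_thresh).sum(axis=0).astype(float), sigma_pix)
    # express the excess over the noise-only expectation in units of its scatter
    return (counts - counts_rnd.mean()) / counts_rnd.std()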

3.7 Validation of the S- and P-statistics

As a consistency check for the concept of the S- and the P-statistics, one can compare the peak probability distribution function (PDF) of the observed data against randomised data sets. The presence of cluster lensing should distort the PDF in the sense that more peaks are detected for higher S/N values (see Miyazaki et al. 2002, for an example).

To this end we created 10 copies of our entire survey catalogue with randomised galaxy orientations, destroying any lensing signal, but keeping the galaxy positions and thus all other data characteristics fixed. We then calculated the PDFs for all local maxima (overdense regions) and minima (underdense regions) of the observations and the randomisations, accumulating the detections from the entire parameter space probed. The middle and right panel of Fig. 9 show the difference between the PDFs of the observed and the randomised data sets. Both PDFs, for the S- and the P-statistics, are significantly skewed, showing an excess of peaks and voids above thresholds of about $\vert\nu _{\rm s}\vert>2$ and $\vert\nu _{\rm p}\vert>3$.
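Randomising the galaxy orientations while keeping all other catalogue properties fixed amounts to rotating each (spin-2) ellipticity by a random angle; a minimal sketch with placeholder array names:

import numpy as np

def randomise_orientations(e1, e2, rng=None):
    """Rotate each galaxy ellipticity by a random angle. This destroys any
    coherent lensing signal while preserving |epsilon| and the galaxy positions."""
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.uniform(0.0, np.pi, size=np.shape(e1))   # spin-2: angles in [0, pi)
    e = (np.asarray(e1) + 1j * np.asarray(e2)) * np.exp(2j * phi)
    return e.real, e.imag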

  
4 Shear-selected mass concentrations

4.1 Selecting the cluster candidates

The way we established our sample of possible mass concentrations ("peaks'') is as follows. The S-statistics is evaluated on a grid over each field for the full range of filter scales $\theta$ and values of $x_{\rm c}$ probed, and peaks exceeding a detection threshold of $4.0\sigma$ are retained. The P-statistics is evaluated on the same grid and obtained as described in Sect. 3.6; for it we likewise use a detection threshold of $4.0\sigma$, which provides us with a similar number of detections as the S-statistics, thus making the comparison between the two samples more meaningful. This is certainly not the most correct choice, but appears reasonable as long as the P-statistics and its performance have not been characterised further.

We show below that the samples obtained with both statistics have very similar properties, and that the two methods complement each other for peaks with low S/N. Hence, hereafter we do not distinguish between the samples drawn from the two statistics unless significant differences occur.

The detections made with both statistics are combined and summarised in Tables B.2 to B.4. For those peaks seen with the S-statistics we report the best matching filter scale, i.e. the one yielding the highest S/N. Column 1 contains numeric labels for the detections, followed by a string indicating with which statistics the detection was made. The third and fourth columns contain the detection significances $\nu_{{\rm p}}$ and $\nu_{{\rm s}}$ (if applicable). Columns five, six and seven carry a classification parameter (see below), a richness estimate of a possible optical counterpart if present, and the distance of the peak from the latter. This is followed by the filter scale and the $x_{\rm c}$ parameter (if applicable), and then the name of the survey field in which the detection was made. Finally, we report the redshift of an optical counterpart if known.

  
4.2 Classification of the peaks detected

In order to characterise the line of sight of a peak in terms of visible matter, we introduce a rough classification scheme based on the R-band images. We apply this scheme to the P- and the S-sample in the same way. To this end, we visually inspected a radius of 2.0 arcmin around the peak position for apparent overdensities of brighter galaxies, as compared to the surrounding field. Even though this is a rather crude approach, it is good enough to tell whether a peak is (or is not) associated with some luminous matter. The radius of 2.0 arcmin has been chosen since we observed for known galaxy clusters that the lensing detection can scatter by up to 1.5 arcmin with respect to the optical centre of the cluster (see Table 1 and Sect. 4.5).

The classes are defined as follows and are based on galaxies taken from the range $R\sim 17{-}22$. Thus these galaxies are not members of the catalogue of lensed background galaxies (see also the left panel of Fig. 6).

Judging from our number counts (Hildebrandt et al. 2006) for the ESO Deep Public Survey data, we expect a number density of about 11 000 galaxies with $R\leq22$ per square degree. Assuming a random distribution of these foreground galaxies and neglecting clustering effects, we obtain a scatter of $\sigma=7.2$ for the total number of galaxies within the 2 arcmin radius. Thus, class 5 objects represent no more than a 2.1$\sigma$ overdensity as compared to the randomised distribution. The significances for the classes are given in parentheses.

We consider classes 1 to 4 to be reliable optical counterparts, and refer to them as bright peaks henceforth. Class 5 objects are rather dubious, and count as dark peaks together with those lines of sight classified as class 6. See Fig. B.5 for an illustration of bright peaks of classes 1-4.

The boundaries between the classes are permeable. For example, if we find an overdensity of 12 galaxies, and 4 or 5 of them stand out from the rest by their brightness and are of elliptical type, the class 5 object becomes a class 4. Similarly, if we find 20 galaxies of similar brightness, but they show a significantly higher concentration than the rest of the sample, the object becomes class 3. On the other hand, if the distance between the mass peak and the centre of the optical peak exceeds 100 arcseconds, we decrease the class by one step. The same holds if the galaxies seen appear to be at redshifts of $\sim$0.3 or higher; we then lower the rank by one, since our selection method becomes less sensitive with increasing redshift. About 20% of our sample were up- or downgraded in this way.

4.3 Spectroscopically "confirmed'' candidates

For 22 out of the 72 bright peaks, we found spectra in the NASA Extragalactic Database[*]. Most of them come from the SDSS[*] or the Las Campanas Distant Cluster Survey (Gonzalez et al. 2001). For 6 of the 22 cases we have only two spectra; these merely indicate a cluster or group nature, but cannot be taken as hard evidence. Those redshifts are marked with an asterisk in Tables B.2 to B.4. For the other 16 peaks the cluster nature was already known or has meanwhile been strengthened, either by spectroscopy, photometric redshifts, or by other photometric means (see e.g. Gladders & Yee 2000, for the red sequence method). Three of them (#043, 053 and 157) turn out to be clusters or groups of galaxies at different redshifts projected on top of each other (see also Sect. 4.4), with the first two being triple systems. For simplicity, we refer to all these objects in the following as "confirmed'' peaks, even though more spectroscopic data have to be obtained for most of them for sufficient evidence.

Whenever spectra were available, they confirmed our assumption of spatial concentrations in all cases. In order to secure the 50 most promising candidates, we recently started a large spectroscopic survey aiming at between 20 and 50 galaxies per target. This will not only pin down the redshifts of the possible clusters, but also allow us to identify further projection effects and in some cases possible physical connections with nearby peaks (e.g. #029, 056, 074, 084, 092, 128, 136, 141, 158). We will report these results in future papers.

The $1\sigma$ redshift range of the peaks confirmed so far is $z = 0.09\dots0.31$. We therefore predominantly probe the lower redshift range of clusters, which is consistent with the theoretically expected sensitivity of our survey (see Kruse & Schneider 1999, and Fig. 8).

We also checked for possible X-ray emission at the position of the peaks, creating statistical stacks from the images of the ROSAT All-Sky Survey (Voges et al. 1999) for classes (1, 2), (3, 4) and (5, 6). Only for the stack made of classes 1 and 2 do we see a signal, coming exclusively from #039 (Abell 901) and #082 (Abell 1364), which are the most prominent clusters in our sample. The other two stacks show no enhanced flux at the combined target positions.

  
4.4 Projection effects, protoclusters and dark peaks

Projection effects are common in shear-selected cluster samples and can represent a significant contamination (Reblinsky & Bartelmann 1999; White et al. 2002). We find three mass concentrations for which we have spectroscopic evidence that they arise from the projection of less massive groups along the line of sight (#043, 053, 157). There are 19 more which, judging from their galaxy distribution and brightness, appear to be groups projected onto (or next to) each other. 10 of them appear very sparse and hence have class 5 assigned, i.e. we count them as dark peaks. All 19 could be at different redshifts and thus physically disconnected. Alternatively, they could form physical entities, so-called protoclusters. Weinberg & Kamionkowski (2002) have analysed the lensing effect of the latter, i.e. massive overdensities that have not yet undergone gravitational collapse. They argue that protoclusters consist of isolated galaxies or sparse groups, and are X-ray underluminous. Using the S-statistics with a generic filter function Q proposed by Schneider et al. (1998), a number density of n=30 lensed galaxies arcmin-2, and a detection threshold of $(\nu_{\rm s})_{\rm {min}}=5$, Weinberg & Kamionkowski (2002) predict that the fraction of such (dark) protoclusters among the lensing detections amounts to $10{-}20\%$. Since our survey is much shallower, we expect that the fraction of protoclusters in our sample is smaller, and that most of the dark peaks seen are noise peaks. We investigate the subject of dark peaks in more detail in Sect. 4.8.

  
4.5 Positional offsets

Table 1 shows that the peaks coincide with the positions of the optical counterparts to within $0.9\pm0.5$ arcmin, independent of the peak classification, and independent of the statistics used (not shown). Similar offsets of ${\sim}1'$ are reported by Wittman et al. (2006) based on a similar search in the Deep Lens Survey data. On the one hand, these offsets are due to statistical noise, since the shear field is obtained from a finite number of lensed galaxies with intrinsic ellipticities. In addition, the shear fields in general deviate from the radial symmetry assumed by the $M_{{\rm ap}}$ filter. On the other hand, in some cases these offsets can arise due to significant substructure in young or still non-virialised clusters. The two largest clusters in our sample, Abell 901 (#039) and Abell 1364 (#082), are good examples. For them, the positions of the weak lensing detections are shifted away from the cD galaxies in the direction of sub-clumps. See also Fig. B.5 for an illustration. Yet another explanation is projection effects, for which the detection is usually made in between the two (or three, as for #053) groups of galaxies.

Table 1: Average angular offsets between the peak and the optical counterpart.

  
4.6 Similarities of the S- and P-sample

We make 91 and 94 detections with the S- and the P-statistics, respectively, having 27 peaks in common. As can be seen from Table B.6, the number of bright and dark peaks is equally balanced between the two methods, both yielding slightly more dark than bright peaks. A few fields exist in which only the S-statistics (e.g. CL1216-1201) or only the P-statistics (e.g. FIELD17_P3) makes detections, but the individual numbers of detections made per field are in general small. The statistics do not show a preference with respect to particular fields. The same holds for the spatial distribution of peaks as a function of position in the detector mosaic (see Fig. 10), or the occurrence of dark peaks as a function of exposure time (see Fig. 13 for a merged plot of the two statistics). Neither do dark peaks appear preferentially in one of the two samples (Fig. 11). The only notable difference in this respect is that the P-statistics detects about 30% more peaks of classes 2 and 3, whereas the S-statistics returns 25% more peaks of class 4 (see Fig. 11).

  
4.7 Small overlap of the S- and P-sample

Most obvious is the small overlap of barely $30\%$ between the two samples, as can be seen from the second column in Tables B.2 to B.4. Since the P-statistics looks at a broad range of filter scales instead of one single scale, it overcomes the instability of the S-statistics against changes in the aperture size $\theta$ and the scaling parameter $x_{\rm c}$. Hence it is capable of giving significance to a peak that otherwise goes unnoticed by the S-statistics. Since we have a minimum detection threshold of $2.5\sigma$ for the peaks entering the calculation of the P-statistics, we expect to find peaks that are not seen by the S-statistics because they never reach the $4\sigma$ selection threshold.


Figure 10: Shown is the spatial distribution of the 158 detected peaks for the WFI field of view. Open symbols indicate the bright peaks with classes 1-4, the filled ones the peaks with classes 5 and 6. The symbol size represents the detection significance. The pattern is indistinguishable from a random distribution, and we also do not see differences for peaks obtained with either the S- or the P-statistics (not shown).


Figure 11: Fraction of detections made with either the S- or the P-statistics alone, or with both.


Figure 12: Left: peaks detected with the S-statistics and those detected with both methods, the latter normalised to the former, as a function of $\nu_{{\rm s}}$. Right: same, but for the P-statistics. It can be seen that peaks detected by only one method are mainly those with the very lowest S/N. The histograms combine bright and dark peaks, as we do not see a difference in this check.

However, there is a drawback to the increased stability gained in this way. Peaks associated with small or weak shear fields will rise above the $2.5\sigma$ level only for a few neighbouring filter scales. Hence they will not be selected with the P-statistics, since we calculate the latter from the full $(\theta~,x_{\rm c})$ parameter space, which dilutes such a signal. Those peaks can be picked up by the S-statistics, though.

Figure 12 shows that the small overlap between the two samples is mainly due to peaks with a low S/N of either $\mbox{$\nu_{{\rm s}}$ }<4.25$ or $\mbox{$\nu_{{\rm p}}$ }<4.5$. For peaks with $\mbox{$\nu_{{\rm s}}$ }>4.75$ the sample overlap roughly doubles ($\sim$60%), and it becomes 100% for $\mbox{$\nu_{{\rm s}}$ }>6$ ($\mbox{$\nu_{{\rm p}}$ }>7.4$). The small overlap observed for the entire samples therefore likely arises from the fact that neither method works efficiently in finding all lensing signals close to the detection threshold. This is expected and has been shown with N-body simulations for the S-statistics, e.g. by Reblinsky et al. (1999) (see also Hennawi & Spergel 2005; Hamana et al. 2004, and Sect. 4.8). The P-statistics has not yet been characterised in this context, but given the great similarities of the S- and the P-sample we expect that the P-statistics has a comparable efficiency.

  
4.8 Detector and survey field biases, dark peaks

To check for systematics concerning the WFI@2.2 detector array, we plotted the positions of all peaks with respect to the array geometry (Fig. 10). It appears that the right half is a bit more crowded than the left part of the detector array. After running a dozen random distributions with the same number of objects we find that this is insignificant. Thus the distribution is random-like for both bright and dark peaks, and does not prefer or avoid particular regions.

Upon counting the bright and dark peaks in the five main data sources of our survey (see Sect. 2.1), we find differences in the ratio between bright and dark peaks (Table B.5). Namely, the ASTROVIRTEL and EIS data, and our own observations, show an excess of $20{-}50\%$ of dark peaks as compared to bright peaks, and are roughly comparable to each other. The EDisCS survey has a factor of 2.1 more dark peaks, but it is also the part of our survey with the shallowest exposures. In contrast, the COMBO-17 data have twice as many bright as dark peaks, but this does not come as a surprise since the S11 and A901 fields are centred on known galaxy clusters with significant sub-structure. If we subtract the known clusters and all detections likely associated with them, we are still left with an "excess'' of 40% for the bright peaks in COMBO-17. Again, this is not implausible since the COMBO-17 fields form by far the deepest part of our survey, which lets us detect more mass concentrations. The latter, however, holds for both bright and dark peaks, as the number of detections per square degree shows (Table B.5).


  \begin{figure}
\includegraphics[width=8.2cm,clip]{5955fg13.ps}
\end{figure} Figure 13: Number of bright peaks (solid line) and dark peaks (dashed line) as a function of exposure time (left) and galaxy number density (right). Shallow exposures with a low number density show more dark peaks than deeper exposures.


  \begin{figure}
\includegraphics[width=8.2cm,clip]{5955fg14.eps}
\end{figure} Figure 14: Left: histogram of the exposure times. The peak at 57 ks represents the Chandra Deep Field South (CDF-S). Right: number density of galaxies in the 58 fields after all filtering steps, leaving a total number of about 710 000 usable galaxies. The distribution reflects the distribution of exposure times shown in the left panel.

In order to check whether the dark peaks might arise from imperfect PSF correction, we compare their occurrence with the remaining PSF residuals in our lensing catalogues (Figs. 4 and 5). We do not find evidence there that imperfect PSF correction gives rise to dark peaks. However, Fig. 13 indicates that short exposure times (less than 10-12 ks) and/or a low number density of galaxies (less than $n\sim13{-}15\;{{\rm arcmin}}^{-2}$) foster the occurrence of dark peaks. This has to be viewed with some caution, in particular because we have only a small number of deep fields (mainly COMBO-17) compared to the shallow ones, and the deep fields are partially centred on known structures. If we remove the known structures from the statistics, we are still left with a smaller fraction of dark peaks in the deep exposures, but the question remains to what extent the particular pointings of those fields introduce a bias. To answer this question empirically, we would need about 10 empty fields of 20 ks exposure time each.

Due to the $M_{{\rm ap}}$ filter we use, and to the large inhomogeneity of our survey, we cannot directly compare the occurrence of dark peaks in our data to existing numerical simulations. Moreover, these simulations usually make significantly more optimistic assumptions about the usable number density of galaxies and the field of view than we could realise with GaBoDS (see Reblinsky & Bartelmann 1999; Hennawi & Spergel 2005; Jain & van Waerbeke 2000, for example). In particular, Hamana et al. (2004) have shown that in their simulations ($n=30\;{{\rm arcmin}}^{-2}$) they expect to detect 43 real peaks (an efficiency of $\sim$60%), scaled to the same area as GaBoDS and drawn with S/N>4 from mass reconstruction maps. A similar number of false peaks appears as well, being either pure noise peaks or peaks with an expected S/N<4 that are pushed over this detection limit. The latter would be labelled as bright peaks in our case. Our absolute numbers are different (72 bright and 86 dark peaks) since we use a very different selection method. Yet, if we interpret our dark peaks as noise peaks, the ratio between our bright and dark peaks is comparable to the ratio between their true and false peaks.

This interpretation, i.e. dark peaks are mostly noise peaks, is strengthened by the fact that with increasing peak S/N the fraction of dark peaks is decreasing (see Table B.6), for both the S- and the P-statistics. However, our observational data base (sky coverage) is too small to tell if this trend, i.e. the dark peaks dying out, continues for higher values of the S/N.

5 Summary and conclusions

In the present paper we have introduced a new type of filter function for the aperture mass statistics. This filter function approximately follows the tangential shear profile created by a radially symmetric NFW density profile, but is given by a much simpler analytic expression. We have shown that it is optimally suited for application to our 19 square degree weak lensing survey conducted with the WFI@2.2 m MPG/ESO telescope. This is a survey of very inhomogeneous depth, in which we expect to be able to detect mass concentrations in the redshift range $z=0.1\dots 0.5$.

We defined the secondary P-statistics, which is calculated from the S-statistics: if the latter finds peaks at the same position for several different filter scales, the P-statistics registers a detection. The samples obtained with the P- and the S-statistics appear very similar, in particular in terms of the fractions of bright and dark peaks. The overlap of the two samples is small for mass peaks with low S/N ($\nu_{\rm s}\sim4.0{-}4.5$) and increases to 100% for more reliable peaks ($\nu_{\rm s}>6$). This reflects the low detection efficiency of both statistics for less significant peaks, but also shows that they complement each other at low S/N. The performance and efficiency of the P-statistics have yet to be investigated more thoroughly by means of simulations, which will also lead to an optimised choice of the parameters from which it is calculated.

The global PDFs for both the S- and P-statistics show clear excess peaks for higher values of S/N as compared to randomised copies of our data set. Thus the presence of lensing mass concentrations in our survey data is confirmed.

We introduced a classification scheme in order to associate the hypothetical mass peaks detected with possible luminous matter. Of the 158 detections made with the combined S- and P-statistics, 72 (46%) appear to have an optical counterpart. For 22 of those we found spectra in the literature, confirming the above-mentioned redshift range and that a mass concentration indeed exists along those particular lines of sight. We matched all detections against the ROSAT All Sky Survey, finding that only the two most prominent clusters show X-ray emission. Statistically stacking the other fields did not reveal any excess X-ray flux for the remaining mass concentrations.

For a smaller number of the peaks we have spectroscopic evidence that they are due to projection effects. We expect that our ongoing spectroscopic follow-up survey will uncover more such projection cases, together with confirming a very significant fraction of the remaining bright peaks. In a future paper we will also compare this shear-selected sample with an optically selected sample using matched-filter techniques.

We gained some insight into the subject of dark peaks, which are not favoured by either the S- or the P-statistics. They appear preferentially in shallow data with a small number density of galaxies, indicating that a large fraction of them could be due to noise (i.e. intrinsic galaxy ellipticities), or that they are statistical flukes (see e.g. von der Linden et al. 2006). Nevertheless, we also observe a significant fraction of dark peaks in our deep fields, but these statistics cannot be interpreted unambiguously since those fields are biased towards clusters with significant sub-structure, and we have only a very small number of them. Real physical objects such as very underluminous clusters or protoclusters are expected to contribute to shear-selected cluster samples (see e.g. Weinberg & Kamionkowski 2002), although we estimate that their fraction is small. Lastly, at least on the mass scale of galaxies, the past year has seen astonishing examples of objects containing several $10^8\;M_\odot$ of neutral hydrogen that nevertheless appear entirely dark in the optical, as if no star formation had ever taken place in them (see the Arecibo Galaxy Environments Survey, and therein e.g. Auld et al. 2006; Minchin et al. 2005, 2006). Whether similar objects can still exist on the cluster mass scale and thus give rise to dark peaks in shear-selected cluster samples is currently unclear.

Finally, we would like to stress once more that the Garching-Bonn Deep Survey was carried out with a telescope of the 2 m class. Most numerical simulations performed so far are much more optimistic in terms of the number density reached ($n\sim 30\;{{\rm arcmin}}^{-2}$) and correspond to surveys that are currently being conducted (or will be in the near future) with 4 m and 8 m class telescopes, such as SUPRIME33 (Miyazaki et al. 2005) or the CFHTLS[*]. One noteworthy exception will be KIDS[*] (the Kilo Degree Survey), obtained with OmegaCAM@VST and covering 1500 square degrees starting in 2007.

Acknowledgements
This work was supported by the BMBF through the DLR under the project 50 OR 0106, by the BMBF through DESY under the project 05AE2PDA/8, and by the DFG under the projects SCHN 342/3-1 and ER 327/2-1. Furthermore, we appreciate the support given by ASTROVIRTEL, a project funded by the European Commission under FP5 Contract No. HPRI-CT-1999-00081. The authors thank Ludovic van Waerbeke (UBC), and Nevin Weinberg and Marc Kamionkowski (both Caltech) for fruitful discussions, and the anonymous referee for their very useful comments, which significantly improved this paper.

References

 

  
Online Material

  
Appendix A: Expected S/N for an NFW halo

We derive the S/N expected for a radially symmetric NFW dark matter halo, using a flat cosmology with $\Omega_0=0.3$, $\Omega_\Lambda=0.7$, $h=0.7$ and $\sigma_8=0.85$.

According to S96, the S/N for $M_{{\rm ap}}$ for a cluster at the origin of the coordinate system evaluates as

 
$\displaystyle S(\theta) = \sqrt{\frac{4 \pi n}{\sigma_\varepsilon^2}}\;
\frac{\int_0^{\theta}{\rm d}\vartheta~\vartheta~ \gamma_{\rm t}(\vartheta)~Q(\vartheta)}
{\sqrt{\int_0^{\theta}{\rm d}\vartheta~\vartheta~ Q^2(\vartheta)}}\cdot$     (A.1)

Here we assume radial symmetry (yielding a factor of $2 \pi$) and write $\theta$ for the aperture radius and $\vartheta$ for the distance from the centre of the aperture. $n$ is the number density of galaxies, and $\sigma_\varepsilon$ is the dispersion of the modulus of the galaxy ellipticities.

The tangential shear for a radially symmetric NFW profile was given by Wright & Brainerd (2000) as

 \begin{displaymath}\gamma_{\rm {t}}(x) = \frac{r_{\rm s} \delta_{\rm c} \rho_{\rm c}}{\Sigma_{\rm {cr}}} \;g(x)
\end{displaymath} (A.2)

where
g(x) = $\displaystyle \left\{\begin{array}{ll}
g_<(x) & \;\;(x < 1) \\[0.3cm]
\frac{10}{3} - 4\;{\rm {ln}}~2 & \;\;(x = 1) \\[0.15cm]
g_>(x) & \;\;(x > 1)
\end{array}\right.$ (A.3)
x = $\displaystyle \frac{D_{{\rm d}}\; \vartheta}{r_{\rm s}}$ (A.4)
$\displaystyle \rho_c$ = $\displaystyle \frac{3}{8 \pi G}\;H_0^2\;[\Omega_0 (1+z)^3 + \Omega_\Lambda]$ (A.5)
$\displaystyle \Sigma_{\rm {cr}}$ = $\displaystyle \frac{c_{\rm L}^2}{4 \pi G}\;
\frac{1}{D_{{\rm d}}}\left<\frac{D_{{\rm ds}}}{D_{{\rm s}}}\right>^{-1}\;.$ (A.6)

Here, $c_{\rm L}$ is the speed of light and $G$ is Newton's constant. The NFW concentration parameter $c$ is a function of cosmology, the mass of the cluster and its redshift. We calculate $c$ using the cens routine kindly provided by J. Navarro[*]. $r_{\rm s}$ and $\delta_{\rm c}$ are the NFW scale radius and characteristic over-density of the cluster. $\Sigma_{\rm {cr}}$ is the critical surface mass density, where we write $D_{{\rm d}}$, $D_{{\rm s}}$ and $D_{{\rm ds}}$ for the angular diameter distances between the observer and the lens, between the observer and the source, and between the lens and the source, respectively. For a flat cosmology, they are defined as

\begin{displaymath}D(z_1,z_2) = \frac{c_{\rm L}}{H_0}\;\frac{1}{1+z_2}
\int_{a_2}^{a_1}{\rm d}a \left[a\;\Omega_0 + a^4\;\Omega_\Lambda\right]^{-1/2}\;,
\end{displaymath} (A.7)

with z1<z2 and a=1/(1+z). To take into account the redshift distribution of the lensed galaxies, we assume that those follow the normalised distribution given in Brainerd et al. (1996), with parameterisation

 \begin{displaymath}p(z) = \frac{3}{2 z_0}\;\left(\frac{z}{z_0}\right)^2\;{\rm {exp}}
\left[-\left(\frac{z}{z_0}\right)^{3/2}\right]\;.
\end{displaymath} (A.8)

The ratio $D_{{\rm ds}}/D_{{\rm s}}$ is averaged over this redshift distribution, starting with the lens redshift $z_{\rm {d}}$ as a lower integration limit. The latter was chosen because galaxies with $z<z_{\rm {d}}$ are unlensed and largely removed from our catalogues by appropriate detection thresholds and cut-offs[*].
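As an illustration, the averaged distance ratio can be evaluated numerically as sketched below (this is not the authors' code). The sketch uses the cosmology of Appendix A together with Eqs. (A.7) and (A.8); whether $p(z)$ is renormalised over $z>z_{\rm d}$ is not specified in the text, so the sketch simply lets foreground galaxies contribute zero, which is one possible reading.

import numpy as np
from scipy import integrate

C_KMS, H0 = 299792.458, 70.0   # speed of light [km/s], Hubble constant [km/s/Mpc] (h = 0.7)
OM, OL = 0.3, 0.7              # Omega_0, Omega_Lambda (flat cosmology of Appendix A)

def ang_dist(z1, z2):
    """Angular diameter distance of Eq. (A.7) for a flat cosmology [Mpc], z1 < z2."""
    a1, a2 = 1.0 / (1.0 + z1), 1.0 / (1.0 + z2)
    integral, _ = integrate.quad(lambda a: (a * OM + a**4 * OL) ** -0.5, a2, a1)
    return C_KMS / H0 / (1.0 + z2) * integral

def p_z(z, z0=0.5):
    """Normalised source redshift distribution of Eq. (A.8)."""
    return 1.5 / z0 * (z / z0) ** 2 * np.exp(-(z / z0) ** 1.5)

def mean_dds_over_ds(zd, z0=0.5, zmax=5.0):
    """<D_ds/D_s>, averaged over p(z) for sources behind the lens (z > z_d);
    galaxies with z < z_d contribute zero (assumption, see lead-in)."""
    val, _ = integrate.quad(
        lambda z: p_z(z, z0) * ang_dist(zd, z) / ang_dist(0.0, z), zd, zmax)
    return val

print(mean_dds_over_ds(zd=0.3, z0=0.5))   # e.g. a lens at z_d = 0.3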

The functional expression for $g(x)$ is identical to the one already given in Eq. (14) and contains the shape of the shear profile. Finally, fixing the remaining numerical parameters provides us with all the information needed to calculate the S/N. From our data we have $\sigma_\varepsilon=0.48$, and we assume two different image depths based on empirical findings: a shallow one with $n=12\;{{\rm arcmin}}^{-2}$ and $z_0=0.4$, and a deeper one with $n=24\;{{\rm arcmin}}^{-2}$ and $z_0=0.5$.

The S/N then evaluates as

\begin{displaymath}S(\theta) = \sqrt{\frac{4 \pi n}{\sigma_\varepsilon^2}}\;
\frac{r_{\rm s}\,\delta_{\rm c}\,\rho_{\rm c}}{\Sigma_{\rm {cr}}}\;
\frac{\int_0^{\theta}{\rm d}\vartheta~\vartheta~ g(D_{{\rm d}}\,\vartheta/r_{\rm s})~Q(\vartheta)}
{\sqrt{\int_0^{\theta}{\rm d}\vartheta~\vartheta~ Q^2(\vartheta)}}\cdot
\end{displaymath} (A.9)

Appendix B: Further tables and figures


  \begin{figure}
\par\includegraphics[width=18cm]{5955fg15.ps}
\end{figure} Figure B.1: Objects remaining in the lensing catalogue after all filtering steps. Only fainter background galaxies are kept; brighter sources, spurious detections, stars and highly elliptical objects such as asteroid tracks are largely absent from the catalogue. The field of view is about 3 arcmin.

Table B.1: The 58 fields used for this work.


  \begin{figure}
\includegraphics[width=18cm]{5955fg16.ps}
\end{figure} Figure B.2: Typical PSF anisotropy. Upper row: $\varepsilon _{1,2}$ scatter plot for stars before and after PSF correction. Lower left: PSF anisotropy pattern before correction. This plot is a more intuitive representation of the left panel above. Lower right: PSF residuals after a polynomial fit was used to correct for the anisotropy. No coherent shear signal is left.


  \begin{figure}
\includegraphics[width=18cm]{5955fg17.eps}
\end{figure} Figure B.3: Tangential shear and S-profile for the two largest clusters in the survey data. In the upper row the tangential shear is shown, with the TANH filter that yielded the highest S/N overlaid as a solid line. For better comparison the amplitude of TANH was scaled so that it best fits the tangential shear. All data points are mutually independent. The lower row shows the S-profile of the two clusters for different filters Q. The NFW filter is plotted for 10 different values of $x_{\rm c} \in [0.01,1.5]$, to show the scatter delivered by $x_{\rm c}$. The filters defined by Padmanabhan et al. (2003) and Hennawi & Spergel (2005) (not shown) deliver very similar results. For comparison we plot various other types of filter functions introduced in the literature. POLY is the polynomial filter defined by Schneider et al. (1998), mainly as a new measure for cosmic shear rather than cluster detection. The filters S96 were defined by Schneider (1996). EXP is based on the difference of two Gaussians of different width (Schirmer 2004). Clearly, all of them yield smaller S/N values than those following the tangential shear profile. If one of those filters yields a higher S/N for a cluster than TANH in our data, then this is marked accordingly in Tables B.2 and B.3.
Note: Although the tangential shear is smaller for A901 than for S11, the S/N is higher due to the larger number density of galaxies with measured shapes ($n=15\;{\rm arcmin}^{-2}$ for S11, and $n=24\;{\rm arcmin}^{-2}$ for A901).


  \begin{figure}
\center{\includegraphics[width=18cm]{5955fg18.eps} }\end{figure} Figure B.4: Upper right: S-map for randomised galaxy orientations and positions. Upper left: The same S-map, evaluated after 10 randomly positioned holes with a radius of 90% of the filter scale were cut into the data field ( lower left). Artificial features show up in the S-map at the positions of the holes, the latter being exaggerated in number and size for better visualisation. Lower right: True galaxy distribution of the field from which the galaxies were drawn for this test. The largest hole is caused by an 8th magnitude star. We conclude that holes in the data fields are in general not a cause of concern for our analysis.

Table B.2: Shear-selected mass concentrations (part 1). The first column contains a label for the peak, and the second one indicates whether the detection was made with the S- or the P-statistics (or both). The next two columns contain the corresponding significances. The classification shows whether an overdensity of galaxies is found along the line of sight, with class 1 meaning a very obvious overdensity and class 6 no overdensity. The richness indicates how many galaxies were found as compared to the average density in this field, and we give the distance between the peak and the optical counterpart. Finally, we list the name of the survey field in which the detection was made and, where available, a redshift for the counterpart. An asterisk behind a redshift indicates that it is based on the measurement of fewer than three galaxies.

Table B.3: Shear-selected mass concentrations (part 2).

Table B.4: Shear-selected mass concentrations (part 3).

Table B.5: Bright (classes 1-4) and dark (classes 5-6) peaks for the various survey data sources. The columns contain the data source, the number of bright and dark peaks, the ratio between dark and bright peaks, the average exposure time and image seeing, the area covered, and the number of bright and dark peaks per unit area. For the COMBO-17 field we give in parentheses the corresponding values when the known structures are subtracted.

Table B.6: Number of bright (classes 1-4) and dark (5-6) peaks and their ratios for the S- and the P-statistics.


  \begin{figure}
\includegraphics[width=18cm]{5955fg19.eps}
\end{figure} Figure B.5: Typical appearance of bright clusters of various classes, as defined in Sect. 4.2. Note that the resolution is in general not high enough to distinguish smaller member galaxies from stars. The field of view is 4.3 arcmin.



Copyright ESO 2007