We used SExtractor (Bertin & Arnouts 1996) with the WEIGHT_IMAGE option and WEIGHT_TYPE = MAP_WEIGHT for the source detection and extraction on the images. The weight maps described above were used to account for the spatially dependent noise pattern in the co-added images, and in particular to pass the local noise level of the data to the SExtractor program.
To use SExtractor, three parameters have to be set: i) the detection threshold t, i.e. the minimum signal-to-noise ratio a pixel must reach to be regarded as a detection; ii) the number n of contiguous pixels exceeding this threshold; iii) the filtering of the data prior to detection (e.g. with a top-hat or a Gaussian filter). We used a Gaussian filter; the choice of its width is described below.
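For reference, the following minimal sketch shows how these three choices, together with the weight map, translate into SExtractor configuration keywords. The keyword names are standard SExtractor parameters; the file names are placeholders rather than the actual configuration used for the FDF.

```python
# Sketch only: keyword names are standard SExtractor configuration parameters,
# file names are placeholders, and the numbers correspond to the I-band values
# adopted further below.
sextractor_config = {
    "DETECT_THRESH":  1.7,            # t: threshold in units of the local background RMS
    "DETECT_MINAREA": 3,              # n: minimum number of contiguous pixels above t
    "FILTER":         "Y",            # apply pre-detection filtering
    "FILTER_NAME":    "gauss.conv",   # Gaussian convolution kernel (width discussed below)
    "WEIGHT_TYPE":    "MAP_WEIGHT",   # use the co-addition weight map ...
    "WEIGHT_IMAGE":   "weight.fits",  # ... to set the local noise level pixel by pixel
}

with open("detect.sex", "w") as cfg:  # hypothetical configuration file name
    for key, value in sextractor_config.items():
        cfg.write(f"{key:15s} {value}\n")
```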
We varied these parameters to maximize the number of source detections while minimizing false detections. The following procedure, described here for the I-band data, was used for all filters. We first considered only those pixels in the field where the exposure time equaled the total exposure time (the weight map took care of the correct scaling of the RMS for the full field later on) and called this part of the data the "central field".
If there were no objects in the field and if the data reduction
resulted in a perfectly flat sky we would expect the histogram of the
pixel-values to be a Gaussian, with a width reflecting the
photon-noise and the correlated noise of the data reduction and
coaddition procedure. The actual histogram of pixel-values of the
central-field is shown in Fig. 3 (upper panel, thin
line). Even ignoring the wings, the histogram is asymmetric around its
center at zero. This stems from the non-uniformities of the sky background,
which amount to about 1% (see Sect. 4.1). Therefore, we
determined the sky-curvature on large scales and subtracted
a 2-dimensional fit to this surface from
the original data. The corrected histogram of
pixel-values (Fig. 3, upper panel, thick curve) is
now symmetric around its center at zero and the left-hand part is well
described by a Gaussian (with a width of 0.01295 ADU/s). The right-hand
part shows an excess above 0.015 ADU/s, which is due to
the objects in the field (see difference curve in
Fig. 3, scaled up by a factor 10).
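As an illustration of this correction, the following numpy/scipy sketch fits a low-order two-dimensional polynomial to the large-scale sky and a Gaussian to the negative wing of the corrected pixel-value histogram; the polynomial order, the binning and the function names are assumptions, not the actual values and routines used.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sky_surface(img, order=2):
    """Least-squares fit of a low-order 2-D polynomial to the large-scale sky.
    In practice the fit would be done on an object-clipped or heavily smoothed
    version of the image; the polynomial order is an assumption."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x / nx - 0.5                      # normalised coordinates
    y = y / ny - 0.5
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    design = np.vstack([t.ravel() for t in terms]).T
    coeff, *_ = np.linalg.lstsq(design, img.ravel(), rcond=None)
    return (design @ coeff).reshape(img.shape)

def negative_wing_gaussian(values, nbins=400):
    """Fit a Gaussian to the negative wing of the pixel-value histogram:
    objects only add positive flux, so the left wing traces the pure noise."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    gauss = lambda v, amp, sigma: amp * np.exp(-0.5 * (v / sigma) ** 2)
    left = centers <= 0.0
    (amp, sigma), _ = curve_fit(gauss, centers[left], hist[left],
                                p0=[hist.max(), values.std()])
    return amp, abs(sigma)

# usage on the "central field" (pixel values in ADU/s):
# corrected = central - fit_sky_surface(central)
# amp, sigma = negative_wing_gaussian(corrected.ravel())   # sigma ~ 0.013 ADU/s in I
```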
We have checked that it does not make any
difference for the detection and the photometry of reliable objects
whether the procedure is applied to the original or to the
corrected data: for each object, the difference between
the magnitude estimates of these two cases is smaller than the
assigned magnitude RMS-error. This implies that we can carry out the
adjustment of optimum SExtractor parameters in the
corrected version of the data.
To optimize the pre-detection filtering procedure we made the
following numerical experiment. We generated a "negative version" of an image by multiplying it by -1 and a "randomized version" by randomly assigning the measured pixel values to new positions (the weights of the weight map are re-localized in the same way). With no pre-detection filtering and using t = 1.7 and n = 3, SExtractor finds
about 9000 objects in the original image, 5600 in the negative one and
1100 in the randomized one. The fact that many more objects are
detected in the negative image than in the randomized one indicates
that correlated noise is present in both the negative and the positive
images. Therefore filtering must be used to specifically suppress
the small-scale noise. It is possible that large-scale noise is still present,
but there is no way to remove such a component.
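The construction of the two test images can be summarised in a few lines of numpy; the function names and the random seed are arbitrary, and the object counts quoted in the final comment are those given above.

```python
import numpy as np

rng = np.random.default_rng(1)            # arbitrary seed

def negative_version(img):
    """Sign-flipped image: symmetric pure noise is unchanged in its statistics,
    real objects vanish, but correlated noise peaks survive with flipped sign."""
    return -img

def randomized_version(img, weight):
    """Shuffle the measured pixel values to new positions, destroying all spatial
    correlations while keeping the value distribution; the weights of the weight
    map are re-localized in exactly the same way."""
    perm = rng.permutation(img.size)
    return (img.ravel()[perm].reshape(img.shape),
            weight.ravel()[perm].reshape(weight.shape))

# With no pre-detection filtering, t = 1.7 and n = 3, SExtractor found about
# 9000, 5600 and 1100 objects on the original, negative and randomized image,
# respectively (see text).
```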
By varying the width of the Gaussian filter we determined an optimal value for this width. With n = 3 and t = 1.7 the number of objects detected on the negative image then dropped to the expected random number, nearly zero. Of course, once the filter width is fixed, one is still left with the freedom of trading n for t, i.e. of increasing the number of pixels above the threshold while decreasing the threshold value at the same time.
We decided to keep n small, in order to obtain an
unbiased detection of faint point sources. This choice allows us to
exploit the excellent seeing of the I-band data, where the FWHM is only
2.5 pixels.
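The detection criterion itself (Gaussian pre-filtering, a per-pixel signal-to-noise threshold t and a minimum of n connected pixels) can be sketched with scipy.ndimage as follows. The treatment of the noise after filtering is simplified compared to what SExtractor does internally, so this illustrates the criterion rather than the actual SExtractor run.

```python
import numpy as np
from scipy import ndimage

def detect_sketch(img, rms_map, sigma_filter, t=1.7, n=3):
    """Illustrative detection: Gaussian pre-filtering, threshold at t times the
    local RMS, and a minimum of n connected pixels. The per-pixel S/N after
    filtering is only approximated here."""
    filtered = ndimage.gaussian_filter(img, sigma_filter)
    snr = filtered / rms_map                         # approximate per-pixel S/N
    above = snr > t
    labels, nobj = ndimage.label(above)              # 4-connectivity by default
    sizes = ndimage.sum(above, labels, index=np.arange(1, nobj + 1))
    accepted = np.flatnonzero(sizes >= n) + 1        # labels of accepted detections
    return labels, accepted

# With a well-chosen sigma_filter the same call applied to the negative image
# should return (almost) no accepted labels, which is the criterion used in
# the text to fix the filter width, n and t.
```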
Now we illustrate our procedure more quantitatively: we ran SExtractor (for each choice of the filter width, n and t) on the positive, the
negative and the randomized images. We registered all pixels which
were covered by objects, removed them from the pixel-value statistics
and normalized the corresponding pixel-value histogram to the total
number of pixels in the central field; we call the result the
"background histogram". We expect that for good source extraction
parameters, the background histograms will look like a Gaussian, more
precisely like that Gaussian derived by fitting the negative wing of
the corrected data distribution, which we call the
"optimum background histogram" below. The difference (magnified by a
factor of 10) from that optimum background histogram
is shown in the middle panel of
Fig. 3 for n=3, t = 1.7,
for detection on the positive (solid) and
negative (dotted, for negative ADU/s only) image. The negative excess of these histograms below zero is due to false detections caused by correlated noise. As the filter width is increased, these false detections drop dramatically once the adopted width is reached. Then, n = 3 and t = 1.7 were fixed
by requiring no false detections on the negative image, i.e. no
detections due to correlated noise. We finally ran SExtractor with this set of parameters on the positive image and obtained the background histogram; the difference from the optimum background histogram is shown in the lower panel of Fig. 3 (dotted histogram, magnified by a factor of 10). The difference is indeed very small.
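The construction of the background histogram and its comparison with the optimum background histogram can be sketched as follows; the segmentation map used to exclude object pixels is assumed to come, e.g., from SExtractor's SEGMENTATION check-image, and the binning is arbitrary.

```python
import numpy as np

def background_histogram(corrected, segmap, bins):
    """Histogram of the pixels NOT assigned to any detected object
    (segmap > 0 marks object pixels), normalised to the total number of
    pixels in the central field."""
    sky = corrected[segmap == 0]
    counts, _ = np.histogram(sky, bins=bins)
    return counts / corrected.size

def optimum_background_histogram(bins, sigma):
    """Expected fraction of pixels per bin for pure Gaussian noise with the
    width fitted to the negative wing of the corrected pixel-value
    distribution (the 'optimum background histogram' of the text)."""
    centers = 0.5 * (bins[1:] + bins[:-1])
    pdf = np.exp(-0.5 * (centers / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return pdf * np.diff(bins)

# residual = background_histogram(corrected, segmap, bins) \
#            - optimum_background_histogram(bins, sigma)
# An excess of `residual` at negative pixel values flags false detections
# caused by correlated noise (middle panel of Fig. 3).
```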
Using the above parameters (Gaussian pre-detection convolution, n = 3 and t = 1.7), obtained from the optimum pre-detection filtering and the requirement of no detections on the negative image, we find that the extended wing in the ADU histogram due to the presence of objects disappears and that the histogram becomes symmetric and Gaussian (see Fig. 3, bottom panel). This demonstrates that with this choice of parameters we are optimally extracting all objects above the noise level, without getting significant false detections. The adopted parameters give a (total) photometric accuracy better than 5%.
The optimum parameters were finally used to run SExtractor on the (positive and negative) images of the total FDF. We found about 6900 objects on the positive image and less than a handful of objects on the negative version of the entire I-band image. All these spurious detections occurred near discontinuities of the S/N level outside the central field and were caused by the not perfectly flat sky, which makes some of the discontinuities more pronounced than they should be according to the photon noise and the corresponding weight map.
The same analysis described for the I-band image was carried out for
the other filters. We emphasize here that our extraction
procedure was optimized to maximize the number of real detections for
reliable photometry and hence reliable photometric
redshifts rather than to study
galaxy number counts at the faintest limits.
For the optical bands, we used the same extraction parameters. For the NIR data we opted for a filter width (in pixels) chosen to match the pixel size of the original NIR data, which is roughly 1.5 times the pixel size of FORS, and for t = 2.0 and n = 5 for the J band and t = 1.9 and n = 5 for the Ks band, to take into account the poorer seeing and the different noise level. To illustrate the reliability of our detection procedure we display the detections returned by SExtractor for a region of the northern part of the FDF in Fig. 4.
The photometric errors presented in the final catalog are those derived by the SExtractor routine. To make sure that the error calculation was not influenced by correlated noise in the sky background, the SExtractor results were verified with aperture photometry using different apertures in areas not covered by objects, and by estimating the expected photometric errors from the background variations. In general we found good agreement with the SExtractor-derived errors. In particular, the SExtractor errors were found to be quite accurate for point sources and for small objects. Only in the case of large extended objects may non-stochastic background variations have resulted in an underestimate of the photometric errors. The few objects possibly affected are normally bright and have small errors, which should still be correct within the numbers given in the catalog.
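A minimal version of this empty-aperture cross-check might look as follows; the number of apertures, the rejection criterion and the function name are assumptions.

```python
import numpy as np

def empty_aperture_scatter(data, segmap, radius, naper=500, seed=2):
    """Sum the flux in circular apertures placed at object-free positions
    (segmap == 0) and return the scatter of these sums as an empirical
    photometric error for that aperture size, to be compared with the
    errors reported by SExtractor."""
    rng = np.random.default_rng(seed)
    ny, nx = data.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    sums = []
    while len(sums) < naper:
        x0 = rng.uniform(radius, nx - radius)
        y0 = rng.uniform(radius, ny - radius)
        aper = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
        if np.any(segmap[aper] != 0):      # skip apertures touching objects
            continue
        sums.append(data[aper].sum())
    return np.std(sums)
```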
Finally, we calculated the 50% completeness levels in each filter band using our extraction parameters and the formula given in Snigula et al. (2002). This approach estimates the completeness limit by calculating the brightness at which the area of pixels brighter than the applied flux limit falls below the size threshold of the detection algorithm (for a given FWHM of a point source). To allow a comparison with other deep fields, the data were corrected for galactic extinction as described in Sect. 7. The results are summarized in Table 3.
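The geometric idea behind this estimate can be illustrated for a circular Gaussian point spread function: the isophotal area of a point source of total flux F above a per-pixel flux limit f_lim is 2*pi*sigma^2 * ln[F / (2*pi*sigma^2 * f_lim)], and the source drops out of the detection once this area falls below the minimum detection area n. The sketch below implements this simplified version; it is not necessarily identical to the formula of Snigula et al. (2002).

```python
import numpy as np

def completeness_flux_limit(fwhm_pix, f_lim, n_pix):
    """Flux at which the isophotal area of a point source above the per-pixel
    flux limit f_lim (e.g. t times the background RMS) drops to the minimum
    detection area n_pix, assuming a circular Gaussian PSF with the given
    FWHM (in pixels)."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Gaussian sigma
    area = 2.0 * np.pi * sigma ** 2
    # area above f_lim:  a(F) = area * ln(F / (area * f_lim));  set a(F) = n_pix
    return area * f_lim * np.exp(n_pix / area)

# The corresponding magnitude limit follows from the photometric zero point:
# m_lim = zeropoint - 2.5 * np.log10(completeness_flux_limit(fwhm, t * sigma_sky, n))
```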