4 Data reduction

Since our FDF observations were intended to reach magnitude limits well below those of earlier ground-based studies, dedicated data reduction procedures had to be developed. At the same time, the first spectroscopic follow-up observations of FDF galaxies were to start only a few months after the last photometric observations of the FDF. In order to have candidate galaxies available at that time, a preliminary reduction of the photometric data taken in visitor mode was made and an I-band selected catalog with photometric redshifts was created. The content of this preliminary catalog has been described by Heidt et al. (2000), the photometric redshifts for this catalog by Bender et al. (2001).

In a second step, all data including the photometric data taken in service mode were reduced as described below. This data set forms the basis for the final photometric catalog described in the present paper.

   
4.1 Optical data

Because of temporal variations of the CCD characteristics and of the telescope mirror (dust accumulation), each individual run was reduced separately. However, in order to obtain a data set as homogeneous as possible, the data reduction strategy was identical for all 5 runs.

First, the images were corrected for the bias. Since the observations were done in 4-port readout mode, each port had to be treated separately. A master bias was formed for each port from the scaled median of typically 20 bias frames taken during each run and subtracted from the images, scaling the bias level with the overscan.
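The following sketch illustrates this per-port scheme (a minimal illustration, not the pipeline actually used; the four-quadrant readout geometry and the per-port overscan levels passed in as inputs are assumptions):

\begin{verbatim}
# Minimal sketch of the per-port bias correction described above.
# Assumes a four-quadrant readout geometry; the overscan levels per
# port are taken as given (hypothetical inputs).
import numpy as np

def quadrant(frame, port):
    """View of one of the four readout-port quadrants."""
    ny, nx = frame.shape
    y0 = 0 if port < 2 else ny // 2
    x0 = 0 if port % 2 == 0 else nx // 2
    return frame[y0:y0 + ny // 2, x0:x0 + nx // 2]

def make_master_bias(bias_frames, overscan_levels):
    """Median of ~20 bias frames per port, each scaled to its
    overscan level (so the master bias is normalized)."""
    master = np.zeros_like(bias_frames[0], dtype=float)
    for port in range(4):
        stack = [quadrant(f, port) / lev[port]
                 for f, lev in zip(bias_frames, overscan_levels)]
        quadrant(master, port)[:] = np.median(stack, axis=0)
    return master

def subtract_bias(image, master, image_overscan):
    """Rescale the normalized master bias to the image's own
    overscan level, port by port, and subtract."""
    out = image.astype(float)
    for port in range(4):
        quadrant(out, port)[:] -= quadrant(master, port) * image_overscan[port]
    return out
\end{verbatim}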

Next, the images were corrected for pixel-to-pixel variations and large-scale sensitivity gradients. Since the twilight flatfields alone did not properly correct the large-scale gradients, a combination of the twilight flatfields and the science frames themselves was used. The twilight flatfields taken in the morning and evening generally differed considerably, and the twilight flatfields always left large-scale gradients on the reduced science frames (probably as a result of stray-light effects in the telescope and the strong gradient of the sky background at the beginning and the end of the night). Therefore, for each science frame the sequence of flatfields was determined which minimized the large-scale gradient. These sequences were normalized, median filtered and used for a first-order correction of the pixel-to-pixel variations. Typically 2-3 flatfields per filter per run had to be created this way, leaving residuals of the order of 2-8% (peak-to-peak), depending on the filter. To remove the residuals, the twilight-flatfielded science frames were grouped according to similar two-dimensional large-scale residuals, normalized and stacked using a 1.8 $\sigma$ clipped median. Afterwards, a correction frame was formed by fitting a two-dimensional 2nd-order polynomial to each median frame. This was done on a rectangular grid of $50 \times 50$ points, where the level of each grid point was taken as the median of a box with a width of 40 pixels. In this way it was guaranteed that no residuals from stars affected the fit and a noise-free correction frame was obtained. Finally, each science frame was corrected for the pixel-to-pixel variations by a combination of the corresponding twilight flatfield and the noise-free correction frame. The peak-to-peak residuals on the finally reduced science frames were typically 0.2% or less.
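A minimal sketch of this grid-median plus polynomial-surface fit is given below (an illustration only, not the actual pipeline; the handling of the frame borders is simplified):

\begin{verbatim}
# Sketch of the noise-free correction frame: medians of 40-pixel boxes
# on a 50x50 grid, then a least-squares 2-D 2nd-order polynomial fit.
import numpy as np

def correction_frame(residual, ngrid=50, box=40):
    ny, nx = residual.shape
    ys = np.linspace(box // 2, ny - box // 2 - 1, ngrid).astype(int)
    xs = np.linspace(box // 2, nx - box // 2 - 1, ngrid).astype(int)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    # box medians suppress residuals from stars at each grid point
    z = np.array([np.median(residual[y - box // 2:y + box // 2,
                                     x - box // 2:x + box // 2])
                  for y, x in zip(gy.ravel(), gx.ravel())])
    X, Y = gx.ravel().astype(float), gy.ravel().astype(float)
    A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    # evaluate the smooth, noise-free surface on the full pixel grid
    YY, XX = np.mgrid[0:ny, 0:nx].astype(float)
    B = np.column_stack([np.ones(XX.size), XX.ravel(), YY.ravel(),
                         (XX * YY).ravel(), (XX**2).ravel(), (YY**2).ravel()])
    return (B @ coeff).reshape(ny, nx)
\end{verbatim}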

Cosmic ray events were detected by fitting a two-dimensional Gaussian to each local maximum in the frame. All signals with a FWHM smaller than 1.5 pixels and an amplitude more than 8 times the background noise were flagged as cosmic ray hits, and the affected pixels were replaced by the mean value of the surrounding pixels. This provides a very reliable identification and cleaning of cosmic ray events (for details see Gössl & Riffeser 2002).
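A simplified version of this test might look as follows (a sketch in the spirit of the criterion above, not the algorithm of Gössl & Riffeser 2002; the stamp size and the replacement of only the peak pixel are simplifications):

\begin{verbatim}
# Simplified cosmic-ray test: fit a 2-D Gaussian to each local maximum
# and flag sharp (FWHM < 1.5 px), significant (amplitude > 8 sigma)
# peaks. Assumes a background-subtracted image.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset
    return g.ravel()

def clean_cosmics(img, sigma_bg, half=3):
    out = img.astype(float)
    peaks = (img == maximum_filter(img, size=3)) & (img > 8 * sigma_bg)
    for y, x in zip(*np.nonzero(peaks)):
        stamp = out[y - half:y + half + 1, x - half:x + half + 1]
        if stamp.shape != (2 * half + 1, 2 * half + 1):
            continue                      # too close to the frame border
        yy, xx = np.mgrid[0:2 * half + 1, 0:2 * half + 1]
        try:
            p, _ = curve_fit(gauss2d, (xx, yy), stamp.ravel(),
                             p0=(stamp.max(), half, half, 1.0, np.median(stamp)))
        except RuntimeError:
            continue                      # fit did not converge
        fwhm = 2.3548 * abs(p[3])         # FWHM = 2 sqrt(2 ln 2) * sigma
        if fwhm < 1.5 and p[0] > 8 * sigma_bg:
            ring = stamp.copy()
            ring[half, half] = np.nan     # here only the peak pixel is replaced
            out[y, x] = np.nanmean(ring)
    return out
\end{verbatim}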

In order to eliminate bad pixels and other affected regions in the image combination procedure, a bad pixel mask was created for every image. The positions of bad pixels on the CCD were determined for each filter and each run using normalized flatfields. All pixels whose flatfield correction exceeded 20% were flagged. Afterwards, each science frame was inspected for other disturbed regions (satellite trails, border effects) and their positions were included in the corresponding bad pixel masks.
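The flatfield-based part of the mask reduces to a one-line criterion (a sketch; the 20% threshold is the one quoted above):

\begin{verbatim}
import numpy as np

def bad_pixel_mask(normalized_flat, threshold=0.20):
    # flag pixels whose implied flatfield correction exceeds 20%
    return np.abs(normalized_flat - 1.0) > threshold
\end{verbatim}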

The alignment of the images and the correction for the field distortion were done simultaneously. This minimized both smoothing and the loss of signal-to-noise. As reference frame, an I filter image of the FDF taken under the best seeing conditions in October 1999 was used. Depending on the filter, the positions of 15-25 reference stars were measured via a PSF fit on each frame. A linear coordinate transformation was then calculated to project the images onto the reference image. The transformation included a rotation, a translation and a global scale variation. Finally, the correction for the field distortion was applied. Following the ESO FORS Manual, Version 2.4, we derive the FORS1 distortion-corrected coordinates (x',y') in pixel units as a function of the distorted coordinates (x,y):

 
\begin{displaymath}
x' = x - f(r)\,(x-x_0),
\end{displaymath} (1)

\begin{displaymath}
y' = y - f(r)\,(y-y_0),
\end{displaymath} (2)

where $(x_0,y_0)$ are the coordinates of the reference pixel, $r=\sqrt{(x-x_0)^2+(y-y_0)^2}$ and

\begin{displaymath}
f(r)=3.602\times10^{-4}-1.228\times10^{-4}~r+2.091\times10^{-9}~r^2.
\end{displaymath} (3)

The flux interpolation for non-integer coordinate shifts was calculated from a 16-parameter, 3rd-order polynomial interpolation using 16 pixel base points (for details see Riffeser et al. 2001). The same shifting procedure was applied to the corresponding bad pixel masks, flagging as "bad" every pixel affected by bad pixels in the interpolation.
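As an illustration, the correction of Eqs. (1)-(3) could be applied as follows (a sketch: a bicubic spline via scipy.ndimage.map_coordinates stands in for the 16-point third-order polynomial interpolation of Riffeser et al. 2001, and the forward relation is inverted only to first order in the small quantity f(r)):

\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def f_of_r(r):
    # Eq. (3), coefficients as given above
    return 3.602e-4 - 1.228e-4 * r + 2.091e-9 * r**2

def undistort(image, x0, y0):
    ny, nx = image.shape
    yp, xp = np.mgrid[0:ny, 0:nx].astype(float)   # corrected grid (x', y')
    r = np.hypot(xp - x0, yp - y0)
    # first-order inversion of Eqs. (1), (2): x ~ x' + f(r')(x' - x0)
    x_src = xp + f_of_r(r) * (xp - x0)
    y_src = yp + f_of_r(r) * (yp - y0)
    return map_coordinates(image, [y_src, x_src], order=3, mode="nearest")
\end{verbatim}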

The images were then co-added according to the following procedure: First, the sky value of each frame was derived via its mode and subtracted. Then the seeing on each frame was measured using 10 stars, and the flux of a non-saturated reference star was determined. Next we assigned a weight to each image relative to the first image in each filter according to:

\begin{displaymath}
weight(n) = \frac{f(n)}{f(1)} \times \frac{h(1)~{\it FWHM}(1)^2}{h(n)~{\it FWHM}(n)^2}
\end{displaymath} (4)

where n is the frame to be weighted relative to the first frame (1), f the flux of the reference star, h the sky level and FWHM the seeing on the frame. Weights computed according to Eq. (4) maximize the signal-to-noise ratio of the combined image for faint ( $f\ll h\times{\it FWHM}^2$) point sources, which constitute the overwhelming majority of the objects studied here. Finally, the weighted sum was calculated and normalized to a 1 s exposure time. Pixels flagged as bad on the individual images were not included in the co-adding procedure. Since a different number of dithered frames contributed to each pixel in the co-added images, producing a position-dependent noise pattern, a combined weight map for each co-added frame was constructed. This weight map was included in the source detection and photometry procedure using SExtractor (see Sect. 6).
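In code, the weighting and co-addition might be sketched as follows (illustrative only; flux, sky and fwhm are per-frame arrays and good_masks are the inverted bad pixel masks):

\begin{verbatim}
# Direct transcription of Eq. (4) plus a masked, weighted co-addition
# normalized to 1 s exposure time. A sketch, not the actual pipeline.
import numpy as np

def coadd(images, good_masks, flux, sky, fwhm, exptime):
    # weight(n) = [f(n)/f(1)] * [h(1) FWHM(1)^2] / [h(n) FWHM(n)^2]
    w = (flux / flux[0]) * (sky[0] * fwhm[0]**2) / (sky * fwhm**2)
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, good, wn, t in zip(images, good_masks, w, exptime):
        num += wn * good * (img / t)   # count rates; bad pixels excluded
        den += wn * good
    combined = np.where(den > 0, num / den, 0.0)
    return combined, den               # den doubles as the weight map
\end{verbatim}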

The photometric calibration of our co-added frames was done via "reference" standard stars in the FDF. We first determined the zero points for two photometric nights (Oct. 10/11 and 11/12, 1999) during which the FDF was imaged in all 5 optical filters. The colour correction and extinction coefficients given on the ESO web page were used to derive the zero points for our FORS filter set in the Vega system. As no calibration images were available in the g band, the transformation from V to g was performed following Jørgensen (1994). We then convolved all the FDF images from the two photometric nights to the same seeing as the co-added frames and determined the magnitudes of 2 (U band) to 10 (I band) stars. Based on a curve of growth for these stars, a fixed aperture with a diameter of $8''$ was used. Using these reference stars, we finally determined the zero points of the co-added frames. The difference between the magnitudes of the reference stars on the individual frames of the two photometric nights and on the co-added frames is 0.01 mag or less. We verified our zero points by repeating the procedure described above using observations from two photometric nights during our November 1999 run.
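The zero-point transfer itself reduces to a few lines (a sketch; aperture_flux is a hypothetical helper returning the background-subtracted count rate in a circular aperture):

\begin{verbatim}
import numpy as np

def zero_point(coadd_image, star_xy, calibrated_mags, aperture_flux, radius_px):
    """ZP such that m = ZP - 2.5 log10(count rate) reproduces the
    calibrated magnitudes of the reference stars."""
    zps = [m + 2.5 * np.log10(aperture_flux(coadd_image, x, y, radius_px))
           for (x, y), m in zip(star_xy, calibrated_mags)]
    return np.mean(zps), np.std(zps)
\end{verbatim}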

4.2 NIR data

Approximately 10-20% of the observed NIR frames were found to contain an electronic pattern caused by the fast motion of the telescope near the zenith. These frames were excluded from the analysis. The remaining data were reduced using standard image processing algorithms implemented within IRAF. After dark subtraction, a sky frame was constructed for each frame, typically from the 10 subsequent frames, which were scaled to have the same median counts and then median-combined with clipping (to suppress fainter sources and otherwise deviant pixels). The sky frame was scaled to the median counts of each image before subtraction to account for variations of the sky brightness on short time-scales. The sky-subtracted images were cleaned of bad-pixel defects and flat-fielded using dome flats to remove detector pixel-to-pixel variations. The frames were then registered to high accuracy, using the brightest $\sim $10 objects and following the same procedure as described in the previous section, and finally co-added, after being scaled to airmass zero and an exposure time of 1 s.
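The running sky construction might be sketched as follows (illustrative; a percentile cut stands in for the clipping actually used, and the last frames fall back to the preceding ones):

\begin{verbatim}
# Sketch of the running NIR sky subtraction: for each frame the ~10
# subsequent frames, scaled to a common median, are combined with a
# clipped median and subtracted.
import numpy as np

def sky_subtract(frames, n_sky=10):
    out = []
    meds = np.array([np.median(f) for f in frames])
    for i, frame in enumerate(frames):
        js = list(range(i + 1, min(i + 1 + n_sky, len(frames)))) \
             or list(range(max(0, i - n_sky), i))  # last frames: use preceding
        # scale the sky frames to this image's median before combining
        stack = np.array([frames[j] * (meds[i] / meds[j]) for j in js])
        lo, hi = np.percentile(stack, [15, 85], axis=0)
        sky = np.nanmedian(np.where((stack >= lo) & (stack <= hi),
                                    stack, np.nan), axis=0)
        out.append(frame - sky)
    return out
\end{verbatim}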

The additionally observed photometric standard stars were used to measure the photometric zero points. The typical formal uncertainties in the zero points were 0.02 mag in J and 0.01 mag in Ks.

