Up: Deep BV R photometry survey


3 Data reduction

All procedures used for the data reduction are based on the MIDAS package and are routinely used at MPIA. We have developed an image processing pipeline specifically for dithered WFI survey images. It makes intensive use of programmes developed by Meisenheimer, Röser and Hippelein for the Calar Alto Deep Imaging Survey (CADIS). The pipeline performs basic image reduction, including the standard operations of bias subtraction, CCD non-linearity correction, flatfielding, masking of hot pixels and bad columns, and cosmic-ray correction, followed by stacking into a deep co-added frame covering the area common to all frames ( $31\hbox{$.\mkern-4mu^\prime$ }5 \times 30\hbox{$^\prime$ }$). Details of the processing will be given in a forthcoming paper once the data reduction has been completed for all filters and fields (Meisenheimer et al., in preparation).
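The chain of reduction steps can be sketched as follows. This is a minimal illustration in Python/NumPy, not the actual MIDAS-based pipeline; the function names and the simple nan-masking scheme are our own assumptions, and real frames would additionally need the non-linearity and cosmic-ray corrections described above.

```python
import numpy as np

def reduce_frame(raw, bias, flat, bad_mask):
    """Basic reduction of one frame (sketch): bias subtraction,
    flatfielding, and masking of hot pixels / bad columns.

    raw, bias, flat: 2-d arrays; bad_mask: boolean array of defects.
    """
    frame = (raw - bias) / flat      # bias subtraction and flatfielding
    frame[bad_mask] = np.nan         # mask hot pixels and bad columns
    return frame

def coadd(frames):
    """Stack reduced, registered frames into a deep co-added image,
    ignoring masked (NaN) pixels in the average."""
    return np.nanmean(np.stack(frames), axis=0)
```

Because the survey frames are dithered, a defect masked on one frame is usually covered by valid pixels on others, so the co-added frame remains complete over the common area.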

The co-added frame thus obtained is not optimal for photometry, since the flux errors are not sufficiently described by photon noise alone. Flatfield errors and other systematic effects that vary locally on the CCD can only be incorporated into the error analysis by performing the photometry on the individual frames, where the object location varies due to dithering. Combining these individual measurements allows us to derive flux errors from the scatter among the frames.

In fact, we used the deep co-added images only for object search and visual inspection. Objects were searched for only on the R-band sum frame, which provides a uniform, sharp PSF with $0\hbox{$.\!\!^{\prime\prime}$ }75$ FWHM and the best signal-to-noise ratio for almost all kinds of objects expected in the field. We used the SExtractor software (Bertin & Arnouts 1996) with the recommended default setup in the parameter file, except that we required a minimum of 12 significant pixels for the detection of an object. We first search rather deep and then clean the list of detected objects, removing those with more than $0\hbox{$.\!\!^{\rm m}$ }333$ error in the SExtractor best-guess magnitude. As a result we obtained a catalogue of 63501 objects with positions and morphology. Starting from the known object positions on the co-added R-band frame, we transform the positions onto every single frame and measure the object fluxes there.
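The cleaning of the detection list reduces to a simple cut on the catalogued magnitude error. A sketch, with hypothetical column names loosely following SExtractor's output conventions (the actual catalogue columns may differ):

```python
import numpy as np

# Hypothetical excerpt of a SExtractor output catalogue:
# pixel position and best-guess magnitude error per object.
catalogue = np.array(
    [(120.5, 300.2, 0.05),
     (88.1,  45.9,  0.40),   # too noisy: exceeds the 0.333 mag cut
     (512.0, 512.0, 0.30)],
    dtype=[("x", "f8"), ("y", "f8"), ("magerr_best", "f8")],
)

# Keep only objects whose magnitude error stays within the cut.
cleaned = catalogue[catalogue["magerr_best"] <= 0.333]
```

The surviving positions are then transformed onto each individual frame for the per-frame flux measurements.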


\begin{figure}
\psfig{figure=MS10606f2.ps,angle=270,clip=t,width=8.8cm}
\end{figure}
Figure 2: Distribution of aperture correction magnitudes versus deconvolved area of bright objects. Stars are at zero level while extended objects reach down to negative values. The straight line shows the best fit to the data, which has been adopted as a general aperture correction that is also applied to faint objects.

COMBO-17 is a spectrophotometric survey in which color indices are the prime observables, entering the subsequent process of classification and redshift estimation. It is therefore necessary to choose an optimal way of measuring these indices. For ground-based observations taken sequentially, it is important to ensure that variable observing conditions do not introduce offsets between bands. Variable seeing, for example, affects the flux measurement of star-like and extended objects in different ways.

This requires us to assess the seeing point spread function on every frame very carefully. We then convolve each image to a common effective point spread function and measure the central surface brightness of each object in a weighted circular aperture (Röser & Meisenheimer 1991). This has the disadvantage that the spatial resolution (i.e. the minimum separation between neighbouring objects) is limited by the frame with the poorest seeing. In particular, we do not attempt to separate the fluxes of closely blended objects. In the context of this paper we use an effective PSF of  $1\hbox{$.\!\!^{\prime\prime}$ }5$.
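Under the simplifying assumption of Gaussian PSFs, convolving a frame to the common effective seeing amounts to smoothing with a kernel whose width is the quadrature difference of the target and native widths. The sketch below illustrates this and a Gaussian-weighted circular aperture; the actual survey kernels and the weighting scheme of Röser & Meisenheimer (1991) need not be Gaussian, so all widths here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# FWHM -> sigma for a Gaussian profile
FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def convolve_to_common_psf(image, seeing_fwhm_pix, target_fwhm_pix):
    """Smooth a frame so its (assumed Gaussian) PSF matches the
    common effective PSF of the survey (sketch)."""
    s_in = seeing_fwhm_pix * FWHM_TO_SIGMA
    s_out = target_fwhm_pix * FWHM_TO_SIGMA
    kernel_sigma = np.sqrt(s_out**2 - s_in**2)  # quadrature difference
    return gaussian_filter(image, kernel_sigma)

def weighted_aperture_flux(image, x, y, weight_sigma_pix, radius_pix):
    """Central surface brightness in a weighted circular aperture
    (illustrative Gaussian weights, truncated at radius_pix)."""
    yy, xx = np.indices(image.shape)
    r2 = (xx - x) ** 2 + (yy - y) ** 2
    w = np.exp(-r2 / (2.0 * weight_sigma_pix**2))
    w[r2 > radius_pix**2] = 0.0
    return np.sum(image * w) / np.sum(w)
```

Because every band is measured through the same effective PSF and aperture, the resulting color indices are insensitive to the seeing variations between sequential exposures.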

The flux calibration is performed by identifying our spectrophotometric standard stars and convolving their spectra with the total system efficiency in the given filter (see Fig. 1). We then know the physical photon flux that must be assigned to them, and can establish the flux scale for all objects. Since the spectra of the standard stars were measured with a 5 $^{\prime\prime}$ wide slit in good seeing, we are confident that we have collected essentially all their light. The photometry of all stars should therefore be accurate, since the standard stars are measured in the images with the same aperture as all other objects.
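The calibration integral can be written down explicitly: the expected photon flux is the star's spectral flux density weighted by the system efficiency and converted from energy to photon counts via $\lambda/(hc)$. A minimal numerical sketch (function name and unit conventions are our own; the survey code is not reproduced here):

```python
import numpy as np

H = 6.62607015e-27   # Planck constant [erg s]
C = 2.99792458e18    # speed of light [Angstrom / s]

def expected_photon_flux(wl, f_lambda, efficiency):
    """Photon flux [photons s^-1 cm^-2] expected in a filter from a
    standard star: the spectrum f_lambda [erg s^-1 cm^-2 A^-1] weighted
    by the total system efficiency over wavelengths wl [A] (sketch)."""
    photons = f_lambda * efficiency * wl / (H * C)  # photons per Angstrom
    # trapezoidal integration over wavelength
    return np.sum((photons[1:] + photons[:-1]) * np.diff(wl)) / 2.0
```

Dividing this expected photon flux by the measured count rate of the standard star then yields the conversion factor applied to all objects in the field.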

\begin{figure}
\psfig{figure=MS10606f3.ps,angle=270,clip=t,width=10cm}
\end{figure}
Figure 3: Errors versus magnitudes for all objects with errors below $0\hbox{$.\!\!^{\rm m}$ }1$ observed in the WFI filters B, V and R (two epochs).

The fluxes of extended objects, however, are underestimated, and we therefore performed a set of photometric runs with apertures increasing in steps up to  $10\hbox{$^{\prime\prime}$ }$ and no weighting functions. At  $10\hbox{$^{\prime\prime}$ }$ diameter essentially all fluxes have converged, allowing us to measure total magnitudes for virtually all objects except a few very large and bright galaxies. This total magnitude cannot be measured for the fainter objects, since the background noise in the large aperture would be extremely high. We therefore measured, for the bright objects, the difference between the total magnitude and the magnitude in our small weighted circular aperture, and derived an aperture correction function depending on morphological parameters (see Fig. 2). This function provides an estimated aperture correction and is applied uniformly to all objects (see Fig. 5).
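The straight-line fit of Fig. 2 can be sketched in a few lines: fit the correction (total minus weighted-aperture magnitude) of bright objects against their deconvolved area, and apply the resulting linear function to all objects. A sketch only; the actual morphological parameters and fit of the survey may differ.

```python
import numpy as np

def fit_aperture_correction(deconv_area, delta_mag):
    """Fit a straight line to the aperture correction (total minus
    weighted-aperture magnitude) of bright objects versus their
    deconvolved area; returns a function usable for faint objects too."""
    slope, intercept = np.polyfit(deconv_area, delta_mag, 1)
    return lambda area: slope * np.asarray(area) + intercept
```

Stars (zero deconvolved area) receive essentially no correction, while extended objects are shifted towards brighter total magnitudes, as in Fig. 2.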

The fluxes from the individual frames are averaged into a final flux for each object, with the error derived from the scatter. In this way, the error takes into account not only photon noise but also further sources of error, such as imperfect flatfielding and uncorrected CCD artifacts. However, we prevent chance coincidences of count rates from mimicking unreasonably low errors by using the errors derived from background and photon noise as a lower limit (see Meisenheimer et al., in preparation, for a full discussion of the photometric analysis).
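The combination step above can be sketched as follows, assuming a simple unweighted mean of the per-frame fluxes (the full weighting scheme is deferred to Meisenheimer et al., in preparation):

```python
import numpy as np

def combine_fluxes(fluxes, photon_noise_err):
    """Average the per-frame fluxes of one object (sketch). The error is
    the standard error of the scatter among frames, floored by the
    photon/background noise estimate so that a chance agreement of
    count rates cannot mimic an unreasonably small error."""
    fluxes = np.asarray(fluxes, float)
    mean = fluxes.mean()
    scatter = fluxes.std(ddof=1) / np.sqrt(len(fluxes))
    return mean, max(scatter, photon_noise_err)
```

When the frame-to-frame scatter exceeds the photon-noise estimate, the scatter dominates the quoted error, automatically absorbing flatfield residuals and CCD artifacts.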



Copyright ESO 2001