A&A 373, 38-55 (2001)
DOI: 10.1051/0004-6361:20010590

A QSO survey via optical variability and zero proper motion
in the M 92 field

I. QSO candidates and selection effects

J. Brunzendorf - H. Meusinger

Thüringer Landessternwarte Tautenburg, 07778 Tautenburg, Germany

Received 8 February 2001 / Accepted 11 April 2001

Abstract
The combination of variability and proper motion constraints (VPM search) is believed to provide an unconventional yet efficient search strategy for QSOs with selection effects quite different from conventional optical QSO surveys. Previous studies in a field of high galactic latitude have shown that the VPM method is indeed efficient. In the present paper, we describe a further variability-proper motion (VPM) QSO survey on Tautenburg Schmidt plates. The survey is based on an exceptionally large number of 162 B plates centred on M 92 with a time baseline of more than three decades. Further U and V plates are used to measure time-averaged colour indices, and morphology is evaluated on a deep R plate. Proper motions with Hipparcos-like accuracies as well as variability indices are derived for about 35000 star-like objects down to B=20.5. With regard to both the number of plates and the number of objects to be investigated, this is the largest VPM survey performed so far. The present paper is focused on the description of the observational material, the data reduction, the definition of the selection parameters, and the construction of the QSO candidate sample. Furthermore, the selection effects of the VPM method are discussed a priori. For the present survey, the selection effects are shown to be dominated by the magnitude dependence of the photometric accuracy. Down to the limiting magnitude of the survey $B_{\rm lim}=19.7$, we identify 62 high-priority QSO candidates and a further 57 candidates of medium priority. Spectroscopic follow-up observations have been performed for all these candidates as well as for additional selected candidates of lower priority; the confirmed QSOs will be presented and discussed in a forthcoming paper.

Key words: galaxies: active - galaxies: Seyfert - galaxies: statistics - quasars: general - globular clusters: individual: M 92


1 Introduction

"Quasars cannot be studied until they are found'' (Weedman 1984). Optical surveys for QSOs[*] can yield very high completeness rates over a large redshift range - yet they are hampered by the small fraction of QSOs among all objects visible in this wavelength range, with most of the latter are foreground stars and galaxies. A straightforward identification of all QSOs in a given survey field, which requires spectroscopic observations of all objects up to an adequate limiting magnitude with sufficient spectral resolution and signal-to-noise ratio, would be a voluminous task with low efficiency. Hence, optical QSO surveys are conducted in two steps: 1) selection of QSO candidates from all objects in the field, based on criteria that are supposed to discriminate QSOs from non-QSOs, and 2) spectroscopic follow-up observations of all selected candidates. The properties of the resulting QSO samples are constrained by the selection criteria of the survey.

Most selection criteria are based on the different spectral energy distribution (SED) of QSOs compared to stars and galaxies. The following properties have been proven to be particularly suited for the identification of QSO candidates: peculiar optical colours (e.g., intrinsic UV-excess, blue continuum), Lyman break (for QSOs with redshifts z>3), and prominent emission lines. Surveys based on these criteria are known to be biased in several ways (for an overview, see Wampler & Ponz 1985; Véron 1993; Hewett & Foltz 1994). Here we only note that their completeness depends (among others) on the QSO redshifts, colour indices, and emission line equivalent widths. It is widely believed that these conventional QSO surveys can reach a very high degree of completeness. However, such a claim can only be verified by means of alternative QSO surveys which are not based on the same or similar selection criteria. In fact, it is still a matter of debate whether conventional QSO surveys systematically overlook hitherto unknown and possibly substantial QSO populations (e.g., Webster et al. 1995; Drinkwater et al. 1997; Kim & Elvis 1999).

Due to their cosmological distances, QSOs have proper motions that are undetectable with existing observational techniques. Therefore, the search for zero proper motion objects is expected to provide a bias-free QSO candidate sample (Sandage & Luyten 1967; Kron & Chiu 1981). However, a QSO search which is essentially based on the zero proper motion constraint alone is not very efficient, since the resulting sample will be dominated by faint galaxies and by galactic foreground objects having insignificantly small proper motions by chance. Optical variability is a further general property of quasars (Ulrich et al. 1997; Netzer 1999), and the identification of the variable objects in a given field is a further, independent QSO search method (van den Bergh et al. 1973; Heckman 1976; Usher & Mitchell 1978; Hawkins 1983; Trevese et al. 1989; Véron & Hawkins 1993; Hook et al. 1994; Trevese et al. 1994; Meusinger et al. 1994; Véron & Hawkins 1995; Cristiani et al. 1996; Bershady et al. 1998). The combination of these two constraints, i.e. the search for variable objects with zero proper motion (VPM search = Variability and Proper Motion search), should therefore provide an alternative QSO search strategy which does not explicitly rely on the SEDs of QSOs. It has been argued that "a search for objects which are both variable and stationary is a powerful technique for efficiently finding QSOs with no selection bias with regard to colour, redshift, spectral index, or emission line equivalent widths'' (Majewski et al. 1991; Véron 1993).

Apart from the experimental and comparably small survey by Majewski et al. (1991), the only VPM QSO survey so far is being performed by Meusinger, Scholz and Irwin on 85 Tautenburg Schmidt plates of a field near the North Galactic Pole (Meusinger et al. 1995; Scholz et al. 1997). According to a priori estimates, a high survey completeness of about 90%, in combination with a success rate of about 40%, is expected, which is confirmed by the preliminary results from spectroscopic follow-up observations (Meusinger et al. 1999).

Here, we present a new VPM QSO survey, which investigates a 10 square-degree field centred on the globular cluster M 92. This is a more ambitious project since a quasar search in this field faces the problem of a stronger contamination by galactic foreground stars than a search at high galactic latitudes, even though the direction of the M 92 field is well off the galactic plane ($b=35^\circ$). On the other hand, the field is one of the "Tautenburg Standard fields'', characterized by a very large number of available plates. Further, this area has never been surveyed for QSOs before. Our search is based on 208 selected, deep photographic Schmidt plates covering epoch differences of up to 34 years. With regard to this large quantity of observational data, the present project is the largest QSO survey based on variability and/or proper motion criteria performed so far. The main aims of this project are to improve the statistics of VPM-selected QSOs and to enlarge the number of known QSOs with well-sampled light-curves measured over a time baseline of several decades. The combined sample from both VPM fields is expected to contain more than one hundred QSOs with $B\le19.5$ and will be well-suited both for the comparison with QSO samples from more traditional methods and for statistical studies of quasar variability on timescales of days to decades. In addition, the present study is aimed at a detailed discussion of the selection effects of the VPM search.

The present paper is concerned with the description of the observational material (Sect. 2), the photometric and astrometric data reduction (Sect. 3), the definition of suitable indices for proper motion and variability, and the selection of the QSO candidates based on these indices (Sect. 4). The selection effects will be discussed in Sect. 5, and conclusions are summarized in Sect. 6. The identification of the QSOs among the candidates of high and medium priority by means of spectroscopic follow-up observations has been completed. The resulting QSO sample will be presented in a forthcoming paper along with the detailed discussion of the statistical properties of the VPM QSOs and the comparison with conventional optical QSO samples.

2 Observations

An efficient search for variable objects with zero proper motion requires a large number of homogeneous (e.g., the same colour system) observations of a large number of faint objects with high astrometric and photometric accuracy, spanning a time baseline of decades. These requirements can be met if a substantial number of deep archival plates from a large wide-field imaging telescope is available. The archive of the Tautenburg Schmidt telescope (134 cm free aperture, 4 m focal length, $3.3^\circ\times3.3^\circ$ unvignetted field of view) contains more than 9000 plates taken between 1960 and 1997. For several "standard fields'', more than one hundred archival plates are available. With epoch differences of three decades and more, this observational material is particularly well suited for a VPM QSO search, since all plates were taken with the same telescope, through the same filters, and onto very similar emulsions. Moreover, thanks to its large focal length compared to other large Schmidt telescopes, the Tautenburg Schmidt has fewer problems with distortions due to plate bending and has a better scale for astrometric work (e.g., Schilbach et al. 1995; Scholz et al. 1993, 1994, 1996, 1997; Meusinger et al. 1996).

For the present VPM survey, the field centred on the globular cluster M 92 was chosen (Table 1). In preparation for this project, 56 plates were taken in the years 1992 to 1997. Combined with the archival plates, a total of 332 plates of the M 92 field is available. We selected 208 sufficiently deep plates in the U, B, V, or R band, among them 162 B plates (Table 1). Only the B plates are used to measure variability and proper motions; the measurements in the other bands merely provide additional colour information. The plates were taken in the years 1963 to 1997. Compared with our first VPM survey in the M 3 field, the present survey comprises about three times more B plates with a better time coverage and a slightly longer time baseline.

Figure 1: Individual limiting magnitudes of all 208 selected Schmidt plates of the M 92 field versus epoch. Different symbols represent different colour bands in the Johnson system (open circle: U, filled circle: B, plus sign: V, cross: R).


 

 
Table 1: Data on the M 92 survey field and the selected Schmidt plates.

plate centre:       $\alpha_{2000} = 17^{\rm h}17^{\rm m}07^{\rm s}$, $\delta_{2000} = +43^\circ 8\farcm2$ ($l=68^\circ$, $b=+35^\circ$)
field size:         $3.3^\circ\times3.3^\circ$ minus $0.4^\circ\times1.3^\circ$ due to calibration wedge
plate scale:        $51\farcs4$ mm$^{-1}$
number of plates:   162 B (epochs 1963-1997), 18 U (epochs 1966-1997), 18 V (epochs 1966-1989), 10 R (epochs 1966-1968)



Figure 2: Histogram of the epoch differences $\Delta\,t$ for all combinations between each two out of the 162 selected B plates.

The limiting magnitudes and epochs of the observations are summarized in Fig. 1. Figure 2 shows the histogram of the epoch differences between all combinations of two individual B plate epochs. The range of epoch differences larger than one day is covered almost completely and quite homogeneously. The large maximum epoch difference of 34 years, in combination with the large number of plates, allows the reliable detection and subsequent thorough investigation of variable objects with variability timescales of days to decades.

3 Data reduction

All 208 selected plates have been completely digitised by means of the Tautenburg Plate Scanner TPS. A detailed description of the TPS is given by Brunzendorf (2000); an overview is given by Brunzendorf & Meusinger (1999). The resulting digital images have a linear resolution of $10\,\mu{\rm m}\times10\,\mu{\rm m}$ ($0\farcs5\times0\farcs5$) per pixel and a depth of 4096 grey levels (12 bit) per pixel. The digitised images are stored on CD-ROMs. Subsequent data reduction is done off-line.

  
3.1 Object search and determination of image parameters

The object identification on the digitised plates as well as the subsequent determination of the relevant image parameters, like (x,y) position, internal (i.e., uncalibrated) magnitude, and size, are based on the Münster Redshift Project (MRSP) software package (Horstmann et al. 1989). This software requires intensities as input data. Therefore, the measured photographic densities have to be transformed into relative intensities of the incident light. In principle, this transformation should be done via the individual characteristic curve of each plate. The characteristic curve can be measured only if a calibration wedge has been exposed onto the plate, which is not the case for all plates. For the object search, however, it is sufficient to apply a mean characteristic curve, estimated from least-square fits of a suitable relation (Lehmann & Häupl 1987; Brunzendorf & Meusinger 1999) to the measured characteristic curves of 94 Tautenburg Schmidt plates. In this way, the measured densities are transformed into approximate relative intensities, which then serve only as input data for the object search and the determination of the image parameters. In its original version, the MRSP software adopts a linear transformation. The use of a non-linear transformation by means of an average characteristic curve ensures that the intensity profiles of star-like images are well approximated by a Gaussian fit. Measurements on a large number of Tautenburg Schmidt plates have shown that the MRSP Gaussian fitting procedure works well for stars over a wide magnitude range of more than 13 mag (Brunzendorf & Meusinger 1999). The transformation of all plates into a common photometric standard system is done later by means of a sequence of standard stars (Sect. 3.4).

An object is detected if its relative intensity exceeds certain threshold levels above the background. The threshold levels are measured in units of the background noise, $\sigma_{\rm BG}$, which is dominated by the grain noise of the plate. The peak intensity $I_{\rm max}$ and the total intensity $I_{\rm total}$ of an object have to meet the constraints $I_{\rm max}\ge1.6\,\sigma_{\rm BG}$ and $I_{\rm total}\ge40\,\sigma_{\rm BG}$. On the deepest B plates, these conditions are satisfied by stars with magnitudes $B\le21.7$.

  
3.2 Definition of the basic sample of objects to be investigated

After the determination of the object positions and magnitudes on all selected plates, one has to decide which objects are to be investigated further. In order to avoid strong contamination of the object list by spurious detections (grain noise, plate faults etc.) it is not efficient to consider all detected objects. A common approach is to declare the deepest plate as a "master plate'', and to identify all objects measured on this plate with the basic object sample. The disadvantage of such a procedure is, however, that all objects not detected on the master plate would be excluded.

In the present project, the basic object sample is defined in the following way: an object is included if it is detected both on at least two out of five selected deep first epoch plates (epoch $1967.1\pm0.5$) and on at least two out of five selected deep second epoch plates (epoch $1994.5\pm0.2$). It is easy to show by means of statistical considerations that the completeness limit (here defined as the faintest magnitude at which 99% of all objects of this brightness are still detected) of this sample is deeper than for any single plate. In addition, this object sample contains virtually no spurious detections. On the other hand, the limiting magnitude is lower than for the deepest single plates. The reason is that, on single plates, the faintest stars are detected with a low probability; the faintest objects are therefore suppressed in the final sample, whereas some of them may still be measured on a single plate. For the present study, however, a deep completeness limit is by far more important than a deep limiting magnitude produced by only a few faint objects detected by chance.
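The gain in completeness can be checked with elementary binomial statistics; the following illustrative snippet (our own, not part of the original reduction) computes the probability that an object with single-plate detection probability p enters the basic sample:

\begin{verbatim}
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k detections on n independent plates,
    given a single-plate detection probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def p_basic_sample(p):
    """Detected on >= 2 of 5 first-epoch AND >= 2 of 5 second-epoch plates."""
    return p_at_least(2, 5, p) ** 2

print(p_basic_sample(0.9))   # ~0.999
\end{verbatim}

For a single-plate detection probability of 0.9, an object enters the basic sample with probability of about 0.999; the combined criterion thus pushes the 99% completeness limit fainter than that of any individual plate.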

The final sample contains about 35000 objects in the magnitude range $8\le B \le 20.5$ with a completeness limit $B_{\rm compl} = 19.8$. The frequency distribution of the B magnitudes for the objects in the final sample is shown in Fig. 3.

Figure 3: Frequency distribution (number of objects per 0.05 mag interval) of the mean B magnitudes for the objects in the final basic sample.

  
3.3 Morphological classification

Each object is to be classified as a star-like object, a galaxy, or a merged object. Only star-like objects are to be considered as QSO candidates, whereas the galaxies in the field define the extragalactic astrometric reference frame, i.e. the zero point of the absolute proper motions. Objects classified as "merged'' are rejected from further consideration as QSO candidates since they do not allow accurate measurements of both variability and proper motion. The overwhelming majority of them will consist of two stars whose images are projected by chance onto nearly the same position, with a projected separation of typically less than 8''. Of course, the star-like images in merged objects can also include QSO images. Such QSOs, as well as those completely hidden by images of other objects in the field, will not be detected. This effect has to be taken into account when the surface density of the resulting QSO sample is discussed (Paper II). Moreover, the subsample of merged objects can also include QSO pairs and gravitationally lensed QSOs. In spite of their great importance, the chance to detect such pairs in the present survey is exceptionally low if their positional separation is less than about 8''. However, it can be concluded from the statistics of QSO pairs in the 2dF survey (Shanks et al. 2001) that the probability of having one or more pairs within the magnitude range and field of the present study is negligible.

We also stress the importance of an accurate discrimination between galaxies and merged objects, since any stellar contamination of the galaxy sample leads to a systematic non-zero absolute proper motion of the astrometric reference frame.

The morphological classification is performed on the deepest R plate (plate 2787), which contains about 60% more objects than the deepest B plates and which allows, in particular, a better identification of faint galaxies. The classification is done in two steps: 1) manual identification of the galaxies by visual inspection, and 2) automatic identification of all objects which have a nonstellar image profile, i.e. galaxies and merged objects. The visual inspection yields 1366 galaxies. Image profile parameters are determined for the overwhelming majority of these galaxies by the automatic classification. Hence, these galaxies can be used to check the results from the automatic classification and to define the morphological selection criteria. Moreover, 534 of these galaxies are identified with objects in the final basic object sample (Sect. 3.2) and are used to define the astrometric reference system (Sect. 3.5). All remaining galaxies are, on the B plates, either too faint and/or too extended and/or too fuzzy to serve as astrometric reference points.


   
Table 2: Photometric standard sequences in the Johnson UBV bands around M 92, which served as reference system for the photometric calibration in the present work. The references in the first column are: (1) Sandage & Walker (1966), (2) Sandage (1969), (3) Sandage (1970), (4) Christian et al. (1985), (5) Stetson & Harris (1988).

ref.   type           magnitude range (B)   number of stars: U / B / V
(1)    photoelectric  $11.3\ldots17.2$      44 / 45 / 45
(1)    photographic   $14.5\ldots16.1$      -- / 14 / 15
(2)    photoelectric  $15.3\ldots16.0$      -- / 11 / 11
(3)    photoelectric  $16.9\ldots22.4$      30 / 40 / 40
(3)    photographic   $18.1\ldots19.5$      -- /  3 /  3
(4)    photoelectric  $14.8\ldots18.2$      -- /  9 /  9
(5)    CCD            $14.8\ldots22.4$      -- / 286 / 286
total                 $11.3\ldots22.4$      74 / 408 / 409

For the automatic classification, two different methods are applied. The first one is based on the separation between resolved and unresolved objects on the radius-magnitude diagram. We define a nonstellar index

\begin{displaymath}I_{\rm nonstellar} = \frac{r-r_{\rm m}(R)}{\sigma_r(R)},
\end{displaymath} (1)

where r is the effective radius provided by the MRSP software and $r_{\rm m}(R)$ is the median of the r distribution at the given R magnitude. The standard deviation $\sigma_r(R)$ of the scatter of the individual r(R) around $r_{\rm m}(R)$ is derived exclusively from objects with $r(R)\le r_{\rm m}(R)$ in order to exclude galaxies and merged objects. Objects with $I_{\rm nonstellar}>6$ are considered nonstellar. The second automatic classification method is based on the profile residual parameter $\Psi$ as described by Maddox et al. (1990). Using this parameter, all objects with $\Psi>2000$ are considered to be nonstellar.
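A sketch of how Eq. (1) might be evaluated in practice (the magnitude binning and its width are our own assumptions):

\begin{verbatim}
import numpy as np

def nonstellar_index(r, R, bin_width=0.5):
    """Evaluate Eq. (1): I_nonstellar = (r - r_m(R)) / sigma_r(R).
    r_m(R) and sigma_r(R) are estimated per R-magnitude bin; the scatter
    is derived only from objects with r <= r_m, mirrored around the
    median, to keep galaxies and merged objects out of the estimate."""
    r, R = np.asarray(r, float), np.asarray(R, float)
    bins = np.floor(R / bin_width).astype(int)
    index = np.empty_like(r)
    for b in np.unique(bins):
        sel = bins == b
        r_m = np.median(r[sel])
        below = r[sel][r[sel] <= r_m] - r_m
        sigma_r = np.sqrt(np.mean(below**2)) if below.size else np.inf
        index[sel] = (r[sel] - r_m) / sigma_r
    return index   # I_nonstellar > 6: classified as nonstellar
\end{verbatim}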

  
3.4 Photometric calibration


Figure 4: Photometric calibration of the plate No. 8629: internal magnitudes $m_{\rm intern}$ (panel a) and calibrated magnitudes (panel b), respectively, versus catalogued magnitudes $B_{\rm lit}$ for 331 standard stars. The solid lines represent the best fit. The data marked by plus signs are not included in the regression.

An accurate photometric calibration of each plate is crucial for the present study. The MRSP software provides internal magnitudes $m_{\rm intern}$ for all objects in the uncalibrated photometric system of the individual plates. A photometric calibration by standard stars has to be applied in order to transform $m_{\rm intern}$ into the photometric standard system m (e.g., the Johnson UBV system). In the vicinity of M 92, several sequences of photometric standard stars have been published (Table 2). The combination of these sequences covers the entire magnitude range of the present study and was used as the photometric reference system in this work. Only those standard stars whose data, for whatever reason, obviously did not fit into the general sequence on several different plates have been excluded from the reference sample. The number of stars in the final sample of photometric standards is listed in Table 2.

The Tautenburg photographic UBV system is known to match very closely the Johnson UBV system (van den Bergh 1964; Börngen & Chatchikjan 1967; Andruk & Kharchenko 1994). The deviations are generally smaller than the measurement uncertainties. This result is confirmed by the measurements of standard stars in the present study. Hence, a colour correction of the photometric data is not essential. This is a major advantage, since colour equations are calibrated onto stars and are, therefore, not simply transferable to QSOs with their fundamentally different spectral energy distribution (see also Hewett et al. 1995).

On the other hand, geometrical terms have to be included in the photometric calibration (Andruk & Kharchenko 1994). Since there are no published standard stars in the outer parts of our field, i.e. outside M 92, the calibration is done in three steps:

1.
individual calibration of each plate using the M 92 standard stars as reference system;
2.
calculation of the mean magnitude of each object in the field by averaging over the measurements from all plates;
3.
a second, improved calibration of each plate including geometrical terms using the system of the mean magnitudes of all objects as reference system.
Because the internal magnitudes $m_{\rm intern}$ were derived adopting a mean characteristic curve (Sect. 3.1), instead of that for the individual plate, the relationship $m_{\rm intern}=f(m)$ is nonlinear (Fig. 4a). A good approximation of f is given by

 \begin{displaymath}m_{\rm intern} = f(m) \approx
(1-\omega)\,\left(a_0+a_1\,m\right)+\omega\,\sum\limits_{i=0}^n c_i\,m^i,
\end{displaymath} (2)

where $a_i$ and $c_i$ are coefficients, $n=5\ldots9$, and $\omega(m)$ is a weight function expressed by

\begin{displaymath}\omega = \left\{
\begin{array}{l@{\,\,\forall\,}l}
1 & m\le m_{\rm min}\\
\displaystyle\frac{m_{\rm max}-m}{m_{\rm max}-m_{\rm min}} & m_{\rm min}<m<m_{\rm max}\\
0 & m\ge m_{\rm max}. \\
\end{array}\right.
\end{displaymath} (3)

This expression takes into account that the photometric accuracy degrades towards fainter magnitudes, which must result in a lower degree of the polynomial fit compared to brighter magnitudes in order to avoid an overfitting of the data. The choice of $m_{\rm min}$ and $m_{\rm max}$ depends on the limiting magnitude $m_{\rm lim}$ and on the individual goodness of fit of the approximation, with $m_{\rm max} = m_{\rm lim}-(0\ldots1)$ mag and $m_{\rm min} = m_{\rm max}-(2.5\ldots4)$ mag. The faint end of f is linear, whereas the fit is polynomial at brighter magnitudes. Thus, the desirable increasing stiffness of the fit towards fainter magnitudes is achieved. The regression between $m_{\rm intern}$ and m has to be calculated in the form $m_{\rm intern}=f(m)$, since the random variable is $m_{\rm intern}$, and not m. For the photometric calibration, the inverse function $m=f^{-1}(m_{\rm intern})$ is derived from f by a spline interpolation. Figure 4a illustrates the photometric calibration for the example of the B plate No. 8629. A good approximation can be achieved, in principle, over the magnitude range $B=8\ldots21$ (Fig. 4b). In the present study, the calibration is restricted to the relevant magnitude range $B=13\ldots B_{\rm lim}$.

After the individual calibration of all plates, the mean magnitude of each object is calculated. These magnitudes then serve as the reference system for the second photometric calibration which is done in the same way as the first one, except for the additional inclusion of geometrical terms

 \begin{displaymath}\Delta m(x,y) = \sum\limits_{i=1}^4 a_i\,x^i+\sum\limits_{i=1}^4 b_i\,y^i.
\end{displaymath} (4)

The values for the coefficients $a_i$ and $b_i$ are determined for each plate separately by multiple linear regression. The geometrical correction term $\Delta m$ reaches values up to 0.5 mag, which confirms its importance for an accurate photometric calibration.
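For illustration, the fit of Eq. (4) reduces to an ordinary linear least-squares problem in the monomials $x^i$ and $y^i$ (a sketch of our own, assuming NumPy arrays for the plate coordinates and magnitude residuals):

\begin{verbatim}
import numpy as np

def fit_geometric_terms(x, y, dm):
    """Least-square fit of the field-dependent correction, Eq. (4):
    Delta m(x,y) = sum_{i=1..4} a_i x^i + sum_{i=1..4} b_i y^i."""
    A = np.column_stack([x**i for i in range(1, 5)] +
                        [y**i for i in range(1, 5)])
    coeffs, *_ = np.linalg.lstsq(A, dm, rcond=None)
    return coeffs[:4], coeffs[4:]         # (a_1..a_4), (b_1..b_4)
\end{verbatim}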

The accuracy of the photometric calibration, which is limited by the grain noise of the emulsion, can be expressed by the standard deviation $\sigma$ of the calibrated magnitudes of non-variable objects measured on all plates. The dependence of $\sigma _B$ on the mean B magnitudes is shown in Fig. 5. The photometric accuracy $\sigma _B$ is better than 0.1 mag for $12\le B\le18$ and better than 0.07 mag for $14\le B\le17.5$.

  \begin{figure}
\includegraphics[width=8.4cm,clip]{fig5.eps}
\end{figure} Figure 5: Standard deviation $\sigma _B$ of the magnitudes Bi measured for each object on the plates i versus mean B magnitude. The continuous line represents the median value.

The U and V plates were calibrated in exactly the same way as the B plates. The resulting photometric accuracies are similar, yet the limiting magnitudes are brighter than for the B plates ($V_{\rm lim}\lesssim20.5$ and $U_{\rm lim}\lesssim19.5$; Fig. 1). Thus, U magnitudes were available for only 2/3 of all objects. For this reason, 11 deep U plates were digitally co-added to produce a deeper U band image. A detailed description of the digital stacking technique is given by Froebrich & Meusinger (2000). The object search, the determination of the object parameters, and the photometric calibration were performed on the stacked plate in the same way as on ordinary single plates. Figure 6 illustrates the good accuracy of the resulting calibrated U magnitudes derived from the co-added plate. Within the measuring accuracies, the U magnitudes derived from single plates agree with the photometric data derived from the stacked plate.

Figure 6: Photometric calibration of the co-added U plates: derived magnitudes $U_{\rm add}$ versus published values $U_{\rm lit}$ of the photometric standard stars.

  
3.5 Astrometric calibration

For the determination of proper motions, only deep B plates ($B_{\rm lim}\ge19$) were used, taken at zenith distances of less than $45^\circ$ and having no large plate faults. Among the 162 selected B plates, 135 plates match these constraints. The plate-to-plate transformation between the measured coordinates on a given plate and the reference system, defined by a master plate, is modelled by two-dimensional second-order polynomials. Higher-order polynomials and/or the introduction of magnitude- and/or colour-dependent terms do not significantly improve the fit. The advantage of low-order polynomials is that the fit is rigid and not sensitive to small-scale systematic proper motions in the field. Any remaining small-scale systematic residuals are averaged out due to the large number of plates.

The relative proper motion vector $\vec{\mu}_{\rm rel}$ of a given object is calculated from the linear least-square fit of the positions $(x(t_i),\,y(t_i))$, measured on the plates i, as a function of the epoch $t_i$. The residuals $\sigma_r$ between the positions of a given object measured on the individual plates and the mean positions as calculated from the linear regression are used as a measure of the positional accuracy. Figure 7 displays $\sigma_r$ as a function of the mean B magnitude. The residuals are smaller than $0\farcs1$ for the majority of stars with $B \le 18$ and smaller than $0\farcs2$ for most stars with $B\le19.5$. The median accuracy $\sigma_{\mu_{\rm rel}}$ of the derived relative proper motions is 0.5 mas yr$^{-1}$ for objects with $13\le B\le17.5$ and 1.0 mas yr$^{-1}$ for objects with $B\le 19.7$.
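A per-object sketch of this fit (illustrative only; the production pipeline additionally applies the plate-to-plate transformations described above):

\begin{verbatim}
import numpy as np

def relative_proper_motion(t, x, y):
    """Linear least-square fit of the plate positions (x(t_i), y(t_i))
    versus epoch t_i; returns mu_rel = (mu_x, mu_y) and the rms
    positional residual sigma_r."""
    mu_x, x0 = np.polyfit(t, x, 1)
    mu_y, y0 = np.polyfit(t, y, 1)
    res = np.hypot(x - (x0 + mu_x * t), y - (y0 + mu_y * t))
    return (mu_x, mu_y), np.sqrt(np.mean(res**2))
\end{verbatim}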

  \begin{figure}
\par\includegraphics[width=8.6cm,clip]{fig7.eps}
\end{figure} Figure 7: Distribution of the total astrometric least-square errors $\sigma _r$ of all objects versus mean B magnitude. Continuous line: median value.

The vector point diagram of the derived relative proper motions $\vec{\mu}_{\rm rel}$ is shown in Fig. 8. Due to their peculiar velocities, a considerable fraction of the star-like objects have large proper motions of more than $\sim$10 mas yr$^{-1}$ and are, therefore, easily recognised as galactic foreground stars. Stars are clearly much more scattered in this diagram than galaxies. Moreover, a systematic offset of the relative proper motions of the galaxies with respect to the bulk of the field stars is clearly indicated. (For the stars in the M 92 cluster area, i.e. within a given distance from the cluster centre, we find a strong concentration in the vector point diagram as well. The proper motion of M 92, however, will be the subject of a separate paper.)

  \begin{figure}
\par\includegraphics[width=8.7cm,clip]{fig8.eps}
\end{figure} Figure 8: Components of the relative proper motions vectors $\vec{\mu}_{\rm rel}$ of all objects in the field with distances $r\ge 8'$ from the centre of M 92 (gray dots). Visually identified galaxies are shown as black bullets.

The absolute proper motion of a given object with respect to the reference frame is calculated as

\begin{displaymath}\vec{\mu} = (\mu_\alpha\,\cos\delta,\,\mu_\delta)=\vec{\mu}_{\rm rel}
-\vec{\mu}_{\rm ref}
\end{displaymath} (5)

with the accuracy

\begin{displaymath}\vec{\sigma_{\mu}} = \vec{\sigma}(\vec{\mu}_{\rm rel}) + \vec{\sigma}(\vec{\mu}_{\rm ref}).
\end{displaymath} (6)

The relative proper motion of the extragalactic reference system $\vec{\mu}_{\rm ref}$ is derived from the accuracy-weighted mean of the relative proper motions of the 534 galaxies (see Sect. 3.3) in the basic object sample. We find

\begin{displaymath}\vec{\mu}_{\rm ref}=(3.430\pm0.097,\,4.130\pm0.093)~{\rm mas\,yr}^{-1}.
\end{displaymath} (7)

The frequency distribution of the accuracies $\sigma_\mu=\vert\vec{\sigma}_\mu\vert$ is shown in Fig. 9 for four different magnitude intervals. It should be noted that we reach Hipparcos-like accuracies of $\bar{\sigma}_\mu \approx 1.5$ mas yr$^{-1}$ (mean value) for a complete sample of stars down to B = 19.7.
Figure 9: Frequency distribution of the accuracy $\sigma_\mu$ of the measured absolute proper motions $\mu$, shown for four magnitude intervals. The width of the binning interval is 0.02 mas yr$^{-1}$.

4 Selection of the QSO candidates

  
4.1 Proper motion index


Figure 10: Frequency distribution of the proper motion indices $I_\mu$ of all galaxies in the field. The continuous line indicates the probability density of the parameter-free Weibull distribution $f(I_\mu )={\rm d} p / {\rm d} I_\mu $.


  \begin{figure}
\par\includegraphics[width=8.5cm,clip]{fig11.eps}
\end{figure} Figure 11: Proper motion index $I_\mu $ of all objects in the basic object sample (gray dots) versus B magnitude. The horizontal lines indicate the limits above which proper motions are to be considered as significantly non-zero: $I_\mu \ge 3$ (dashed) and $I_\mu \ge 4.3$ (solid). Galaxies are marked as black dots.

The probability p for an object to have a non-zero proper motion is measured by the proper motion index

 \begin{displaymath}I_\mu = \frac{\mu}{\sigma_\mu}\cdot
\end{displaymath} (8)

For stationary objects, the probability p follows a Weibull distribution

 \begin{displaymath}p=1-{\rm e}^{-\frac{1}{2}\,I_\mu^2},
\end{displaymath} (9)

if the accuracies $\sigma_x$ and $\sigma_y$ of the measured object positions (x,y) on the individual plates are limited by random, Gaussian-distributed errors, as is expected in the present study. Indeed, the measured $I_\mu$ distribution of the galaxies is found to be in good agreement with the density distribution $f(I_\mu )={\rm d} p / {\rm d} I_\mu $ of the (parameter-free) Weibull distribution (Fig. 10). A closer inspection of Fig. 10, however, reveals a relative overabundance of observed galaxies with large $I_\mu$. Most of these galaxies are close to the borders of the field, which suggests excluding the border region from further considerations.

The proper motion indices of all objects as a function of B are shown in Fig. 11. According to Eq. (9), the probability p for an object with measured $I_\mu \ge 3$ to have a non-zero proper motion is $p\ge 0.99$; for an object with $I_\mu \ge 4.3$ the probability for non-zero proper motion is $p\ge 0.9999$. Both limits are indicated in Fig. 11. The vast majority of the objects with $B \le 18.5$ have measured $I_\mu $ well above these limits. At fainter magnitudes, the mean $I_\mu $ decreases due to the increase of $\sigma _\mu $ (Eq. (8)). Nevertheless, significant proper motions $I_\mu > 3\ (4.3)$ are measured for 86% (76%) of the star-like objects with $B \le 19$. These objects are to be excluded from the QSO candidate list.
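For illustration, Eqs. (8) and (9) translate into a few lines (a sketch of our own):

\begin{verbatim}
import numpy as np

def proper_motion_index(mu, sigma_mu):
    """Eq. (8): I_mu = |mu| / sigma_mu."""
    return np.hypot(mu[0], mu[1]) / sigma_mu

def p_nonzero(i_mu):
    """Eq. (9): probability that the proper motion is truly non-zero."""
    return 1.0 - np.exp(-0.5 * i_mu**2)

print(p_nonzero(3.0))    # ~0.989  -> I_mu <= 3 keeps p below 0.99
print(p_nonzero(4.3))    # ~0.9999 -> I_mu <= 4.3 keeps p below 0.9999
\end{verbatim}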

  
4.2 Variability indices

The procedure of separating optically variable objects from the non-variable ones is different from the separation of stationary objects as described in the previous subsection. Since the variability of a given object is not a priori known (in contrast to the proper motion, which must be zero for QSOs), the approach cannot be to exclude all objects which do not satisfy a given variability criterion. Instead, we have to accept those objects for the candidate list which exhibit a significant variability. This different approach is unavoidable, yet causes a significant selection bias (Sect. 5).

Figure 12: Light-curve of the most variable object in our basic object sample, the Seyfert galaxy RXS J17220+4315. The measured B magnitudes are shown as a function of the epoch (panel a) and of the running number of the photometric data sorted by epoch (panel b), respectively. In panel b), the data within the epochs 1966-1968, 1973-1976, 1980-1983, 1989.4-1989.5, and 1992-1997 are interconnected for the sake of clarity.

Generally speaking, there are two different ways to identify variable objects. The external method compares the measured brightness fluctuations $\sigma_B$ of different yet comparable objects and identifies those objects with significantly enhanced $\sigma_B$. The internal method, on the other hand, is based on the analysis of the light-curve of each object, searching for systematic structures which cannot be explained by random noise and which are, therefore, taken as indications of variability. In the present study, both approaches are followed.

  
4.2.1 Overall variability index

A useful measure of the variability is given by the weighted standard deviation $I_{\rm variab}$ of the measured magnitudes $B_i$ on the plates i in units of the photometric random errors $\sigma_i(B_i)$ of plate i at magnitude $B_i$:

 \begin{displaymath}I_{\rm variab}^2
=\frac{1}{n-1}\sum\limits_{i=1}^n\left(\frac{B_i-\overline{B}}{\sigma_i(B_i)}\right)^2\cdot
\end{displaymath} (10)

For non-variable objects, $I_{\rm variab}\approx1$. In the general case, the measured standard deviation $\sigma_{\rm total}$ of a given object may be due to both the photometric uncertainties, expressed by $\sigma_{\rm obs}(B)$ (Fig. 5), and the intrinsic variability $\sigma _{\rm intrin}$ of this object. Hence, Eq. (10) can be generalized to

\begin{displaymath}I_{\rm variab}^2 = \frac{\sigma_{\rm total}^2(B)}{\sigma_{\rm obs}^2(B)}
= 1+\left(\frac{\sigma_{\rm intrin}}{\sigma_{\rm obs}(B)}\right)^2\cdot
\end{displaymath} (11)

An object is significantly variable if

 \begin{displaymath}I_{\rm variab}^2\ge\frac{\chi_{n-1,\alpha}^2}{n-1},
\end{displaymath} (12)

where $\chi_{n-1,\alpha}^2$ is the value of the $\chi^2$-distribution for n-1 degrees of freedom and the chosen significance level $\alpha$. For large n ($n\gtrsim5$), the value of $\chi_{n-1,\alpha}^2$ can be approximated by

 \begin{displaymath}\chi^2_{n-1,\alpha}=\frac{1}{2}\left(\sqrt{2n-3}+\Psi(\alpha)\right)^2
\end{displaymath} (13)

(Göhler 1987), where $\Psi(\alpha)=z_\alpha$ is the inverse of the Gaussian probability function $\Phi(z_\alpha)=\alpha$. By combining Eqs. (10), (12), and (13), we obtain the definition of a new variability index

 \begin{displaymath}I_\sigma=\Psi(\alpha)=\sqrt{2(n-1)}\,I_{\rm variab}-\sqrt{2n-3},
\end{displaymath} (14)

which we call the overall variability index, since it registers variability modes of different timescales with the same sensitivity (as long as the timescale does not exceed the baseline of the observations). The index $I_{\sigma }$ is identical with the inverse Gaussian probability function $\Psi(\alpha)$ and is, therefore, directly related to the probability $\alpha$ that the object is variable. In particular, the probability for variability is $\alpha\ge 0.95$ if $I_\sigma >1.645$, and $\alpha\ge 0.98$ if $I_\sigma >2$, respectively. Note that the test is one-sided.
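A sketch of the computation of $I_{\sigma}$ from a light-curve (an error-weighted mean magnitude is assumed, since the averaging is not specified above):

\begin{verbatim}
import numpy as np

def overall_variability_index(B, sigma):
    """Eqs. (10) and (14): I_sigma from the magnitudes B_i and their
    photometric errors sigma_i on the n individual plates.
    (An error-weighted mean B is assumed here.)"""
    B, sigma = np.asarray(B, float), np.asarray(sigma, float)
    n = B.size
    B_mean = np.average(B, weights=1.0 / sigma**2)
    i_variab = np.sqrt(np.sum(((B - B_mean) / sigma) ** 2) / (n - 1))
    return np.sqrt(2 * (n - 1)) * i_variab - np.sqrt(2 * n - 3)
\end{verbatim}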

4.2.2 Long-term variability index

For the thorough evaluation of light-curves, a number of different techniques have been proposed, ranging from the correlation function analysis (e.g., Edelson & Krolik 1988) and the structure function analysis (e.g., Simonetti et al. 1985) up to complex nonlinear methods like multi-fractal or chaos analysis (e.g., Vio et al. 1992). Here, we present a different approach since we primarily aim at a measure which expresses the probability that a given light-curve shows indications of any variability at all.

From the statistical point of view, the hypothesis of a random distribution of the data points can be rejected by means of significance tests. A simple yet powerful trend test is provided by the evaluation of the mean square successive differences $\Delta^2$ of the data points (von Neumann et al. 1941; Moore 1955): if a light-curve exhibits brightness fluctuations on timescales longer than the typical epoch difference of successive data points, then the square of the differences between successive data points

 \begin{displaymath}\Delta^2=\frac{1}{n-1}\sum\limits_{i=1}^{n-1}\left(B_i-B_{i+1}\right)^2
\end{displaymath} (15)

tends to be smaller than the square of the differences between randomly chosen data points (for illustration see Fig. 12). The latter term can be expressed in units of the standard deviation $\sigma$,

 \begin{displaymath}\sigma^2=\frac{1}{n-1}\sum\limits_{i=1}^{n}\left(B_i-\overline{B}\right)^2,
\end{displaymath} (16)

of the time series. For uncorrelated data points, $\Delta^2\approx2\,\sigma^2$. If there is, however, a long-term trend in the data, i.e. the data points are correlated, then $\Delta^2<2\,\sigma^2$. The trend is significant if

 \begin{displaymath}\frac{\Delta^2}{\sigma^2}<U_{n,\alpha},
\end{displaymath} (17)

with $U_{n,\alpha}$ being the limit for n Gaussian distributed data points Bi at the significance level $\alpha$, which can be approximated as (Sachs 1992)

 \begin{displaymath}U_{n,\alpha}=2-2\,\sqrt{\frac{n-2}{(n-1)(n+1)}}\,\Psi(\alpha).
\end{displaymath} (18)

Since the data points $B_i$ of the light-curve of a given object do not all have the same accuracy, Eqs. (15) and (16) have to be modified by introducing individual weighting factors which are equal to the inverse square of the photometric errors $\sigma_i$:

\begin{displaymath}\Delta^2=\frac{\sum\limits_{i=1}^{n-1}\frac{\left(B_i-B_{i+1}\right)^2}{\sigma_i^2+\sigma_{i+1}^2}}
{\sum\limits_{i=1}^{n-1}\frac{1}{\sigma_i^2+\sigma_{i+1}^2}}
\end{displaymath} (19)

and

\begin{displaymath}\sigma^2=\frac{\sum\limits_{i=1}^{n}\left(\frac{B_i-\overline{B}}{\sigma_i}\right)^2}
{\sum\limits_{i=1}^{n}\frac{1}{\sigma_i^2}}\cdot
\end{displaymath} (20)

By combining Eqs. (17)-(20), we obtain the variability index

 \begin{displaymath}I_\Delta=\Psi(\alpha)=\left(1-\frac{\Delta^2}{2\sigma^2}\right)\sqrt{\frac{(n-1)(n+1)}{n-2}},
\end{displaymath} (21)

which measures predominantly long-term trends and increases with the variability timescale (cf. Fig. 23). We will therefore call $I_{\Delta}$ the long-term variability index. As for $I_{\sigma}$, the long-term variability index $I_{\Delta}$ is directly related to the probability $\alpha$ that an object is long-term variable. In particular, the probability for long-term variability is $\alpha\ge 0.95$ if $I_\Delta>1.645$, and $\alpha\ge 0.98$ if $I_\Delta>2$, respectively.
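Analogously, a sketch of the computation of $I_{\Delta}$ from Eqs. (19)-(21) (again assuming an error-weighted mean magnitude and a light-curve sorted by epoch):

\begin{verbatim}
import numpy as np

def long_term_variability_index(B, sigma):
    """Eqs. (17)-(21): I_Delta from the light-curve (B_i, sigma_i),
    which must be sorted by epoch. A weighted mean B is assumed."""
    B, sigma = np.asarray(B, float), np.asarray(sigma, float)
    n = B.size
    w_pair = 1.0 / (sigma[:-1]**2 + sigma[1:]**2)
    delta2 = np.sum(w_pair * (B[:-1] - B[1:])**2) / np.sum(w_pair)  # Eq. (19)
    w = 1.0 / sigma**2
    B_mean = np.average(B, weights=w)
    sigma2 = np.sum(w * (B - B_mean)**2) / np.sum(w)                # Eq. (20)
    return (1 - delta2 / (2 * sigma2)) * np.sqrt((n - 1) * (n + 1) / (n - 2))
\end{verbatim}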


  \begin{figure}
\par\includegraphics[width=18cm,clip]{1126f13.eps}\end{figure} Figure 13: Long-term variability index $I_{\Delta }$ versus overall variability index $I_{\sigma }$ for the QSO candidates of the high-priority (filled circles), medium-priority (open circles), and low-priority (dots) subsample, respectively. The most variable object, the Seyfert galaxy RXSJ17220+4315 ( $I_\sigma =118,\,I_\Delta =10.8$), is outside the scale of this plot.

4.3 QSO candidate list

From the original list of about 35000 objects in the field (Sect. 3.2), all objects are excluded from further investigation which fail one or several of the basic constraints introduced above: a star-like classification (Sect. 3.3) and a position outside both the immediate M 92 cluster region and the border region of the field (Sect. 4.1).

The reduced sample comprises 19819 star-like objects in an 8.58 square-degree field on the sky. Among these objects, the QSO candidates are selected according to their proper motion and variability indices. It should be emphasized that both variability indices, $I_{\sigma}$ and $I_{\Delta}$, are applied. The long-term variability index $I_{\Delta}$ is particularly important for the efficiency of the survey since QSOs are known to vary on long timescales (e.g., Angione 1973; Hawkins 1983; Smith et al. 1993; Hook et al. 1994; Meusinger et al. 1994; Smith & Nair 1995; Véron & Hawkins 1995; Sirola et al. 1998; Meusinger et al. 1999). According to Smith & Nair (1995), the average observed base-level timescale for radio-quiet QSOs is about one decade. It has been suggested by Hawkins (1983) to reduce the stellar contamination of a QSO variability survey by searching for objects with variability timescales of about 1 yr or longer. A combination of an overall variability index and a long-term variability index has been applied for the VPM search in the M 3 field as well, but the indices used there were different from those defined in the present paper.


 

 
Table 3: Selection of the QSO candidates.

criterion                   high-priority        medium-priority       low-priority
                            QSO candidates       QSO candidates        QSO candidates
proper motion               $I_\mu\le3$          $I_\mu\le4.3$         $I_\mu\le4.3$
overall variability         $I_\sigma\ge2$       $I_\sigma\ge1.645$    --
long-term variability       $I_\Delta\ge2$       $I_\Delta\ge1.645$    --
B magnitude                 $16.5\le B\le19.7$   $16.5\le B\le19.7$    $13\le B\le19.7$
number of objects in list   62                   57 (+62)              5709 (+62+57)


The QSO candidate sample is subdivided into three subsamples of different priority. The low-priority sample comprises all stationary star-like objects with $13\le B\le19.7$. It is reasonable to expect that all of the QSOs in the specified magnitude range are included in this sample. Those objects from this sample with variability indices indicating significant variability ($\alpha = 0.95$) are regarded as QSO candidates of medium priority; objects with highly significant ($\alpha = 0.98$) variability and highly significant zero proper motion ($\alpha = 0.9999$) are QSO candidates of high priority. The selection criteria for these three subsamples are summarized in Table 3, along with the corresponding numbers of candidates. The variability properties of the QSO candidates are shown in Fig. 13.
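The cuts of Table 3 can be summarized in a small classification function (an illustrative sketch of our own):

\begin{verbatim}
def candidate_priority(i_mu, i_sigma, i_delta, B):
    """Assign a QSO-candidate class according to Table 3
    (returns None for objects failing the low-priority cuts)."""
    if not (13.0 <= B <= 19.7) or i_mu > 4.3:
        return None
    if 16.5 <= B and i_mu <= 3.0 and i_sigma >= 2.0 and i_delta >= 2.0:
        return "high"
    if 16.5 <= B and i_sigma >= 1.645 and i_delta >= 1.645:
        return "medium"
    return "low"
\end{verbatim}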


 

 
Table 4: Fraction of star-like objects in the current sample that match the different selection criteria.

magnitude                       proper motion                overall variability                   long-term variability                 variability    all
                                $I_\mu\le4.3$ / $I_\mu\le3$  $I_\sigma\ge1.645$ / $I_\sigma\ge2$   $I_\Delta\ge1.645$ / $I_\Delta\ge2$   combined       combined
$16.5\le\overline{B}\le19.7$    35% / 20%                    12% / 9%                              10% / 6%                              1.9% / 1.0%    0.8% / 0.4%
$16.5\le\overline{B}\le19$      24% / 13%                    11% / 7%                              10% / 6%                              2.0% / 1.1%    0.6% / 0.3%
$16.5\le\overline{B}\le18$      14% / 7%                     10% / 6%                              10% / 6%                              2.4% / 1.4%    0.4% / 0.2%
$13.0\le\overline{B}\le16.5$    8% / 4%                      20% / 17%                             15% / 11%                             7.9% / 5.8%    0.8% / 0.2%


The efficiency of the various selection criteria, as well as combinations of them, is summarised in Table 4. As already noted in Sect. 1, the proper motion selection is less efficient than the combined variability selection criteria. The efficiency of the proper motion selection depends strongly on the measuring uncertainties and is higher at brighter magnitudes. Nevertheless, the combination of the variability search with the zero proper motion constraint reduces the sample by a further factor of about three. In addition, the proper motion criterion is essentially bias-free.

In addition to the constraints listed in Table 3, both the medium- and the high-priority candidate samples have been confined to objects fainter than B=16.5. This is motivated by the low surface density of QSOs with $B\le16.5$ (Kembhavi & Narlikar 1999). Moreover, bright QSOs tend to have low redshifts (Wisotzki 2000) and should therefore be easily detectable by their typical UV excess. However, there is no UV excess object ($U-B\le-0.4$) among the low-priority candidates with $B\le16.5$.

For our search field, the catalogue of Quasars and Active Galactic Nuclei by Véron-Cetty & Véron (2000) lists four QSOs (brighter than $M_B=-23$). One of them (Q1715+4316) is located in the immediate cluster region and is therefore excluded from our object list. The high-redshift QSO FIRST J1709+4201 (z=4.23) is too faint in B for the present survey. The remaining two QSOs, B3 1707+420 (z=0.307, $M_B=-23$) and RXS J17150+4429 (z=0.154, $M_B=-23$), are both detected in the present study as high-priority candidates with $(I_{\mu},I_{\sigma},I_{\Delta})=(1.4,31.4,8.2)$ and (2.0,26.9,8.6), respectively. The catalogue by Véron-Cetty & Véron comprises two Seyfert galaxies in our field, FIRST J1718+4249 and RXS J17220+4315. Both galaxies have also been identified as stationary and highly variable objects with $(I_{\mu},I_{\sigma},I_{\Delta})=(3.4,19.1,6.2)$ and (2.1,118,10.8) and are therefore high-priority QSO candidates in the sense of the present survey. With $I_{\rm nonstellar}=5.4$ and $\Psi=2018$, the optical counterpart of the radio source FIRST J1718+4249 is, however, slightly above the demarcation between stellar and nonstellar objects.

Figure 14: Colour-colour diagram of the QSO candidates (symbols as in Fig. 13).

We have also cross-correlated the QSO candidate list from the Asiago-RASS/ESO survey (Grazian et al. 2000) against our basic object sample. Only one object from the former catalogue, 1RXS J171935.9+424518, is located within our survey field. This source is clearly identified with a non-variable ($I_\sigma=-0.76$, $I_\Delta=-1.99$) foreground star ($B=16.23$, $B-V=0.76$, $U-B=0.33$) with a highly significant proper motion ($I_\mu=15$).

The colour-colour diagram of all QSO candidates is shown in Fig. 14. The time-averaged colour indices were derived from the U, B, V magnitudes measured on the plates from the epoch $1968.2\pm2$. The colour distribution of both the high- and medium-priority candidates is similar to that of the QSO candidates from the VPM search in the M 3 field (Scholz et al. 1997; their Fig. 16). It should be emphasized that we do not use colours for the candidate selection but merely as a way of checking how the VPM search compares with more traditional methods. The colours suggest that a large fraction of the VPM-selected candidates are typical QSOs such as would be selected in more traditional surveys. A UV excess of $U-B\le-0.4$ is found for more than 70% of the high-priority candidates and for more than 20% of the medium-priority candidates. The two-colour QSO selection criterion $U-B \le -0.75(B-V)-0.05$ (see Scholz et al. 1997) is likewise matched by more than 70% of the high-priority QSO candidates. On the other hand, we find a substantial fraction of red QSO candidates, among them a few objects with very red colours. To clarify the nature of these objects is obviously of particular interest in the context of the present project. As will be described in detail in Paper II, follow-up spectroscopy revealed 65 QSOs/Seyfert 1s, but none of them is found to be unusually red.

  
5 Selection effects of the VPM survey

The selection criteria of the present survey are based on the following properties: B magnitude, image structure, proper motion, and B band variability. In this section, we will discuss the influence of each criterion on the resulting QSO sample.

5.1 Magnitude

For objects with redshifts $z\le0.55$, the present QSO survey is complete with regard to magnitude limitations. This results simply from the definition of QSOs (i.e., $M_B\le-23$) in combination with the completeness limit $B_{\rm lim}=19.7$ for the candidate samples (Table 3). For larger z, the survey is consequently magnitude-limited, where the limiting absolute magnitude is given by

 \begin{displaymath}
M_{B,\rm lim} = B_{\rm lim} + 5 -K_{B}(z)-A_{B}-\Delta\,m_{z}
\end{displaymath} (22)

with

\begin{displaymath}\Delta\,m_{z} = 5\,\log_{10}\,\left[\frac{c\,z}{H_0}\,\left(1+\frac{z(1-q_0)}{\sqrt{1+2\,q_0\,z}+1+q_0\,z}\right)\right],
\end{displaymath}

where the luminosity distance in the brackets is expressed in parsec, $A_B=(R_V+1)\,E_{B-V}=0.08$ ($E_{B-V}=0.02$; Sandage 1969; $R_V=3.1$), and adopting $H_0=50$ km s$^{-1}$ Mpc$^{-1}$ and $q_0=0$. Since the values of the K-correction in the Johnson B band were not available for the entire relevant redshift range, we calculated $K_B$ adopting a mean SED of QSOs as given by Francis et al. (1991). The results are shown in Fig. 15.
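For illustration, Eq. (22) can be evaluated numerically as follows (a sketch of our own; the sample value of $K_B$ is an assumption and should be read off Fig. 15):

\begin{verbatim}
import numpy as np

C = 2.998e5          # speed of light [km/s]
H0, Q0 = 50.0, 0.0   # cosmology adopted in the text
A_B, B_LIM = 0.08, 19.7

def delta_m(z):
    """Distance-modulus term of Eq. (22), with d_L expressed in pc."""
    d_mpc = (C * z / H0) * (1 + z * (1 - Q0) /
                            (np.sqrt(1 + 2 * Q0 * z) + 1 + Q0 * z))
    return 5 * np.log10(d_mpc * 1.0e6)

def M_B_lim(z, K_B):
    """Limiting absolute magnitude, Eq. (22)."""
    return B_LIM + 5 - K_B - A_B - delta_m(z)

print(M_B_lim(0.55, -0.3))   # ~ -23: the survey is complete to z ~ 0.55
\end{verbatim}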
Figure 15: K-correction for the Johnson B system as a function of redshift z.

Due to the low fraction of high-luminosity QSOs with MB < -28, the redshift limit of the survey is estimated to $z_{\rm lim} \approx 3$. In addition, the detection of QSOs with z>3 suffers from the strong decrease of the B magnitudes when the Lyman-break is shifted into the B band.

5.2 Morphology

As in most optical QSO surveys, we exclude objects with significant nonstellar morphology from the candidate list. In order to estimate whether this criterion leads to a significant incompleteness or not, we have to consider the possibility that low-redshift QSOs appear nonstellar on Tautenburg Schmidt plates.

According to Eq. (22), QSOs with B>16.5 must have redshifts z>0.12. The morphological classification parameters as defined in Sect. 3.3 correspond to a nonstellar classification for objects with intrinsic full-width at half-maximum (FWHM) diameters $d_{\rm FWHM}>3''$, corresponding to a linear FWHM diameter of more than 10 kpc at $z\ge0.12$. Therefore, it cannot be completely ruled out that a low-luminosity QSO in a giant galaxy at low redshift may be excluded due to the morphology criterion. From the local QSO density (e.g., Kembhavi & Narlikar 1999), we expect fewer than two QSOs with z<0.2 in our survey field. Therefore, a possible selection effect introduced by the morphological classification is statistically insignificant for the resulting QSO sample, yet it may be important for lower-luminosity AGNs. On the other hand, low-redshift QSOs are expected to have a strong UV excess $U-B\le-0.4$. Among the nonstellar objects in the M 92 field with $B\le16.5$, only two have such a UV excess. These two objects are catalogued low-redshift galaxies (NGC 6323 and PGC 060118) with absolute magnitudes $M_B>-21$ (Marzke et al. 1996).

5.3 Proper motion

The fraction of QSOs erroneously excluded from the candidate list due to their proper motion index is 1% or less for the high-priority sample ($I_\mu\le3$), and 0.01% or less for the medium- and low-priority samples ($I_\mu\le4.3$) (Sect. 4.1). Compared to the selection effects due to the variability criterion (Sect. 5.4), the incompleteness introduced by the proper motion criterion is negligible. In addition, such an effect obviously does not depend on the apparent magnitudes, redshifts, or intrinsic properties of the QSOs. It is expected, therefore, that the proper motion selection does not produce a significant selection bias.

  
5.4 Variability

The selection of variable objects introduces a significant bias. The most important factors are the photometric accuracy, the number of observational epochs, and the timescales and amplitudes of the QSO variability. These items will be discussed separately below.

  
5.4.1 Number of epochs

The dependence of the variability indices $I_{\sigma }$ and $I_{\Delta }$ on the number n of epochs can be expressed easily by the Taylor series of the corresponding definition relations (Eqs. (14), (21)):

 
\begin{displaymath}
I_\sigma = \sqrt{2}\,(I_{\rm variab}-1)\,n^{1/2}
+\sqrt{2}\,\left(\frac{3}{4}-\frac{1}{2}\,I_{\rm variab}\right)\,n^{-1/2}
+O(n^{-3/2})
\approx \frac{1}{\sqrt{2}}\left(\frac{\sigma_{\rm intrin}}{\sigma_{\rm obs}(B)}\right)^{2}\sqrt{n}
\end{displaymath} (23)

and
 
\begin{displaymath}
I_\Delta = \left(1-\frac{\Delta^2}{2\,\sigma^2}\right)\left[n^{1/2}+n^{-1/2}+O(n^{-3/2})\right]
\approx \left(1-\frac{\Delta^2}{2\,\sigma^2}\right)\sqrt{n}\,.
\end{displaymath} (24)

Both $I_{\sigma }$ and $I_{\Delta }$ are proportional to $\sqrt{n}$. Figure 16 illustrates the tight correlation between the apparent magnitude B of the objects and their number n of observations. The mean number of observations decreases by a factor of two, from $n\approx150$ for $B \le 18$ down to $n\approx73$ for $B_{\rm lim}=19.7$. Thus, the magnitude-dependent decrease of n reduces the variability indices by a factor of about 1.4.
\begin{figure}
\par\includegraphics[width=8.1cm,clip]{fig16.eps}
\end{figure} Figure 16: Number of observations n for each object as a function of its mean apparent magnitude B. The tight correlation is illustrated by the median curve (solid). The dashed vertical line indicates the survey limit $B_{\rm lim}=19.7$.

According to Eq. (23), the minimum intrinsic variability $\sigma_{\rm intrin}^{\rm min}$ a QSO needs to reach the detection limit $I_\sigma ^{\rm min}=1.645$ scales as $\sigma_{\rm intrin}^{\rm min}\sim n^{-1/4}$. Hence, QSOs at the limiting magnitude $B_{\rm lim}=19.7$ must exhibit a 20% higher intrinsic variability than bright QSOs ($B \le 18$) to exceed the variability detection limit.
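Both scalings are elementary to verify numerically; the following sketch merely restates the $\sqrt{n}$ and $n^{-1/4}$ dependences for the epoch numbers read off Fig. 16.

\begin{verbatim}
import numpy as np

n_bright, n_faint = 150, 73           # mean number of epochs at B <= 18
                                      # and at B_lim = 19.7 (Fig. 16)
# Variability indices scale with sqrt(n) (Eqs. (23), (24)):
print(np.sqrt(n_bright / n_faint))    # ~1.4: indices shrink by this factor

# The detection limit scales with n**(-1/4) (this subsection):
print((n_bright / n_faint) ** 0.25)   # ~1.2: faint QSOs need ~20% more
                                      # intrinsic variability
\end{verbatim}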

  
5.4.2 Intrinsic variability and photometric accuracy

Figure 17: Standard deviation of the intrinsic magnitude fluctuation, $\sigma _{\rm intrin}$, for the QSOs from Hook et al. (1994) versus apparent mean B magnitude (plus signs). The curves indicate the detection limits $\sigma_{\rm intrin}^{\min}$ of the present survey for $I_\sigma \ge 2$ (dashed) and $I_\sigma \ge 1.645$ (solid), respectively; the dotted curve represents the case $I_\sigma \ge 1.645$ if a constant number of n=153 measurements per object is assumed.


Figure 18: Fraction $\eta (B)$ of the QSOs from Fig. 17 with intrinsic variabilities $\sigma _{\rm intrin} > \sigma _{\rm intrin}^{\rm min}$ for $I_\sigma \ge 2$ (dashed curve) and $I_\sigma \ge 1.645$ (solid curve). The binning interval is [B-0.2, B+0.2].

For a given detection limit $I_\sigma^{\rm min}$ and number n of epochs, the minimum intrinsic variability $\sigma_{\rm intrin}^{\rm min}$, above which QSOs are regarded as significantly variable, is directly proportional to the photometric accuracy $\sigma _{\rm obs}$ (Eq. (23)). The latter is a function of the apparent magnitude B. In the magnitude range of the present survey, the mean photometric accuracy varies by a factor of five (Fig. 5, solid line). Hence, the detection limit for intrinsic variability varies by a factor of five due to the variation of $\sigma _{\rm obs}$, compared to only 20% due to the variation of n (see previous subsection).

In order to discuss the influence of $\sigma _{\rm obs}$ on the completeness of the present survey, we use Eq. (23) to calculate the detection limit $\sigma_{\rm intrin}^{\rm min}(B)$ for variability index limits $I_\sigma ^{\rm min}=1.645$ and $I_\sigma^{\rm min}=2$, respectively, assuming a mean number n(B) of observations according to Fig. 16 (solid line) as well as a constant n=153. The results (Fig. 17) show that, for $B\approx16$, objects with an intrinsic variability $\sigma_{\rm intrin} = 0.03$ mag can still be detected as variables. The limit increases to 0.05 mag at B=17.8, 0.1 mag at B=18.8, and 0.2 mag at $B=B_{\rm lim}=19.7$.
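Inverting the approximation in Eq. (23) gives the detection limit directly. In the following sketch, the value $\sigma_{\rm obs}=0.07$ mag is an illustrative input only; the measured accuracies are those of Fig. 5.

\begin{verbatim}
import numpy as np

def sigma_intrin_min(sigma_obs, n, i_min=1.645):
    """Invert Eq. (23): smallest intrinsic scatter detected as variable."""
    return sigma_obs * np.sqrt(np.sqrt(2.0) * i_min / np.sqrt(n))

print(sigma_intrin_min(1.0, 153))    # ~0.44*sigma_obs, the amplitude quoted
                                     # for the I_sigma = 1.645 limit in
                                     # Sect. 5.4.3
print(sigma_intrin_min(0.07, 150))   # ~0.03 mag for an assumed (illustrative)
                                     # sigma_obs = 0.07 mag at bright B
\end{verbatim}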

Now, let us confront the detection limits with the measured intrinsic variabilities $\sigma _{\rm intrin}$ of a real sample of QSOs. For this purpose, the variability data provided by Hook et al. (1994) for a large sample of 332 QSOs in a field near the South Galactic Pole are well suited. This sample (hereafter called the "Hook-sample'') was taken from several optical surveys which did not use variability as a selection criterion. For the sake of a simple estimation, we assume that the observed $\sigma _{\rm intrin}$ of the Hook-QSOs are comparable to those of the QSOs in the present survey despite the shorter time-baseline (16 years vs. 34 years) and the smaller number of epochs in the former study. Hence, the ratio of the number of Hook-QSOs above the variability detection limit to the total number of Hook-QSOs at a given B illustrates the local completeness rate $\eta$ as a function of B expected for the present survey. As can be seen from Fig. 18, the incompleteness due to the limited photometric accuracy is insignificant for $B \le 18$, yet increases strongly for fainter B. At the limiting magnitude $B_{\rm lim}=19.7$ of the present survey, the local completeness rate amounts to $\eta(B_{\rm lim})\approx30\%$.
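Schematically, the construction of $\eta (B)$ can be written as follows. In this sketch, the arrays b_qso and sig_qso stand for the magnitudes and intrinsic variabilities of a comparison sample (here the Hook-sample), and sigma_min_of_b for the detection limit of Fig. 17; all three are assumed inputs.

\begin{verbatim}
import numpy as np

def local_completeness(b_qso, sig_qso, sigma_min_of_b, half_bin=0.2):
    """eta(B): fraction of comparison QSOs whose intrinsic scatter exceeds
    the detection limit, in bins [B-0.2, B+0.2] as in Fig. 18."""
    b_grid = np.arange(b_qso.min(), b_qso.max(), 0.1)
    eta = np.full(b_grid.size, np.nan)
    for i, b in enumerate(b_grid):
        in_bin = np.abs(b_qso - b) <= half_bin
        if in_bin.any():
            eta[i] = np.mean(sig_qso[in_bin] > sigma_min_of_b(b))
    return b_grid, eta
\end{verbatim}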

The number of QSOs with $I_\sigma \ge I_{\sigma}^{\rm min}$ which will be detected up to the limiting magnitude $B_{\rm lim}$ is given by


\begin{displaymath}
N(B_{\rm lim},I_{\sigma}^{\rm min})
=A\,\int\limits_{B=-\infty}^{B_{\rm lim}}\mu(B)\,\eta(B,\,I_\sigma^{\rm min})\,{\rm d}B,
\end{displaymath} (25)

where A is the effective size of the search field, and $\mu(B)$ is the mean QSO surface density at the mean magnitude B.

A significant difference between our VPM survey and most conventional QSO surveys is that we consider mean QSO magnitudes time-averaged over more than three decades instead of time-dependent magnitudes at a single epoch. Surface densities $\mu(B)$ derived from single-epoch magnitudes would be overestimated as a consequence of the QSO variability. We estimate the corresponding correction factors according to Kembhavi & Narlikar (1999)

\begin{displaymath}\frac{\Delta \mu(B)}{\mu(B)} = \frac{\overline{\sigma^2}}{2}(a\,\ln10)^2
\end{displaymath} (26)

for a given slope a of the $\log\,\mu$-B relation and based on the frequency distribution of the $\sigma _{\rm intrin}$ of the Hook-QSOs. For $a=0.31\ldots0.88$ (Kembhavi & Narlikar 1999) and a mean variance of the Hook-QSOs of $\overline{\sigma^2}=0.053$ mag$^2$, the over-estimation factor is $\Delta\mu(B)/\mu(B)=1.4\ldots11\%$, with the higher limit being valid for bright QSOs and the lower limit for faint ones. We therefore note that a direct comparison of the surface densities derived in this paper with uncorrected ones derived by single-epoch surveys is not advisable. For Eq. (25), we adopt the mean surface densities given by Hartwick & Schade (1990), which were derived from single-epoch observations but were corrected by these authors for the over-estimation. Their correction factors may differ from the values derived above by a few per cent, which is, however, negligible for the further estimations.
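The quoted range of the correction factor follows directly from Eq. (26); as a quick numerical check:

\begin{verbatim}
import numpy as np

sigma2_mean = 0.053                  # mean variance of the Hook-QSOs [mag^2]
for a in (0.31, 0.88):               # slope range of the log mu - B relation
    corr = 0.5 * sigma2_mean * (a * np.log(10.0)) ** 2
    print(a, corr)                   # ~1.4% (a = 0.31) and ~11% (a = 0.88)
\end{verbatim}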
Figure 19: Expected total completeness $\epsilon (B_{\rm lim})$ of the present survey up to a given limiting magnitude $B_{\rm lim}$, estimated from Eq. (27) for the data shown in Fig. 17. It is assumed that the QSO candidate selection is based on the $I_{\sigma }$ criterion only, with $I_\sigma \ge 2$ (dashed curve) and $I_\sigma \ge 1.645$ (solid curve), respectively.

Adopting $\eta (B)$ from Fig. 18, we estimate N=62 for the present survey with $A=8.58$ deg$^2$, $B_{\rm lim}=19.7$, and $I_\sigma ^{\rm min}=1.645$. This number is to be compared with the total number of 93 QSOs expected from Eq. (25) for $\eta(B)={\rm const}=1$. Hence, the selection criterion $I_\sigma \ge 1.645$ yields a total survey completeness of 67% (cf. Fig. 19). The completeness is only marginally influenced by the choice of $I_\sigma^{\rm min}$ (Figs. 18 and 19); a decrease of the detection limit below 1.645 does not significantly increase the completeness rate, yet leads to a rapid increase in the contamination rate of the candidate sample. The total survey completeness $\epsilon$ as a function of the limiting magnitude $B_{\rm lim}$,

\begin{displaymath}
\epsilon(B_{\rm lim}) =
\frac{\int\limits_{B=-\infty}^{B_{\rm lim}}\mu(B)\,\eta(B)\,{\rm d}B}
{\int\limits_{B=-\infty}^{B_{\rm lim}}\mu(B)\,{\rm d}B},
\end{displaymath} (27)

is shown in Fig. 19.
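Numerically, Eq. (27) is a ratio of two cumulative integrals. A minimal sketch for tabulated values of $\mu(B)$ and $\eta (B)$ (assumed inputs, e.g. from Hartwick & Schade 1990 and Fig. 18):

\begin{verbatim}
import numpy as np

def total_completeness(b_lim, b_grid, mu, eta):
    """Eq. (27): eta-weighted over unweighted cumulative QSO counts."""
    sel = b_grid <= b_lim
    num = np.trapz(mu[sel] * eta[sel], b_grid[sel])
    den = np.trapz(mu[sel], b_grid[sel])
    return num / den
\end{verbatim}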

With regard to further variability-based QSO surveys, it is of general interest to derive an estimator for the expected completeness rate as a function of both the photometric accuracy $\sigma _{\rm obs}$ and the given number of epochs n. We take for granted that such a survey has a sufficiently long time-baseline, i.e. two decades or more. We construct the distribution $N(\sigma_{\rm intrin})$ of the intrinsic variabilities of the Hook-QSOs, assuming that $\sigma _{\rm intrin}$ is not correlated with B (cf. Fig. 17). The completeness $\eta(\sigma_{\rm intrin}^{\rm min})$ is then given by the fraction $N(\sigma_{\rm intrin} \ge \sigma_{\rm intrin}^{\rm min})/N_{\rm tot}$, derived again from the Hook-sample, where $N_{\rm tot}$ is the total number of all Hook-QSOs. The results (Fig. 20) can be used, in combination with Eq. (23), to derive the fraction of QSOs with a variability probability $\alpha\ge 0.95$, i.e. $I_\sigma \ge 1.645$ (Fig. 21).

Figure 20: Fraction of Hook-QSOs with an intrinsic variability $\sigma _{\rm intrin} > \sigma _{\rm intrin}^{\rm min}$.


Figure 21: Fraction of Hook-QSOs with significant variability $I_\sigma \ge 1.645$ as a function of a given photometric accuracy $\sigma _{\rm obs}$ and number of observations n (see Eq. (23)).


Figure 22: Fraction of Hook-QSOs with significant variability $I_\sigma >1.645$ (solid line) and $I_\sigma >2$ (dashed line) as a function of the redshift z. Only objects with $B\le 19.7$ are considered. The effect is due to the magnitude-dependent completeness $\eta (B)$ (see text).

The interpretation of Fig. 21 is straightforward: let us assume that a photometric accuracy of, e.g., $\sigma_{\rm obs}=0.2$ mag is reached (at a given magnitude, or for the whole survey). With n=16, we have $\sigma_{\rm obs}/\sqrt[4]{n}=0.1$ mag, which corresponds to a completeness of $\eta=61$% (Fig. 21). With n=64, however, the completeness rate is raised to $\eta=91$%. The same high completeness rate is reached for n=16 if the photometric accuracy is $\sigma_{\rm obs}=0.1$ mag. In general, if a variability-based QSO survey aims at a completeness rate of $\eta=90\%$ or more, it has to meet the requirement

 \begin{displaymath}
\sigma_{\rm obs}/\sqrt[4]{n}\le0.05\,\mbox{mag}.
\end{displaymath} (28)

Obviously, the completeness of the survey can be improved by the co-addition of plates of similar epochs, since the gain in photometric accuracy ($\sigma _{\rm obs}$ improves with the square root of the number of co-added plates) outweighs the loss in the number n of observations (Eq. (28)). A variability survey based on co-added Schmidt plates was first proposed by Hawkins (1994).
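The worked example above and the gain from co-addition both reduce to evaluating the figure of merit $\sigma_{\rm obs}/\sqrt[4]{n}$; the completeness percentages themselves must still be read off Fig. 21.

\begin{verbatim}
import numpy as np

def figure_of_merit(sigma_obs, n):
    """The quantity bounded by Eq. (28)."""
    return sigma_obs / n ** 0.25

print(figure_of_merit(0.2, 16))   # 0.10 mag  -> eta ~ 61% (Fig. 21)
print(figure_of_merit(0.2, 64))   # ~0.07 mag -> eta ~ 91%
print(figure_of_merit(0.1, 16))   # 0.05 mag  -> meets Eq. (28)

# Co-adding m plates of similar epoch: sigma_obs improves by sqrt(m),
# n shrinks by m, so the figure of merit improves by a factor m**0.25:
m = 4
print(figure_of_merit(0.2 / np.sqrt(m), 16 / m))   # ~0.07 mag
\end{verbatim}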

It was noticed by Hook et al. (1994) that surveys using a variability selection criterion will be biased. This can be demonstrated by comparing the z distribution of all Hook-QSOs with the distribution of the variable Hook-QSOs with $I_\sigma \ge 1.645$. If the sample of Hook-QSOs can be taken as representative, the result (Fig. 22) illustrates the selection function of the present variability search in redshift space. A bias against high-luminosity, and therefore high-redshift, QSOs is expected as a consequence of the anticorrelation between variability and luminosity. However, we also have to consider the fact that the two subsamples have different completeness functions $\eta (B)$ and therefore different distributions of apparent magnitudes B. Hence, let us compare the redshift distribution of the subsample of variable Hook-QSOs ($I_\sigma \ge 1.645$) with randomly drawn subsamples of Hook-QSOs which have the same local completeness rate $\eta (B)$, and hence the same B distribution, as the former, yet are not selected by a variability criterion. The comparison was done by means of the non-parametric Wilcoxon signed rank test for paired samples (Sachs 1992), which provides a sensitive statistical test of whether two subsamples are drawn from the same distribution or not. We do not detect significantly different redshift distributions at the 90% significance level. This obviously means that the resulting QSO sample is primarily determined by the completeness function $\eta (B)$, i.e. by the photometric accuracy of the survey, rather than by the intrinsic relationship between variability and luminosity.

Figure 23: Variability indices $I_{\sigma }$ and $I_{\Delta }$ as a function of the period T for simulated sinusoidal light curves. The solid lines indicate the detection limits $I_\sigma ^{\rm min}=1.645$ and $I_\Delta ^{\rm min}=1.645$. For the assumption $\sigma _{\rm intrin}=\sigma _{\rm obs}$ (open circles), the measured $I_{\sigma }$ scatter around the expected value $\bar{I}_\sigma=7.25$ (dashed line). In this case, the objects have significant overall variability ($I_\sigma \gg I_\sigma ^{\rm min}=1.645$) for all T, and the long-term variability $I_{\Delta }$ is significant for $T\ge 0.03$ yr. For objects with an intrinsic variability $\sigma _{\rm intrin}=0.44\,\sigma _{\rm obs}$ (filled circles), the measured $I_{\sigma }$ scatter around $I_\sigma ^{\rm min}=1.645$, and significant long-term variability ($I_\Delta \gtrsim I_\Delta ^{\rm min}$) is detected only for $T\gtrsim 1$ yr.

5.4.3 Timescales of variability

We discuss the dependence of $I_{\Delta }$ on the variability timescales by means of a simple model assuming a sinusoidal light-curve superimposed on Gaussian noise representing the photometric errors. It is, of course, well-known that the light-curves of real QSOs are irregular and non-periodic. However, we argue that any QSO light-curve can be approximated by a Fourier series of sinusoidal curves, and that the present simple model can be regarded as such a series truncated after the first term. In this sense, $I_{\Delta }$ and $I_{\sigma }$ were derived from the numerically simulated light-curves as functions of the period T (observer frame) and the amplitude $a=\sigma_{\rm intrin}/\sigma_{\rm obs}$ of the variability for n=153 epochs. The results (Fig. 23) confirm that $I_{\sigma }$ is indeed independent of T, while $I_{\Delta }$ increases with T, as expected (Sect. 4.2).
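The simulation can be sketched as follows. Note that this sketch uses only the leading-order form of $I_{\sigma }$ from Eq. (23) instead of the full definition of Eq. (14), and omits $I_{\Delta }$, whose computation requires the full prescription of Eq. (21); the photometric error of 0.1 mag and the random seed are arbitrary illustrative choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def i_sigma(mags, sigma_obs):
    """Leading-order form of the overall variability index (Eq. (23)):
    I_sigma ~ sqrt(2)*(s/sigma_obs - 1)*sqrt(n), s = sample scatter."""
    s = np.std(mags, ddof=1)
    return np.sqrt(2.0) * (s / sigma_obs - 1.0) * np.sqrt(len(mags))

t = np.sort(rng.uniform(0.0, 34.0, 153))   # 153 epochs over a ~34 yr baseline
sigma_obs = 0.1                            # assumed photometric error [mag]
amp = np.sqrt(2.0) * sigma_obs             # sine amplitude giving
                                           # sigma_intrin = sigma_obs
for period in (0.03, 1.0, 10.0):           # period T [yr]
    lc = amp * np.sin(2.0 * np.pi * t / period) \
         + rng.normal(0.0, sigma_obs, t.size)
    print(period, i_sigma(lc, sigma_obs))  # scatters around ~7.25 for all T
\end{verbatim}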

An intrinsic variability amplitude a=0.44 corresponds to the $I_{\sigma}=1.645$ detection threshold of the present survey. As can be seen from Fig. 23, the corresponding $I_{\Delta }$ exceeds the long-term variability threshold $I_\Delta=1.645$ only for $T\gtrsim 1$ yr. Thus, the present survey is constrained by the long-term variability index $I_{\Delta }$ for variability timescales $T\ll1$ yr and by the overall variability index $I_{\sigma }$ for objects with $T\gg1$ yr. The characteristic variability timescales of typical optically selected QSO samples were found to be of the order of 1 yr (Hook et al. 1994, and references therein). A sample-averaged timescale of more than 1 yr in the quasar frame, corresponding to about 4 yr in the observer frame, has been derived by Meusinger et al. (1994) for a small sample of QSOs with a time-baseline comparable to that of the present study. A similar result was found by Sirola et al. (1998) for a larger sample with a shorter baseline.

The fraction of QSOs which match the $I_\sigma >1.645$ criterion was estimated to be about $\nicefrac{2}{3}$ of all QSOs in the field with $B\le B_{\rm lim}=19.7$ (Sect. 5.4.2). The simulations discussed above suggest that the present QSO selection is constrained by $I_{\sigma }$ and $I_{\Delta }$ at approximately the same level. Hence, the total fraction of QSOs in the field with $B\le 19.7$ which meet both the $I_\sigma \ge 1.645$ and the $I_\Delta\ge1.645$ criterion is estimated to be $\left(\nicefrac{2}{3}\right)^2$. This corresponds to a total of 42 QSOs expected among the high-priority and medium-priority candidates, whereas about 50 further QSOs obviously fail to meet at least one of the two variability criteria. These latter QSOs are expected to be contained entirely in the low-priority sample, having magnitudes close to the survey limit.
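The bookkeeping behind these numbers is simple arithmetic:

\begin{verbatim}
n_total = 93                 # QSOs expected to B_lim = 19.7 (Sect. 5.4.2)
f = 2.0 / 3.0                # fraction passing each variability criterion
print(round(n_total * f**2))        # ~41, i.e. the 42 QSOs quoted above
print(round(n_total * (1 - f**2)))  # ~52, i.e. the "about 50" further QSOs
\end{verbatim}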

6 Conclusions

In this paper, we have presented the basic observational data, the data reduction, and the selection of QSO candidates by means of a variability and proper motion search. The survey uses 208 digitised Schmidt plates covering a time-baseline of more than three decades. This long baseline is important for the detection of QSO long-term variability, but also for the accurate measurement of proper motions. Thanks to proper motion data with Hipparcos-like accuracy, the combination of the variability search with the zero-proper-motion constraint yields a substantial enhancement of the efficiency.

The selection criteria of a VPM survey are fundamentally different from those of conventional optical surveys. The main observational selection effect is due to the magnitude-dependent accuracy of the photometric measurements. As a rule of thumb, a completeness of 90% (80%) is reached when, for a given number of observations n, the ratio $\sigma_{\rm obs}/\sqrt[4]{n}$ does not exceed 0.05 mag (0.07 mag). For the present survey, the a priori estimation suggests that 42 QSOs ($M_B\le-23$) are contained in the high- and medium-priority candidate samples. The low-priority sample is expected to comprise the remaining QSOs. The resulting QSO sample from the high- and medium-priority candidates is expected to be biased against higher redshifts ($z\gtrsim1.5$), mainly due to the strong magnitude-dependence of the completeness function $\eta (B)$.

The follow-up spectroscopy of the candidates from the high-priority and the medium-priority subsamples has been completed. The properties of the discovered VPM QSOs will be discussed in Paper II of this series.

Acknowledgements
This paper formed part of J.B.'s Ph.D. thesis. J.B. acknowledges financial support from the Deutsche Forschungsgemeinschaft under grants Me1350/3 and Me1350/8. This project would not have been possible without the help of several colleagues, above all C. Högner and U. Laux, in the process of scanning the large number of plates. We are also grateful to A. Bruch and R. Ungruhe for providing the MRSP software and to H.-J. Tucholke and R.-D. Scholz for supporting the installation of this software. Finally, the referee, Prof. P. Véron, is gratefully acknowledged for his constructive criticism, which has helped to improve the paper.

This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The research has also made use of the SIMBAD database, operated at CDS, Strasbourg, France.

References

 


Copyright ESO 2001