4 Separating stars and galaxies

The star-galaxy classification problem addresses the task of labeling objects in an image either as stars or as galaxies, based on parameters extracted from them. Classification of astronomical objects at the limits of a survey is a rather difficult task, and has traditionally been carried out by human experts with intuitive skills and great experience. This approach is no longer feasible because of the staggering quantities of data being produced by large surveys, and because of the need to bring objectivity into the classification, so that results from different groups can be compared. It is thus necessary to have machines that can perform the task with the efficiency of a human expert (but at much greater speed), and with a classification that remains robust over variations in observing conditions.

Processing the vast quantities of data produced by new and ongoing surveys and generating accurate catalogs of the objects detected in them is a formidable task, and reliable and fast classifiers are much in demand. Following the work by Odewahn et al. (1992), there has been growing interest in this area in the past decade. SExtractor (Bertin & Arnouts 1996) is a popular, publicly available general purpose tool for this application. SExtractor accepts a FITS image of a region of the sky as input and provides a catalog of the detected objects as output. It has a built-in back propagation neural network which was trained once and for all by the authors of SExtractor, using about $10^6$ simulated images of stars and galaxies generated under different conditions of pixel-scale, seeing and detection limits. In SExtractor an object is classified quantitatively by a stellarity index ranging from zero to unity, with zero representing a galaxy and unity representing a star. The stellarity index is also a crude measure of the confidence that SExtractor has in the classification: an index of 0.0 or 1.0 indicates that SExtractor confidently classifies the object as a galaxy or a star respectively, while an index of 0.5 indicates that SExtractor is unable to classify the object. The input to the neural network used by SExtractor consists of nine parameters for each object, extracted from the image after processing it through a series of thresholding, deblending and photometric routines. Of the nine input parameters, the first eight are isophotal areas and the ninth is the peak intensity of the object. In addition to these nine parameters, a control parameter, the seeing full width at half maximum (FWHM) of the image, is used to standardize the image parameters against the intrinsic fuzziness of the image due to the seeing conditions. In practice, some fine tuning of this control parameter is required to obtain realistic output from the network, due to the wide range of observing conditions encountered in the data. A scheme for carrying out such tuning is described in the SExtractor manual.
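In catalog post-processing, the stellarity index lends itself to simple thresholding. The following is a minimal sketch, assuming an ASCII catalog whose last column is CLASS_STAR; the 0.1/0.9 cut values are illustrative, not prescribed by SExtractor.

    # A minimal sketch (not from the paper) of turning the SExtractor
    # stellarity index into coarse labels. The cut values and the
    # assumption that CLASS_STAR is the last column are illustrative.
    import numpy as np

    def label_from_stellarity(class_star, star_cut=0.9, galaxy_cut=0.1):
        """Map the stellarity index in [0, 1] onto coarse labels."""
        if class_star >= star_cut:
            return "star"          # index near 1: confident star
        if class_star <= galaxy_cut:
            return "galaxy"        # index near 0: confident galaxy
        return "unclassified"      # index near 0.5: SExtractor is unsure

    catalog = np.loadtxt("objects.cat")   # hypothetical ASCII catalog
    labels = [label_from_stellarity(row[-1]) for row in catalog]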

Among other packages proposed recently for star-galaxy classification in wide field images is NExtractor (NExt) by Andreon et al. (2000). NExt claims to be the first of its kind to use a neural network both for extracting the principal components in the feature space and for classification. The performance of the network was evaluated over twenty-five parameters that were expected to be characteristic of the class label of the objects, and it was found that six of these parameters, namely the harmonic and Kron radii, two gradients of the PSF, the second total moment and a ratio involving measures of the intensity and area of the observed object, were sufficient to produce optimum classification. A comparison of NExt performance with that of SExtractor by Andreon et al. (2000) showed that NExt has a classification accuracy as good as or better than that of SExtractor. The NExt code is not publicly available at the present time (Andreon, personal communication), so a comparison with DBNN is not possible.

4.1 Constructing the training set

The first requirement for the construction of any good classifier is a complete training set. Completeness here means that the training set contains examples with all possible variations in the target space, and that the feature vectors derived from them are distinct in the feature space of their class labels. In the context of star-galaxy classification, this means that the training set should contain examples of the various morphologies and flux levels, of both stars and galaxies, spanning the entire range of parameters of the objects that are to be classified later.

We decided to construct our training set from an R band image from the publicly available NOAO Deep Wide Field Survey (NDWFS). This survey will eventually cover 18 square degrees of sky. The first data from the survey, obtained using the MOSAIC-I CCD camera on the KPNO 4 m Mayall telescope, were released in January 2001. We chose data from this survey because its high dynamic range, large area coverage and high sensitivity allowed us to maintain uniformity between the moderately large training set and numerous test sets. The training set was carefully constructed from a subimage of the R band image NDWFSJ1426p3456, which has the best seeing conditions among the data released so far. Details of the image are listed in Table 2.

 

 
Table 2: Summary of the NDWFS field used for constructing training and test sets.

  Field name                      NDWFSJ1426p3456
  Filter                          R
  R.A. at field center (J2000)    14:26:01.41
  Dec. at field center (J2000)    +34:56:31.67
  Field size                      $36.960' \times 38.367'$
  Total exposure time (hours)     1.2
  Seeing FWHM (arcsec)            1.16



 

 
Table 3: Values of important SExtractor parameters used in construction of the training and test sets.

  Parameter          Value
  DETECT_MINAREA     64
  DETECT_THRESH      3
  ANALYSIS_THRESH    1.0
  FILTER             N
  DEBLEND_NTHRESH    32
  DEBLEND_MINCONT    0.01
  CLEAN              N
  SATUR_LEVEL        49999.0
  MAG_ZEROPOINT      30.698
  GAIN               46.2
  PIXEL_SCALE        0.258
  SEEING_FWHM        1.161
  BACKPHOTO_TYPE     LOCAL
  THRESH_TYPE        RELATIVE


We used SExtractor as a preprocessor for selection of objects for the training set and for obtaining photometric parameters for classification. The values of some critical configuration parameters supplied to SExtractor for construction of the object catalog are listed in Table 3. Saturated stars were excluded from the training set by setting the SATUR_LEVEL parameter. SEEING_FWHM was measured from the point spread function (PSF) of the brightest unsaturated stars in the image. The DETECT_MINAREA parameter was set so that every selected object had a diameter of at least 1.8 times the FWHM of the PSF. DETECT_THRESH was set conservatively to 3 times the standard deviation of the background, which was estimated locally for each source. ANALYSIS_THRESH was set to a lower value to allow more reliable estimation of the classification parameters we used. No cleaning or filtering of extracted sources was done. DEBLEND_NTHRESH and DEBLEND_MINCONT were set by trial and error, using the guidelines in the SExtractor documentation. The PIXEL_SCALE, MAG_ZEROPOINT and GAIN parameters were obtained from descriptions of the NDWFS data products in the NOAO archives. SExtractor computes several internal error flags for each object and reports them as the catalog parameter FLAGS. Objects with a FLAGS value $\geq 4$ were deleted from the training set. This ensured that saturated objects, objects close to the image boundary, objects with incomplete aperture or isophotal data, and objects where a memory overflow occurred during deblending or extraction were not used.
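The FLAGS cut described above amounts to a one-line filter on the catalog. A minimal sketch, assuming a whitespace-separated ASCII catalog and a hypothetical column position for FLAGS:

    # A sketch of the FLAGS >= 4 cut. FLAGS is a bitmask; values of 4
    # and above cover saturation, truncation at the image boundary,
    # incomplete aperture or isophotal data, and memory overflows during
    # deblending or extraction. The FLAGS column index is an assumption
    # about the output .param file used.
    import numpy as np

    catalog = np.loadtxt("ndwf10.cat")    # hypothetical catalog file name
    FLAGS_COL = 10                        # assumed position of FLAGS
    clean = catalog[catalog[:, FLAGS_COL] < 4]
    print(f"kept {len(clean)} of {len(catalog)} objects")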

 

 
Table 4: Details of the regions used in constructing the training and test data sets. Stars and galaxies from NDWF10 were used in training the network, while the other two data sets were used to test the performance of the network after training. Each region was $2001\times 2001$ pixels in size.

  Data Label   RA (J2000)    Dec (J2000)   Stars   Galaxies   Total
  NDWF10       14:26:28.76   34:59:19.94   83      319        402
  NDWF5        14:27:11.23   34:50:50.92   65      239        304
  NDWF14       14:26:28.18   35:07:55.69   89      319        408


The training set was constructed from objects satisfying the above criteria in a $2001\times 2001$ pixel region of the image described in Table 2. The image region we used was selected at random. The objects were largely in the Kron-Cousins magnitude range 20-26; objects brighter than this limit are mostly saturated stars, which were not used. Each object in the training set was visually classified as a star or galaxy by two of the authors working separately, after examining the radial intensity profile, surface map and intensity contours. Less than 2% of the sources were classified differently by the two authors; these discrepancies were resolved by a combined examination by both authors. It was not possible to visually classify 35 of the objects, and these were deleted from the training set. All the deleted objects are fainter than magnitude 26. Some details about the training set, named NDWF10, are given in Table 4. Visual classification of many of the brighter stars was aided by the perceptibly non-circular PSF of the image. After visual classification was complete, SExtractor classification was obtained for all sources in the training set. An object-by-object comparison of the visual and SExtractor classifications showed that the latter reproduced the results of the visual classification in 97.76% of the cases (see Table 5). The number of stars in the training set is considerably smaller than the number of galaxies because of the high galactic latitude of the field and the faint magnitudes of objects in the training set.

4.2 Obtaining optimum parameters for classification

Once a training set is available, the next task is to select the parameters that the network will use for classification. We tested all the parameters extracted by SExtractor for their suitability as classification parameters, and also derived some new parameters from the basic SExtractor outputs. For the classification we sought parameters which (a) were not strongly dependent on the properties of the instrument/telescope and on observing conditions; (b) did not depend on photometric calibration of the data, which is not always available; and (c) resulted in the clearest separation between stars and galaxies. To meet the last requirement, we plotted each parameter against the FWHM of the intensity profile and identified the parameters which provided the best separation. After extensive experimentation with our training set data, we found the following three parameters to be the most suitable:

1. Elongation measure: the logarithm of the ratio of the second order moments along the major and minor axes of the lowest isophote of the object. For a star, the ratio should be near unity; for our training set it differs from unity because of the slightly elliptical PSF.

2. The standardized FWHM measure: the logarithm of the ratio of the FWHM of the object (obtained from a Gaussian fit to the intensity profile) to the FWHM of the point spread function for the image.

3. The gradient parameter (slope): the logarithm of the ratio of the central peak count to the FWHM of the object, normalized to the FWHM of the point spread function for the image.
We trained the DBNN using the values of these parameters for the visually classified set of stars and galaxies. In Fig. 1 we show plots of the three final DBNN parameters against each other, with stars and galaxies marked differently. It is clear that excellent separation between stars and galaxies is obtained. A sketch of how these parameters can be computed from SExtractor outputs is given after the figure.
Figure 1: Clusters formed by stars and galaxies in the feature space. Galaxies are shown as dots and stars as stars.
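The three parameters map onto standard SExtractor outputs. In the sketch below, the use of ELONGATION, FWHM_IMAGE and FLUX_MAX, and the exact normalization of the gradient parameter, are our assumptions; the paper defines the parameters only in words.

    # A sketch of the three DBNN input parameters, computed from standard
    # SExtractor outputs (ELONGATION = A/B, FWHM_IMAGE, FLUX_MAX). Whether
    # the elongation measure uses the ratio of the second moments (i.e.
    # (A/B)^2) or A/B itself, and how the gradient is normalized, are
    # assumptions about the wording above.
    import numpy as np

    PSF_FWHM = 1.161 / 0.258     # image PSF FWHM in pixels (Table 3 values)

    def dbnn_parameters(elongation, fwhm_image, flux_max):
        p1 = np.log10(elongation ** 2)                    # elongation measure
        p2 = np.log10(fwhm_image / PSF_FWHM)              # standardized FWHM
        p3 = np.log10(flux_max / fwhm_image * PSF_FWHM)   # gradient (slope)
        return p1, p2, p3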

   
4.3 Testing the network performance

We tested the network on two sub-regions ($2001 \times 2001$ pixels each) of the NDWFSJ1426p3456 field. The central coordinates of the two test set images are listed in Table 4. Using different regions of the same field for testing ensures that erroneous classification due to variations in data quality is not an issue. As in the case of the training set, these sub-regions were selected at random. The object catalogs for the test sets were constructed using the same SExtractor configuration as for the training set. DBNN marked some objects as boundary examples, meaning that the confidence level for the winning class was not more than 10 percentage points above the plain guess estimate (50%).
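The boundary-example rule can be stated compactly. A minimal sketch, assuming the DBNN confidence is reported as a posterior probability for the winning class:

    # Flag an object as a boundary example when the winning-class
    # confidence is not more than 10 percentage points above a plain
    # 50/50 guess, i.e. confidence <= 0.60.
    def is_marginal(confidence, margin=0.10):
        return confidence <= 0.5 + margin

    # is_marginal(0.57) -> True; is_marginal(0.85) -> False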

In test set 1 (NDWF5), 32 out of 336 objects were deleted because they could not be classified visually. Of the remaining 304 objects, DBNN marked 15 as marginal but classified 10 of these correctly. Two objects were misclassified. In test set 2 (NDWF14), 14 out of 422 objects for which visual classification was not possible were deleted. Of the remaining 408 objects, DBNN marked 17 as marginal but classified 12 of these correctly. One object was misclassified. The results for the two test sets are summarized in Table 5. The classification accuracy is marginally better than that of SExtractor. This marginal superiority of DBNN on the test set data is not significant once some allowance is made for subjectivity in the construction of the test set. However, the fact that DBNN achieves high classification accuracy with only 3 parameters, compared to the 10 (9 + 1 control) parameters used by SExtractor, is of some importance.
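The quoted accuracies can be reproduced from these counts, if the errors are taken to be the misclassified objects plus the marginal objects that were classified incorrectly:

    # Sanity check of the quoted DBNN accuracies from the counts above.
    for name, total, marginal, marginal_correct, misclassified in [
            ("NDWF5", 304, 15, 10, 2),
            ("NDWF14", 408, 17, 12, 1)]:
        errors = (marginal - marginal_correct) + misclassified
        print(f"{name}: {100 * (total - errors) / total:.2f}%")
    # prints 97.70% and 98.53% (Table 5 quotes 98.52%, a rounding difference)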

 

 
Table 5: Comparison of classification accuracy of the DBNN and SExtractor on the NDWFS data. There is no entry under DBNN for NDWF10, since this data set was used in training the DBNN network.

  Data Label   Classification accuracy
               SExtractor   DBNN
  NDWF10       97.76%       -
  NDWF5        96.05%       97.70%
  NDWF14       96.32%       98.52%


4.4 Effects of image degradation

An important consideration is the performance of DBNN (and SExtractor) on low signal to noise images, in which even visual classification becomes difficult. In order to examine the effects of noise on the classification, we chose to degrade the training image NDWF10 by adding progressively higher levels of noise, rather than use additional low S/N data. We used the IRAF task mknoise to increase the noise level of our training set. The level of noise was controlled by using progressively higher values for the background counts. The original image has a background of 879 counts. Four additional images were created, with added background counts of 20%, 40%, 60% and 80% of the original background; mknoise was used to add Poisson noise to each of these 4 images, so that they represent progressively higher levels of background noise and lower S/N ratio compared to the original image. Note that the noise added here is in addition to the noise introduced during the acquisition of the NDWFS data (which is already present in the original undegraded image). Sources were extracted from the degraded images with the same SExtractor parameters used for the original training set. The number of sources found in each degraded image is listed in Table 6. As expected, the noisier the image, the smaller the number of objects selected. The DBNN was not retrained; sources in the degraded images were classified using the DBNN trained with the original training set.
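The degradation step can be reproduced outside IRAF. The sketch below uses numpy and astropy in place of the mknoise task actually used: each output image receives extra sky equal to a fraction of the original 879-count background, drawn per pixel from a Poisson distribution so that the noise rises with the added background. The FITS file names are hypothetical.

    # A numpy/astropy sketch of the degradation procedure (the paper
    # used the IRAF task mknoise). Each output image gets extra sky
    # equal to a fraction of the original background, Poisson-distributed
    # per pixel.
    import numpy as np
    from astropy.io import fits

    ORIG_BACKGROUND = 879.0              # counts, from the original image
    rng = np.random.default_rng(0)

    data = fits.getdata("NDWF10.fits")   # hypothetical file name
    for frac in (0.2, 0.4, 0.6, 0.8):
        extra = rng.poisson(frac * ORIG_BACKGROUND, size=data.shape)
        fits.writeto(f"NDWF10_bg{int(frac * 100)}.fits",
                     data + extra, overwrite=True)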

 

 
Table 6: Number of objects selected in the degraded images. The first line gives values for the undegraded data. The 4 degraded images are in decreasing order of S/N ratio. The criteria for object selection were the same as those for the undegraded training set image NDWF10.

  Image                       Background (counts)   Objects with $m_R > 25$   Objects selected
  NDWF10 (undegraded image)   879.0                 313                       402
  NDWF104X5                   175.8                 313                       402
  NDWF103X5                   351.6                 2                         49
  NDWF102X5                   527.4                 1                         37
  NDWF10X5                    703.2                 0                         30



 

 
Table 7: Classification accuracy of SExtractor and DBNN as the NDWFS image is gradually degraded. Objects that failed with a confidence level greater than 60% are marked as real failures. The number of objects with R magnitude greater than 25 in each set is shown in square brackets.

  Image       Marginal objects   Marginally passed    Marginally failed   Real failures
              DBNN   SEx         DBNN      SEx        DBNN     SEx        DBNN     SEx
  NDWF10      31     34          21 [17]   31 [28]    10 [8]   3 [3]      0        6 [3]
  NDWF104X5   31     34          21 [17]   31 [28]    10 [8]   3 [3]      0        6 [3]
  NDWF103X5   4      3           3 [0]     3 [0]      1 [0]    0          0        1 [0]
  NDWF102X5   5      2           2 [0]     2 [0]      3 [0]    0          0        1 [0]
  NDWF10X5    1      1           0         1          1 [0]    0          1 [0]    0


We have listed in Table 7 the performance of SExtractor and DBNN on the degraded images. We find that DBNN performance is slightly poorer than that of SExtractor on the fainter sources. This may be because SExtractor uses the magnitudes at 8 different isophotes as input parameters while DBNN looks for gradients; for fainter objects the gradients are smaller, causing DBNN to fail for a few faint objects. A factor in favour of DBNN is that it was trained with possibly contaminated training data (due to the limitations of the humans who constructed the training and test sets) and can be retrained, while for SExtractor the training data was pristine (simulated) and frequent retraining is not practical.

The second observation from the table is that, at brighter magnitudes, DBNN produces more accurate classification of marginal objects than SExtractor. For objects classified at high confidence levels, too, results from DBNN are marginally better than those of SExtractor. It is important to keep in mind that the confidence levels reported by a neural network do not indicate the difficulty of visual classification by humans; they are parameter dependent and merely quantify the appropriateness of a set of parameters. The actual measures of the efficiency of a classifier are (1) the total number of objects it can classify with good confidence (a good classifier should have a minimal number of marginal objects at all magnitudes), and (2) the number of errors it makes at high confidence levels, which should be minimal. The table shows that DBNN does at least as well as SExtractor in overall efficiency of classification on both counts.
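Both measures can be computed directly from per-object results. A minimal sketch, assuming each object carries a winning-class confidence and a correctness flag, and using the 60% confidence threshold from Table 7:

    # Two classifier efficiency measures discussed above: the number of
    # marginal (low-confidence) objects, and the number of real failures
    # (errors made at high confidence). Thresholds follow Table 7.
    def efficiency(results, high_conf=0.60):
        """results: iterable of (confidence, is_correct) pairs."""
        marginal = sum(1 for conf, _ in results if conf <= high_conf)
        real_failures = sum(1 for conf, ok in results
                            if conf > high_conf and not ok)
        return marginal, real_failures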

