A&A 370, 689-706 (2001)
DOI: 10.1051/0004-6361:20010264
I. Valtchanov - M. Pierre - R. Gastaud
CEA/DSM/DAPNIA Service d'Astrophysique, 91191 Gif-sur-Yvette, France
Received 31 October 2000 / Accepted 8 February 2001
Abstract
Procedures based on current methods to detect sources in X-ray
images are applied to simulated XMM-Newton images. All significant
instrumental effects are taken into account, and two kinds of sources
are considered: unresolved sources represented by the telescope
PSF, and extended ones represented by a β-profile model.
Different sets of test cases with controlled and realistic input
configurations are constructed in order to analyze the influence of
confusion on the source analysis and also to choose the best methods
and strategies to resolve the difficulties.
In the general case of point-like and extended objects the mixed
approach of multiresolution (wavelet) filtering and subsequent
detection by SExtractor gives the best results. In ideal cases of
isolated sources, flux errors are within 15-20%. The maximum
likelihood technique outperforms the others for point-like sources
when the PSF model used in the fit is the same as in the images.
However, the number of spurious detections is quite large.
The classification using the half-light radius and SExtractor stellarity
index is successful in more than 98% of the cases. This suggests
that average-luminosity clusters of galaxies can be detected at
redshifts greater than 1.5 for moderate exposure times in the energy
band below 5 keV, provided that there is no confusion or blending by
nearby sources.
We also find that, with the best currently available packages,
confusion and completeness problems start to appear at fluxes around
6 × 10^-16 erg/s/cm^2 in the [0.5-2] keV band for XMM-Newton deep surveys.
Key words: methods: data analysis - techniques: image processing - X-rays: general
X-ray astronomy has entered a new era now that Chandra and XMM-Newton are in orbit. Their high sensitivity and unprecedented image quality hold great promise but also pose new challenges. In this paper, we outline problems of object detection in X-ray images that were not previously encountered. In doing so, we compare the performance of various detection techniques on simulated XMM-Newton test images, incorporating the main instrumental characteristics.
X-ray observations consist of counting incoming photons one by one, recording their arrival time, position and energy. The event list is later used to create images for a given pixel scale and energy band. Various X-ray telescope effects complicate this simple picture: the point spread function (PSF) and the telescope effective area (the vignetting effect), both dependent on the off-axis angle and the incoming photon energy; detector effects such as quantum efficiency variations and zones not exposed to X-ray photons; and environmental and background effects such as solar flares and the particle background. Even for relatively long exposures, X-ray images can contain very few photons, and some sources may contain only a few tens of photons spread over a large area. Consequently, the source detection and characterization procedures must be able to cope with these difficulties.
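As a concrete illustration of the binning step that turns an event list into an image, here is a minimal sketch; the function name, pixel scale and photon positions are invented for the example:

```python
import numpy as np

def bin_events(x, y, pixel_scale, image_size):
    """Bin an event list into a counts image.

    x, y        : photon sky positions in arcsec
    pixel_scale : pixel size in arcsec
    image_size  : number of pixels per side
    """
    edges = np.arange(image_size + 1) * pixel_scale
    # histogram2d's first axis is the row (y) coordinate
    image, _, _ = np.histogram2d(y, x, bins=(edges, edges))
    return image

# Five photons on a tiny 4x4-pixel image with 4 arcsec pixels
x = np.array([1.0, 2.0, 9.0, 9.5, 15.0])
y = np.array([1.0, 1.5, 9.0, 9.1, 15.0])
img = bin_events(x, y, pixel_scale=4.0, image_size=4)
```

In practice one would first filter the event list on energy to produce an image per band, but the binning itself is no more than this two-dimensional histogram.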
As an example, the same hypothetical input situation is shown
schematically in Fig. 1 for
ROSAT,
XMM-Newton
and
Chandra
. XMM-Newton's rather large
PSF, coupled with its higher sensitivity, leads to the detection of
more objects but also to blending and source confusion, which become
severe for long exposures depending on the energy band. Confusion
problems in the hard band above 5 keV are less important, given the
smaller number of objects and smaller count rate of energetic photons.
Thus, we concentrate our analysis mainly on source detection
problems for the more complicated case of the XMM-Newton energy bands below
5 keV.
Each X-ray mission provides data analysis packages - EXSAS for ROSAT
(Zimmermann et al. 1994), CIAO for Chandra (Dobrzycki et al.
1999) and XMM-Newton Science Analysis System
(XMM-SAS). They include
procedures for source detection, and in this paper we estimate and
compare their performances on simulated images using various types of
objects. These procedures make use of techniques such as Maximum
Likelihood (ML), Wavelet Transformation (WT), Voronoi Tessellation and
Percolation (VTP).
In Sect. 2 we describe the X-ray image simulations. A
short presentation of the detection procedures is given in
Sect. 3. Tests using only point sources are presented in
Sect. 4, and extended sources in Sect. 5.
We have analyzed realistic simulations of a shallow and a deep
extragalactic field with only point sources in Sect. 6
and with extended objects in Sect. 7 for an exposure of
10 ks. Finally, we investigate the problems of confusion and
completeness in two energy bands - [0.5-2] and [2-10] keV for two
exposures - 10 ks and 100 ks (Sect. 8).
Section 9 presents the conclusions. (H0 = 50 km s^-1 Mpc^-1,
h = 0.5 and q0 = 0.5 are used.)
The simulations are essential to understand and qualify the behavior of the different detection and analysis packages. We have developed a simulation program that generates X-ray images for given exposure times with extended and point-like objects. It takes into account the main instrumental characteristics of XMM-Newton and the total sensitivity of the three EPIC instruments. The procedure is fast and flexible and consists of two independent simulation tasks: object generation (positions, fluxes, properties) and instrumental effects. The possibility to apply the instrumental response directly to images is also implemented, which is especially useful when one wants to use sky predictions from numerical simulations (cf. Pierre et al. 2000).
A summary of the simulated image parameters is given in Table 1.

Parameter | Value |
Image scale | |
Image size | |
Exposure time | 10 ks & 100 ks |
Energy bands | [0.5-2] & [2-10] keV |
PSF on axis | |
Total background (pn+2MOS) | |
[0.5-2] keV | 1.78 × 10^-5 cts/s/pixel (0.0041 cts/s/arcmin^2) |
[2-10] keV | 2.4 × 10^-5 cts/s/pixel (0.0055 cts/s/arcmin^2) |
The point-like sources are assumed to be AGNs or QSOs with a power-law
spectrum with a photon index of 2 and a flux distribution following the
log N-log S relations of Hasinger et al. (1998,
2001) and Giacconi et al. (2000) in the two energy
bands.
The PSF model is derived from the currently available calibration
files. The on-board PSF data are generally in very good agreement with
the previous ground-based calibrations (Aschenbach et al. 2000). We
must stress that the model PSF is an azimuthal average; in reality,
especially at large off-axis angles, its shape can be quite distorted.
However, the off-axis and energy dependences are not yet available in
the analytic model (Erd et al. 2000). This is not crucial, as the
energy dependence in the bands used is moderate and we confine all the
analysis to the inner part of the field-of-view, where the PSF
blurring is negligible.
The extended objects are modeled by a β-profile (Cavaliere &
Fusco-Femiano 1976) with a fixed core radius and a fixed value of β.
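For illustration, the β-model surface brightness can be sketched as follows; the β = 2/3 in the example is the value commonly adopted in the literature, not necessarily the one used in these simulations:

```python
import numpy as np

def beta_profile(r, s0, r_core, beta):
    """Beta-model surface brightness (Cavaliere & Fusco-Femiano 1976):
    S(r) = s0 * (1 + (r/r_core)^2) ** (0.5 - 3*beta).
    r and r_core must be in the same units; s0 is the central brightness."""
    return s0 * (1.0 + (np.asarray(r) / r_core) ** 2) ** (0.5 - 3.0 * beta)

# At r = r_core with beta = 2/3 the brightness falls to 2**(-1.5) of the centre
s = beta_profile(r=10.0, s0=1.0, r_core=10.0, beta=2.0 / 3.0)
```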
A thermal plasma spectrum (Raymond & Smith
1977) is assumed for different temperatures, luminosities and
redshifts.
The source spectra (extended and point-like) are folded with the spectral response function for the total sensitivity of the three XMM-Newton EPIC instruments (MOS1, MOS2 and pn with thin filters) by means of XSPEC (Arnaud 1996) to produce the count rates in different energy bands. The actual choice of the energy bands is not important for this comparison study, although some objects can be more efficiently detected in particular energy ranges.
As an example, we show in Table 2 (available on line) the resulting count rates for extended sources assuming that they represent an average cluster of galaxies.
The background in the simulations includes realistic background values derived from the XMM-Newton in-orbit measurements in the Lockman Hole (Watson et al. 2001).
The calculated count rates for the objects and the background photons
are subject to the vignetting effect: some photons are lost because of
the smaller telescope effective area at a given off-axis angle,
depending on the incoming photon's energy. We have parametrized the
vignetting factor - the probability that a photon at a given off-axis
angle is observed - as fourth-order polynomials in the two energy
bands, [0.5-2] and [2-10] keV, using the latest XMM-Newton in-flight
calibration data. For example, a photon at a large off-axis angle has
a 53% chance of being observed in [0.5-2] keV and 48% in [2-10] keV.
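A sketch of such a parametrization follows; the polynomial coefficients below are illustrative placeholders, not the calibration values used in the paper:

```python
# Vignetting factor as a 4th-order polynomial in off-axis angle (arcmin).
# These coefficients are illustrative placeholders, not calibration values.
COEFF = [1.0, -0.02, -0.002, 1e-4, -2e-6]  # c0..c4

def vignetting(theta_arcmin):
    """Probability that a photon at off-axis angle theta is observed."""
    return sum(c * theta_arcmin ** i for i, c in enumerate(COEFF))

# A photon can then be accepted with probability v in a Monte Carlo step,
# e.g.:  keep = rng.random() < vignetting(theta)
v0 = vignetting(0.0)    # on axis: 1.0 by construction of these coefficients
v10 = vignetting(10.0)  # reduced far off-axis
```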
Further instrumental effects, such as quantum efficiency differences between the CCD chips, the gaps between the chips, out-of-time events, variable background and pile-up of bright sources, are not taken into account; their inclusion is not relevant for our main objective.
Without attempting to provide a review of the available techniques in
the literature, we briefly describe here the procedures we have
tested. They are summarized in Table 3.
Procedure | Implementation | Version | Method |
EMLDETECT | XMM-SAS v5.0 | 3.7.2 | Cell detection + Maximum likelihood |
VTPDETECT | Chandra CIAO | 2.0.2 | Voronoi Tessellation and percolation |
WAVDETECT | Chandra CIAO | 2.0.2 | Wavelet |
EWAVELET | XMM-SAS v5.0 | 2.4 | Wavelet |
G+SE | Gauss + SExtractor | 2.1.6 | Mixed - Gauss convolution followed by SExtractor detection |
MR/1+SE | MR/1 + SExtractor | 2.1.6 | Mixed - multi-resolution filtering followed by SExtractor detection |
Historically, the sliding cell detection method was first used for Einstein Observatory observations (e.g. EMSS - Gioia et al. 1990). It is included in ROSAT, Chandra and XMM-Newton data analysis tools and a good description can be found in the specific documentation for each of those missions.
The X-ray image is scanned by a sliding square box, and if the signal-to-noise ratio of the source centered in the box is greater than the specified threshold value, it is marked as an object. The signal is derived from the pixel values inside the cell and the noise is estimated from the neighboring pixels. Next, the objects and a zone around them are removed from the image, forming the so-called "cheese'' image, which is then interpolated by a suitable function (generally a spline) to create a smooth background image. The original image is scanned again, but this time using a threshold derived from the estimated background inside the running cell, to give the map detection object list.
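The scanning step described above can be sketched as follows; this is a naive illustration with invented parameter values, not any mission's implementation:

```python
import numpy as np

def sliding_cell(image, cell=5, snr_threshold=4.0):
    """Naive sliding-cell detection sketch (Poisson noise assumed).

    The signal is the sum inside a cell x cell box; the background is
    estimated from a surrounding frame.  Positions where
    (signal - expected) / sqrt(expected) exceeds the threshold are flagged.
    """
    ny, nx = image.shape
    half = cell // 2
    frame = cell  # width of the background frame around the cell
    detections = []
    for j in range(half + frame, ny - half - frame):
        for i in range(half + frame, nx - half - frame):
            box = image[j - half:j + half + 1, i - half:i + half + 1]
            big = image[j - half - frame:j + half + frame + 1,
                        i - half - frame:i + half + frame + 1]
            signal = box.sum()
            # mean background per pixel from the frame (big box minus cell)
            bkg_per_pix = (big.sum() - signal) / (big.size - box.size)
            expected = bkg_per_pix * box.size
            if expected > 0:
                snr = (signal - expected) / np.sqrt(expected)
                if snr > snr_threshold:
                    detections.append((i, j))
    return detections

# A bright pixel on a flat background is flagged at (and around) its position
img = np.ones((31, 31))
img[15, 15] += 200.0
dets = sliding_cell(img)
```

A real implementation would additionally merge adjacent flagged positions into single objects and handle the image borders.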
The procedure is fast and robust and does not rely on a priori assumptions. However, it has difficulties, especially in detecting extended features, close objects and sources near the detection limit. Several refinements of the sliding cell method are now implemented: (1) consecutive runs with increasing cell size; (2) a matched-filter detection cell, where the cell size depends on the off-axis angle. The most important improvement, however, was the addition of the maximum likelihood (ML) technique to further analyze the detected sources.
The ML technique was first applied to the analysis of ROSAT observations (Cruddace et al. 1988, 1991; Hasinger et al. 1993) and was used to produce all the general X-ray surveys from the ROSAT mission (e.g. RASS - Voges et al. 1999; the WARPS survey - Ebeling et al. 2000). The two lists from the local and map detection passes can be merged to form the input object list for the ML pass. It is useful to feed the ML procedure with as many candidate objects as possible, bearing in mind that large numbers of objects can be very CPU-expensive. The spatial distribution of an input source is compared to the PSF model: the likelihood that both distributions are the same is calculated while varying the input source parameters (position, extent, counts), and the corresponding confidence limits can be computed naturally. A multi-PSF fit is also implemented, which helps in deblending and reconstructing the parameters of close sources. In the output list, only sources with a likelihood above a threshold are kept.
The ML method performs well and has many valuable features. It has some drawbacks, however: it needs a PSF model to perform the likelihood calculation and thus favours point-like sources; the extent likelihood is reliable only for bright sources; and it cannot detect objects that are not already present in the input list (e.g. detections missed in the local or map passes).
Here we have used EMLDETECT, an implementation of the method
specifically adapted for XMM-SAS (Brunner 1996). In the map-mode
sliding cell pass we used a low signal-to-noise threshold above the
background in order to feed as many input objects as possible to the
ML pass. The likelihood limit (given by L = -ln P, where P is the
probability of finding an excess above the background) was taken to be
10, which corresponds roughly to a few-sigma detection. A multi-PSF
fitting mode with a maximum of 6 simultaneous PSF profile fits was
used.
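A toy version of the ML fitting step is sketched below, using a circular Gaussian as a stand-in for the calibrated PSF and the Cash statistic (−2 ln Poisson likelihood, up to a constant); all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def cash(params, image, bkg, psf_sigma=2.0):
    """Cash statistic for a single Gaussian 'PSF' model.
    params = (x0, y0, counts); bkg is the background per pixel."""
    x0, y0, counts = params
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * psf_sigma ** 2))
    psf /= psf.sum()
    model = np.clip(bkg + counts * psf, 1e-12, None)
    return 2.0 * np.sum(model - image * np.log(model))

# Fit a simulated source of ~100 counts on a flat background of 0.5 cts/pixel
rng = np.random.default_rng(0)
ny = nx = 21
y, x = np.mgrid[0:ny, 0:nx]
truth = np.exp(-((x - 10.3) ** 2 + (y - 9.7) ** 2) / (2 * 2.0 ** 2))
truth = 0.5 + 100.0 * truth / truth.sum()
image = rng.poisson(truth)
res = minimize(cash, x0=(10.0, 10.0, 80.0), args=(image, 0.5),
               method="Nelder-Mead")
x_fit, y_fit, counts_fit = res.x
```

EMLDETECT additionally fits several PSFs simultaneously and converts the likelihood into a detection significance; the sketch shows only the single-source kernel of the idea.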
VTP - the Voronoi Tessellation and Percolation method (Ebeling & Wiedenmann 1993; Ebeling 1993) is a general method for detecting structures in a distribution of points (photons in our case) by choosing regions with enhanced surface density with respect to an underlying distribution (Poissonian in X-ray images). It treats the raw photon distribution directly without any recourse to a PSF model or a geometrical shape of the objects it finds. Each photon defines a centre of a polygon in the Voronoi tessellation image and the surface brightness is simply the inverse area of the polygon (assuming one single photon per cell). The distribution function of the inverse areas of all photons is compared to that expected from a Poisson distribution and all the cells above a given threshold are flagged and percolated, i.e. connected to form an object. This method was successfully used with ROSAT data (Scharf et al. 1997) and is currently incorporated in the Chandra DETECT package (Dobrzycki et al. 1999).
Despite these advantages, VTP has some drawbacks, which are especially important for XMM-Newton observations: (1) because of the telescope's high sensitivity and rather large PSF with strong tails, the percolation procedure tends to link nearby objects; (2) the CPU time is excessive for images with a relatively large number of photons; (3) there is no simple way to estimate the extension of objects.
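The core of the tessellation step can be sketched with scipy; this is a simplified illustration, not the VTPDETECT implementation, and open cells at the image border are simply skipped:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def inverse_cell_areas(points):
    """Surface-density estimate per photon: the inverse area of its Voronoi
    cell (one photon per cell), as in the VTP method."""
    vor = Voronoi(points)
    density = np.zeros(len(points))
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if -1 in region or len(region) == 0:
            continue  # open cell at the border: left at zero density
        polygon = vor.vertices[region]
        density[i] = 1.0 / ConvexHull(polygon).volume  # 2-D "volume" is area
    return density

# Photons in a tight clump get much smaller cells (higher density) than
# photons on a sparse regular grid
pts = np.array([(i * 10.0, j * 10.0) for i in range(5) for j in range(5)]
               + [(24.8, 24.8), (25.2, 24.8), (24.8, 25.2), (25.2, 25.2)])
density = inverse_cell_areas(pts)
```

The actual method then compares the distribution of inverse areas to the Poissonian expectation and percolates the cells above threshold into objects.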
In the past few years a new approach has been extensively used: the
wavelet technique (WT). This method consists in convolving an image f
with a dilated wavelet function ψ:

w_a(x, y) = ∬ f(x', y') ψ_a(x - x', y - y') dx' dy',   (1)

where the wavelet at scale a is

ψ_a(x, y) = (1/a²) ψ(x/a, y/a).   (2)
The WT procedure consists of decomposing the original image into a given number of wavelet coefficient images, w_a, within the chosen set of scales a. In each wavelet image, features with characteristic sizes close to the scale are magnified, and the problem is to mark the significant ones, i.e. those not due to noise. In most cases this selection of significant wavelet coefficients cannot be performed analytically, because the redundancy of the WT introduces cross-correlations between pixels. For Gaussian white noise the w_a are distributed normally, allowing easy thresholding. This is not the case for X-ray images, which are in the Poissonian photon-noise regime.
Various techniques have been developed for selecting the significant wavelet coefficients in X-ray images. Vikhlinin et al. (1997) assumed local Gaussian noise; Slezak et al. (1994) used the Anscombe transformation to convert an image with Poissonian noise into one with Gaussian noise; Slezak et al. (1993) and Starck & Pierre (1998) used a histogram of the wavelet function. In recent years a technique based on Monte Carlo simulations has been used successfully (e.g. Grebenev et al. 1995; Damiani et al. 1997; Lazzati et al. 1999).
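The Anscombe transformation mentioned above is simple enough to show in full; after it, Poisson data behave approximately like Gaussian data of unit variance, so a plain k-sigma threshold on the wavelet coefficients becomes applicable:

```python
import numpy as np

def anscombe(counts):
    """Anscombe transform: 2*sqrt(x + 3/8) maps Poisson counts to values
    with approximately Gaussian noise of unit variance (good for means
    above a few counts)."""
    return 2.0 * np.sqrt(np.asarray(counts, dtype=float) + 3.0 / 8.0)

# For a reasonably high Poisson mean the transformed noise std is close to 1
rng = np.random.default_rng(1)
sample = anscombe(rng.poisson(20.0, size=100000))
```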
Once the significant coefficients at each scale are chosen, the local maxima at all scales are collected and cross-identified to define objects. Different characteristics, such as centroids, light distribution etc., can be computed, as well as an indication of the source size at the scale where the object wavelet coefficient is maximal.
WT has many advantages: the multiresolution approach is well suited to both point-like and extended sources, though it favours circularly symmetric ones. Because of the properties of the wavelet function, a smoothly varying background is automatically removed. An extensive description of the wavelet transform and its various applications can be found in Starck et al. (1998).
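A toy version of such a multiresolution decomposition is sketched below. This is not the MR/1 algorithm itself: for illustration, wavelet-like planes are built as differences of successively Gaussian-smoothed images, with the scale doubling at each step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_planes(image, n_scales=4):
    """Toy multiresolution decomposition: wavelet-like planes as differences
    of successively smoothed images, plus a final coarse residual plane.
    By construction the planes sum back exactly to the original image."""
    planes = []
    current = image.astype(float)
    for j in range(n_scales):
        smooth = gaussian_filter(current, sigma=2.0 ** j)
        planes.append(current - smooth)  # detail at scale ~2**j
        current = smooth
    planes.append(current)  # coarse residual (smooth background)
    return planes

rng = np.random.default_rng(0)
image = rng.poisson(2.0, size=(32, 32)).astype(float)
planes = multiscale_planes(image, n_scales=3)
```

The last plane absorbs the smoothly varying background, which is why it can simply be discarded before detection, as noted above.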
In this work we have tested two WT procedures: WAVDETECT from the Chandra CIAO package and EWAVELET from XMM-SAS (see Table 3).
The use of such a mixed approach is motivated by the fact that procedures for source detection in astronomical images have been developed for many years, and the steps and problems of deblending, photometry and object classification are now quite well understood. The raw photon image manipulation can be performed with very simple smoothing procedures (for example a Gaussian convolution) or with more sophisticated methods such as the "matched filter'' technique, adaptive smoothing or multiresolution (wavelet) filtering.
We have used two different types of raw image filtering: a simple Gaussian convolution (G+SE) and MR/1 multiresolution filtering (MR/1+SE), in both cases followed by SExtractor detection (see Table 3).
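The simpler of the two filtering options - a Gaussian convolution of the raw photon image - can be sketched as follows; here a crude threshold plus connected-component step stands in for SExtractor, and all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, center_of_mass

def smooth_and_detect(image, fwhm_pix=3.0, k_sigma=3.0):
    """Step 1 of a G+SE-like pipeline: Gaussian convolution of the raw
    photon image, followed here by a naive threshold + labelling step
    standing in for SExtractor."""
    sigma = fwhm_pix / 2.3548  # FWHM -> Gaussian sigma
    smooth = gaussian_filter(image.astype(float), sigma)
    threshold = smooth.mean() + k_sigma * smooth.std()
    labels, n = label(smooth > threshold)
    return center_of_mass(smooth, labels, list(range(1, n + 1)))

# A single 50-count source on an empty image is recovered at its position
img = np.zeros((64, 64))
img[20, 30] = 50.0
centroids = smooth_and_detect(img)
```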
We address the problem of point-like sources separated by distances
ranging from the half-energy width of the on-axis PSF up to 60'',
with different flux ratios. We include the PSF model and background
but do not apply the vignetting effect.
The raw input test image is shown in Fig. 2 (available on line) together with its Gaussian convolution, MR/1 wavelet filtering and WAVDETECT output image. Visually, the Gaussian image is quite noisy, while there are few spurious detections in the WT images.
The number of missing detections and false objects are shown in
Table 4.
Method | Missed | False |
EMLDETECT | 4 | 13 |
G+SE | 6 | 1 |
MR/1+SE | 7 | 1 |
WAVDETECT | 7 | 21 |
EWAVELET | 6 | 4 |
VTPDETECT | 12 | 19 |
The one-sigma input-to-detected position differences are less than the
FWHM of the PSF for all procedures, and the maximum occurs for the
blended objects, as expected. Note the large number of spurious
detections with WAVDETECT, VTPDETECT and EMLDETECT.
The results for the photometry, in terms of the ratio of inferred to input counts, are shown in Fig. 3.
After this simple test we can eliminate VTPDETECT: in addition to its
very long execution time, some of the VTP-detected object centres were
shifted substantially from their input positions - a consequence of
its ability to detect sources with different shapes, where the object
centre can be far from the input position. Moreover, VTPDETECT
percolates all the double sources at the smallest separation into
single objects, which all the other procedures were able to separate.
No procedure unambiguously shows the best results in terms of detection rate, spurious sources and photometric reconstruction. EMLDETECT outperforms the others in detection rate, but at the price of many spurious detections. Using exactly the same PSF model as the one hard-coded in EMLDETECT leads to much better photometric reconstruction.
All the other procedures are comparable. EWAVELET shows a better
detection rate, but its photometric reconstruction is far from
satisfactory: about half of the photons were lost at the larger
separations, because of the Gaussian shape assumed in order to derive
the extension and counts analytically. We have applied a simple
correction for the object size to arrive at the good photometric
results for EWAVELET presented in Fig. 3.
Figure 4:
Test 2. The raw X-ray photon image for a 10 ks exposure time (upper
left). As in Fig. 2, three cases of separations are
indicated, as well as the corresponding input source counts. The
extended objects are in the right columns. The Gaussian
convolution is also shown.
The numbers of missed and false detections are shown in
Table 5. An increase of the searching radius was needed: at the
smallest separation the blending tends to shift the centroid towards
the point-like source. Note that this situation is a clear case of
source confusion: if we take the closest neighbour (the point-like
source in some cases) as the cross-identification from the input
list, we would overestimate the flux by more than a factor of two,
while the true counterpart is the extended object.
Some changes were needed for the procedures not based on the wavelet
technique in order to avoid splitting of the bright extended objects
into sub-objects: an increase of the Gaussian convolution FWHM, and a
multi-PSF fit for EMLDETECT. In the Gaussian case, the larger
smoothing length smears out some of the point-like sources, leading
to non-detections. The EMLDETECT splitting persists even with the
maximum number of PSFs fitted to the photon distribution (currently
it is capable of fitting up to 6 PSFs simultaneously).
Method | Missed | False |
EMLDETECT | 1+1 | 89 |
G+SE | 6+0 | 1 |
MR/1+SE | 6+0 | 6 |
WAVDETECT | 6+0 | 18 |
EWAVELET | 4+2 | 5 |
The inferred-to-input source counts ratio is shown in
Fig. 5.
Figure 5: Test 2. As in Fig. 3, except that the squares now represent extended objects, and the point-like sources at a fixed input of 100 counts (circles) are plotted beside their corresponding neighbours (rather than at 100). Circles with arrows and numbers denote the ratio when it is above 2
Figure 7 shows the WAVDETECT classification - the ratio of the object
size to the PSF size. The results are more ambiguous with WAVDETECT
(Fig. 7) than with MR/1+SE. The results with EMLDETECT and its
classification parameter (extension likelihood) were very
unsatisfactory because of the extended-object splitting. A more
comprehensive discussion of the simulations and the classification is
left for Sect. 7.
Figure 6: Test 2. MR/1+SE detection classification based on R50 (left panels) and stellarity (right panels) as a function of the off-axis angle (upper panels) and detected source counts (lower panels). Identified extended (filled circles) and point-like objects (triangles) are plotted over the results from 10 simulated images with only point-like sources (see Sect. 6) |
Clearly, EMLDETECT and WAVDETECT have problems in restoring the fluxes of extended objects. We have already discussed the splitting difficulties of EMLDETECT. The explanation for WAVDETECT's poor results is that the wavelet scale at which the detected object size is closest to the PSF size defines the source detection cell (in which the flux is computed). If the characteristic size of an object is larger than the PSF size (i.e. an extended object), this procedure will tend to underestimate the flux.
We can safely accept the MR/1+SE procedure as the best performing for
the detection and characterization of both point-like and extended
objects. We must stress, however, that one cannot rely on the flux
measurements when extended and point-like sources are separated by
small distances. The proximity also affects the classification of the
point-like sources. Using the classification and then performing a
more sophisticated analysis, such as profile fitting and integration
for the extended sources, can considerably improve the restoration.
Problems of this kind can be expected quite often in realistic
situations, especially with XMM-Newton.
We simulate an extragalactic field including only point-like sources
with fluxes drawn from the log N-log S relation (Hasinger et al.
1998, 2001; Giacconi et al. 2000). PSF,
vignetting and background models are applied as described in
Sect. 2. The aim is to test the detection procedures in
more realistic cases where confusion and blending effects are
important and not controlled. The raw photon image is shown in
Fig. 8 together with its visual representation: the
same input configuration for a much longer exposure time and no
background, keeping only the objects with more than 10 counts. It
displays the input object sizes, fluxes and positions better, and it
is instructive to compare it to the MR/1-filtered and WAVDETECT
images shown in the same figure.
Figure 7:
Test 2. WAVDETECT classification based on the object-size to PSF-size
ratio.
We need to define a searching radius in order to cross-identify the
output and input lists. The input list contains many objects with
counts well below the detection limit (the log N-log S relation
extends to very faint fluxes), so a lower limit must be chosen. For
each detected object, we search for the nearest neighbour inside a
circle within the reduced input list.
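The cross-identification scheme described above can be sketched as follows; the function name and the dictionary layout are invented for the example:

```python
import numpy as np

def cross_identify(detected, inputs, radius, min_counts=10):
    """Match each detected source to the nearest input source within a
    search radius, using only inputs above a count limit.  Positions are
    arrays under keys 'x' and 'y'; returns (det_index, input_index) pairs,
    with None when nothing lies within the radius."""
    keep = inputs["counts"] >= min_counts
    ix, iy = inputs["x"][keep], inputs["y"][keep]
    orig = np.flatnonzero(keep)  # indices back into the full input list
    matches = []
    for k, (dx, dy) in enumerate(zip(detected["x"], detected["y"])):
        d = np.hypot(ix - dx, iy - dy)
        j = int(np.argmin(d)) if d.size else None
        matches.append((k, int(orig[j]))
                       if j is not None and d[j] <= radius else (k, None))
    return matches

inputs = {"x": np.array([0.0, 10.0]), "y": np.array([0.0, 0.0]),
          "counts": np.array([5, 50])}
detected = {"x": np.array([10.5, 30.0]), "y": np.array([0.2, 30.0])}
m = cross_identify(detected, inputs, radius=2.0)
```

Note that the first input source is excluded by the count limit, so a detection near it would be left unmatched rather than misidentified.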
Procedure | σ [arcsec] | number |
EMLDETECT | 2.9 | 13 |
G+SE | 3.5 | 14 |
MR/1+SE | 3.2 | 13 |
WAVDETECT | 4.1 | 12 |
The positional difference for the brightest detected sources (more
than 100 counts) in the inner region of the FOV is shown in
Table 6. The outer region is subject to serious problems caused by
vignetting and PSF blurring: there, the detected object centroid can
be a few PSF widths away from the true input identification.
We therefore adopt the following cross-identification parameters: the
input list is constrained to counts greater than 10, a fixed
searching radius is used, and we consider only the central part of
the FOV.
The detection rate and flux reconstruction results are shown in Fig. 9. There are different effects playing a role in the distribution and the numbers of missed and false detections:
The results in terms of detection rate are similar for all procedures. G+SE shows the best detection rate, but at the price of twice as many false detections.
The photometric reconstruction for the sources above 50 counts shows
a spread of about 10-15% for the WT-based methods, and a larger
spread for EMLDETECT. However, EMLDETECT clearly outperforms the
other procedures when we use the same PSF model as the one hard-coded
into the programme. This shows that a correct PSF representation is
of crucial importance for the ML technique. More discussion of the
detection limits, completeness and confusion is left for Sect. 8.
The input configuration and wavelet filtered and output images are given in Fig. 10.
The detection rate, input-detect position offsets, detected counts and detected-to-input counts ratio are shown in Table 7.
redshift | offset [arcsec] | Input [counts] | Detect [counts] | Detect/Input [%] |
EMLDETECT | ||||
0.6 | 2.1 | 1316 | 94 | 7 |
1.0 | 8.0 | 465 | 12 | 3 |
1.5 | 12.3 | 200 | 161 | 81 |
1.8 | 4.6 | 136 | 228 | 167 |
2.0 | 14.8 | 109 | 32 | 29 |
G+SE | ||||
0.6 | 1.2 | 1316 | 1043 | 79 |
1.0 | 5.2 | 465 | 355 | 76 |
1.5 | 1.9 | 200 | 220 | 110 |
1.8 | Not detected | |||
2.0 | 15.3 | 109 | 80 | 73 |
MR/1+SE | ||||
0.6 | 0.2 | 1316 | 1016 | 77 |
1.0 | 2.3 | 465 | 340 | 73 |
1.5 | 1.8 | 200 | 223 | 111 |
1.8 | 11.8 | 136 | 83 | 61 |
2.0 | 10.8 | 109 | 185 | 169 |
WAVDETECT | ||||
0.6 | 5.8 | 1316 | 344 | 26 |
1.0 | 10.6 | 465 | 193 | 41 |
1.5 | 0.1 | 200 | 39 | 19 |
1.8 | Not detected | |||
2.0 | 15.3 | 109 | 27 | 24 |
As for the positional errors, it was already concluded that the
centres of extended objects can be displaced by more than the
searching radius adopted for point-like sources (Sect. 5). The
position differences shown in Table 7, especially for the fainter
objects, are 3-4 times larger than the one-sigma limit for point
sources in the inner region of the FOV (Table 6).
It is confirmed again that EMLDETECT and WAVDETECT are not very successful in characterizing extended objects. Note, however, that this time the results for MR/1+SE and G+SE are worse than in Test 2, the rate of lost photons being about 20-30%. Also, the flux of the clusters at z=1.5 and 2 is overestimated, suggesting blending with faint nearby point-like sources.
Two classification parameters were used: the half-light radius (R50) and the stellarity index from the SExtractor-based procedures. The results are shown in Fig. 11. We do not show results for the WAVDETECT and EMLDETECT classification parameters, because their unsatisfactory behaviour, already seen in Test 2, was confirmed.
We can see the excellent classification based on the stellarity index
and half-light radius: in the inner region, for objects with more
than 20 detected counts, using stellarity less than 0.1 and the R50
criterion, we have 15 incorrect assignments out of 1287 detections
(about 1%).
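The classification rule amounts to a pair of threshold cuts; the sketch below uses the stellarity cut of 0.1 quoted above, while the R50 threshold is instrument-dependent and illustrative here:

```python
def classify(stellarity, r50, r50_limit):
    """Point-like vs. extended classification in the spirit of this
    section: SExtractor stellarity below 0.1 together with a large
    half-light radius flags an extended object.  The r50_limit value is
    an assumption for illustration."""
    if stellarity < 0.1 and r50 > r50_limit:
        return "extended"
    return "point-like"
```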
In this section we investigate the confusion and completeness problems for XMM-Newton shallow and deep observations like the first XMM Deep X-ray Survey in the Lockman Hole (Hasinger et al. 2001).
A set of 10 images with exposure times of 10 ks and 100 ks in the
energy bands [0.5-2] and [2-10] keV was generated; the fluxes were
drawn using the latest log N-log S relations from Hasinger et al.
(2001) and Giacconi et al. (2000). Detection and analysis were
performed with exactly the same parameters for all simulations:
detection threshold, analysis threshold, background map size,
detection likelihood, etc. (see Sect. 3). Cross-identification was
achieved using the input sources above 10 counts and 30 counts for
the 10 ks and 100 ks exposures respectively. Lowering the count
limits yields more cross-identifications but considerably increases
the number of spurious detections.
The input image for 100 ks in the [0.5-2] keV band is shown in
Fig. 12. The inner zone where all the analysis is performed is
indicated, as well as the total XMM-Newton field-of-view. It is
informative to compare it with the images for 10 ks in Fig. 8.
Figure 12:
Simulated 100 ks XMM-Newton deep field in the [0.5-2] keV band with
the same parameters.
In order to estimate the effect of confusion, we have generated images with only point-like sources, distributed on a grid so as to avoid close neighbours, with fixed fluxes spanning the interval [10^-16, 10^-13] erg/s/cm^2.
In the following we discuss some important points.
 | [0.5-2] keV | [2-10] keV |
ECF (cts/s per erg/s/cm^2) | 6.70 10-13 | 3.66 10-12 |
Flux limits (erg/s/cm^2) | | |
10 ks | 2 × 10^-15 | 10^-14 |
100 ks | 6 × 10^-16 | 3 × 10^-15 |
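The conversion implied by an energy conversion factor (ECF) can be sketched as follows, assuming the ECF is expressed as count rate per unit flux; the numbers in the example are arbitrary, not the table's values:

```python
def expected_counts(flux, ecf, exposure):
    """Expected source counts: flux [erg/s/cm^2] x ECF [(cts/s) per
    (erg/s/cm^2)] x exposure [s].  Purely illustrative; always check the
    units in which a given ECF is quoted."""
    return flux * ecf * exposure

# e.g. with an arbitrary ECF of 5e11 (cts/s)/(erg/s/cm^2):
n = expected_counts(flux=1e-15, ecf=5e11, exposure=1e4)
```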
Figure 16: Photometry reconstruction for all 10 simulated images at 10 ks (left) and 100 ks (right) in the [0.5-2] keV band. The solid line is an exact match between detected and input counts, while the dashed lines indicate two-fold differences. The vertical dashed line marks the 90% completeness limit (see Table 8); the mean and standard deviation (in brackets) above this limit are indicated
Various procedures for detecting and characterizing sources were tested on simulated X-ray images. We have concentrated mainly on images with XMM-Newton-specific characteristics, because the problems arising from its high sensitivity and relatively large PSF are new and challenging.
We have analyzed the detection rate and the recovery of all the
characteristics of the input objects: flux, positional accuracy,
extent measurements and the recovery of the input log N-log S
relation. We have also investigated confusion problems in long
exposures.
Concerning the detection rate and the reconstruction of source characteristics, we have shown that the VTPDETECT implementation of the Voronoi Tessellation and Percolation method is not well suited to XMM-Newton images. EWAVELET provides a very good detection rate and photometric reconstruction for point-like sources after a simple correction, but shows unreliable results for extended sources.
One of the best methods for point-like source detection and flux measurements is EMLDETECT but we stress again that the PSF model used for the ML procedure needs to be close to the image PSF for the most accurate photometry. Serious drawbacks are the relatively large number of spurious detections as well as the splitting of the extended sources, which we were not able to suppress even with 6 simultaneous PSF profile fits in the multi-PSF mode; this seriously hampers the analysis of the extended sources.
WAVDETECT is a flexible method giving good detections even in some complicated cases. But, here again, spurious detections are quite numerous. WAVDETECT does not assume a PSF model but requires the PSF size as a function of the encircled energy fraction and the off-axis distance in order to define the object detection cell. However, the way the detection cell is defined leads to bad photometry for extended objects.
Our choice is the MR/1+SE method: a mixed approach involving first a multiresolution iterative threshold filtering of the raw image, followed by detection and analysis with SExtractor. Our tests have shown that this is the best strategy for detecting and characterizing both point-like and extended objects. Even though this mixed approach consists of two distinct steps, it is one of the fastest procedures (Table 9), allowing easy checks of the different stages of the analysis (filtering, detection, photometry).
Procedure  | Number of detections | CPU time [min]
EMLDETECT  |  528                 | 12.0
EWAVELET   |  364                 |  0.4
MR/1+SE    |  370                 |  1.9
G+SE       |  365                 |  0.1
WAVDETECT  |  378                 | 10.3
VTPDETECT  | 1307                 | 10.7
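The two-step logic of the mixed approach can be sketched in one dimension: an "a trous" wavelet decomposition of a noisy count profile, hard thresholding of each scale, reconstruction, and finally SExtractor-like segmentation of contiguous pixels above a threshold. This is a toy illustration of the principle only, not the MR/1 or SExtractor algorithms themselves.

```python
import math, random

random.seed(7)

def poisson(mu):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Toy 1-D "image": flat background plus two sources of different widths
n, bkg = 128, 5.0
model = [bkg + 20*math.exp(-0.5*((i - 40)/3.0)**2)
             + 15*math.exp(-0.5*((i - 90)/6.0)**2) for i in range(n)]
data = [float(poisson(m)) for m in model]

H = [1/16, 1/4, 3/8, 1/4, 1/16]        # B3-spline smoothing kernel

def smooth(c, step):
    """One 'a trous' pass with holes of size `step` (mirrored edges)."""
    out = []
    for i in range(len(c)):
        s = 0.0
        for k, h in enumerate(H):
            j = i + (k - 2)*step
            if j < 0:
                j = -j
            elif j >= len(c):
                j = 2*len(c) - 2 - j
            s += h*c[j]
        out.append(s)
    return out

def median(v):
    return sorted(v)[len(v)//2]

# Decompose, hard-threshold each wavelet scale, then reconstruct
c, kept = data, [0.0]*n
for scale in range(4):
    cs = smooth(c, 2**scale)
    w = [a - b for a, b in zip(c, cs)]
    sig = 1.4826*median([abs(x - median(w)) for x in w])   # robust noise
    kept = [r + (x if abs(x) > 3*sig else 0.0) for r, x in zip(kept, w)]
    c = cs
filtered = [a + b for a, b in zip(kept, c)]

# SExtractor-like step: segment contiguous pixels above a threshold
thr = bkg + 3*math.sqrt(bkg)
segments, cur = [], []
for i, v in enumerate(filtered):
    if v > thr:
        cur.append(i)
    elif cur:
        segments.append(sum(cur)/len(cur))
        cur = []
if cur:
    segments.append(sum(cur)/len(cur))
```

Both injected sources are recovered as separate segments; the filtering step is what makes the subsequent thresholding robust against Poisson fluctuations.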
Without blending or confusion effects, the photometry is accurate to within 10-20% for both point-like and extended objects. This uncertainty can be regarded as an intrinsic error due to the Poissonian nature of the X-ray images. For extended objects, only the MR/1+SE method gives satisfactory photometric results.
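That this 10-20% floor is simply Poisson statistics can be checked with a short Monte Carlo: for a source of ~50 expected counts in the aperture, the relative scatter of the measured counts is about 1/sqrt(50), i.e. roughly 14%.

```python
import math, random

random.seed(3)

def poisson(mu):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

N = 50                                  # expected source counts in the aperture
trials = [poisson(N) for _ in range(2000)]
mean = sum(trials)/len(trials)
rel_rms = math.sqrt(sum((t - mean)**2 for t in trials)/len(trials))/N
# relative scatter ~ 1/sqrt(N), i.e. ~14% for N = 50
```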
Blending between extended and point-like sources is quite serious at small separations. Better photometric results may eventually be obtained if the intrinsic shape of the extended object is known and if both objects are detected. In most cases with small separations, however, there is no indication of blending, which is a dangerous situation for flux reconstruction; in such cases, some spectral signature of the effect may still exist.
The identification of X-ray sources relies on their positional accuracy. We have shown that for objects with more than 100 counts in 10 ks exposure images, and within the inner part of the field-of-view, the one-sigma positional error is of the order of half the FWHM of the PSF (Table 6). For extended objects, because of their shallower profiles, and depending on the number of photons and the off-axis distance, the detected centre can lie even further from its input position.
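A background-free toy calculation shows the statistical part of this error, which scales as sigma/sqrt(N) for a centroid of N photons drawn from a Gaussian PSF (the 6 arcsec FWHM below is an assumed, roughly XMM-like value):

```python
import math, random

random.seed(5)

fwhm = 6.0                       # arcsec, assumed on-axis PSF FWHM
sig = fwhm/2.3548                # equivalent Gaussian sigma
nph, errs = 100, []
for _ in range(1000):
    dx = sum(random.gauss(0.0, sig) for _ in range(nph))/nph
    dy = sum(random.gauss(0.0, sig) for _ in range(nph))/nph
    errs.append(math.hypot(dx, dy))
errs.sort()
r68 = errs[int(0.68*len(errs))]  # 68% (one-sigma) radial positional error
```

The purely statistical error for 100 counts is a few tenths of an arcsecond; background, pixelation, off-axis PSF degradation, and the detection algorithm itself account for the difference up to the half-FWHM figure quoted above.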
Comparing series of simulations of 100 ks and 10 ks in two energy bands, [0.5-2] and [2-10] keV, we show that confusion and incompleteness effects are absent at 10 ks but quite significant at 100 ks in the lower energy band. Moreover, at faint fluxes these effects tend to be masked by the large number of spurious detections produced by EMLDETECT. Although this method seems to recover the log N-log S relation down to fainter fluxes than MR/1+SE, in real situations it is impossible to assess the contribution of the numerous spurious detections. From our simulations, we estimate that about 60-65% of the sources are lost between 3×10^-16 and 6×10^-16 erg/s/cm^2 for a 100 ks exposure with the current best method (MR/1+SE).
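Completeness estimates of this kind reduce to matching the detected catalogue against the input catalogue within some radius; a minimal sketch (positions and match radius are illustrative) could look like:

```python
import math

def completeness(inputs, detections, rmatch=6.0):
    """Fraction of input sources matched by a detection within rmatch arcsec."""
    found = sum(1 for xi, yi in inputs
                if any(math.hypot(xi - xd, yi - yd) <= rmatch
                       for xd, yd in detections))
    return found/len(inputs)

# Toy catalogues (positions in arcsec); the third detection is spurious
inputs = [(10, 10), (50, 40), (80, 75), (120, 20)]
dets = [(11, 9), (49, 41), (200, 200)]
frac = completeness(inputs, dets)   # 2 of 4 inputs recovered -> 0.5
```

Note that spurious detections do not raise this completeness fraction, but they do contaminate the detected source counts, which is exactly the ambiguity discussed above for EMLDETECT.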
One of the most important conclusions, with deep cosmological impact, concerns the detection and classification of extended objects. We have shown that the MR/1+SE mixed approach is capable of detecting galaxy cluster-like objects of moderate luminosity at redshifts 1.5 < z < 2 in 10 ks XMM-Newton simulated images. A criterion based on the half-light radius and the stellarity index classifies them correctly at a confidence level greater than 98%.
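The half-light-radius part of such a criterion can be sketched as follows: integrate a circular surface-brightness profile outward until half the flux is enclosed, and flag as extended any object whose half-light radius clearly exceeds that of the PSF. The Gaussian PSF width, the beta-model parameters, and the factor-of-two threshold below are illustrative assumptions:

```python
import math

def half_light_radius(profile, rmax=60.0, dr=0.1):
    """Radius enclosing half the flux of a circular surface-brightness profile."""
    rs = [i*dr for i in range(1, int(rmax/dr) + 1)]
    ring = [profile(r)*2.0*math.pi*r*dr for r in rs]
    total, cum = sum(ring), 0.0
    for r, f in zip(rs, ring):
        cum += f
        if cum >= total/2.0:
            return r

psf = lambda r: math.exp(-0.5*(r/2.5)**2)              # point source, sigma 2.5"
beta = lambda r: (1.0 + (r/30.0)**2)**(-3*0.7 + 0.5)   # beta model, rc = 30"

r_psf = half_light_radius(psf)
r_ext = half_light_radius(beta)
extended = r_ext > 2.0*r_psf      # toy half-light-radius classifier
```

For a Gaussian, the half-light radius is sigma*sqrt(2 ln 2), about 2.9 arcsec here, while the beta model yields tens of arcseconds, so the two classes separate cleanly when the sources are isolated.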
Acknowledgements
We are grateful to J.-L. Starck for many discussions on wavelet filtering and detection and for the MR/1 software, to R. Ogley and A. Refregier for valuable comments on the manuscript, to H. Brunner and J. Ballet for comments and help with XMM-SAS and EMLDETECT, and to E. Bertin for help with SExtractor. We also thank the referee for valuable comments and suggestions on the manuscript.
z   | Core radius | Count rate [photons/s]
    | [arcsec]    | [0.4-4] keV | [0.5-2] keV | [2-10] keV
0.6 | 32.8 | 0.1687 | 0.1316 | 0.0362 |
0.7 | 31.3 | 0.1238 | 0.0963 | 0.0253 |
0.8 | 30.4 | 0.0942 | 0.0734 | 0.0185 |
0.9 | 29.7 | 0.0737 | 0.0577 | 0.0139 |
1.0 | 29.3 | 0.0593 | 0.0465 | 0.0107 |
1.1 | 29.1 | 0.0486 | 0.0382 | 0.0085 |
1.2 | 29.0 | 0.0406 | 0.0319 | 0.0068 |
1.3 | 29.0 | 0.0343 | 0.0270 | 0.0055 |
1.4 | 29.1 | 0.0293 | 0.0231 | 0.0046 |
1.5 | 29.2 | 0.0253 | 0.0200 | 0.0038 |
1.6 | 29.4 | 0.0220 | 0.0175 | 0.0032 |
1.7 | 29.6 | 0.0193 | 0.0154 | 0.0027 |
1.8 | 29.9 | 0.0171 | 0.0137 | 0.0023 |
1.9 | 30.2 | 0.0152 | 0.0122 | 0.0020 |
2.0 | 30.5 | 0.0137 | 0.0109 | 0.0018 |
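The nearly flat run of angular core radii in the table, with a shallow minimum near z ≈ 1.2, is the signature of the angular diameter distance peaking at that redshift. The values appear consistent with an Einstein-de Sitter universe with H0 = 50 km/s/Mpc and a fixed physical core radius of 0.25 Mpc; these parameters are our inference, not stated in this section.

```python
import math

D_H = 2.998e5/50.0                  # Hubble distance in Mpc for H0 = 50 (assumed)

def d_ang(z):
    """Angular diameter distance in an Einstein-de Sitter universe (Mpc)."""
    comoving = 2.0*D_H*(1.0 - 1.0/math.sqrt(1.0 + z))
    return comoving/(1.0 + z)

def core_radius_arcsec(rc_mpc, z):
    """Angular size of a physical core radius rc_mpc (Mpc) at redshift z."""
    return rc_mpc/d_ang(z)*(180.0/math.pi)*3600.0
```

Under these assumptions, core_radius_arcsec(0.25, z) reproduces the tabulated values to within about 0.1 arcsec across the whole redshift range.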