A&A 470, 467-473 (2007)
V. Chantry - P. Magain
Institut d'Astrophysique et de Géophysique, Université de Liège, Allée du 6 Août 17, Sart Tilman (Bât. B5C), Liège 1, Belgium
Received 29 November 2006 / Accepted 28 March 2007
Archival HST/NICMOS-2 images of the Cloverleaf gravitational lens (H1413+117), a quadruply-imaged quasar, were analysed with a new method derived from the MCS deconvolution algorithm (Magain et al. 1998). This method is based on an iterative process which simultaneously allows us to determine the Point Spread Function (PSF) and to perform a deconvolution of images containing several point sources plus extended structures. As such, it is well-adapted to the processing of gravitational lens images, especially in the case of multiply-imaged quasars. Two sets of data were analysed: the first one, which was obtained through the F160W filter in 1997, basically corresponds to a continuum image, while the second one, obtained through the narrower F180M filter in 2003, is centered around the forbidden [O III] emission lines at the source redshift, thus probing the narrow-line region of the quasar. The deconvolution gives astrometric and photometric measurements in both filters and reveals the primary lensing galaxy as well as a partial Einstein ring. The high accuracy of the results is particularly important in order to model the lensing system and to reconstruct the source undergoing the strong lensing. The reliability of the method is checked on a synthetic image similar to H1413+117.
Key words: gravitational lensing - techniques: image processing - quasars: general
The aim of the present paper is to present a method which simultaneously allows us to perform PSF determination and deconvolution on images containing several point sources superimposed on a diffuse background, and to apply it to HST images of the Cloverleaf gravitational lens. We show that this method permits a more accurate astrometry of the system and a better characterisation of the lensing galaxy. Moreover, it also allows the detection of additional structures, such as parts of an Einstein ring.
This method is based on the MCS deconvolution algorithm (Magain et al. 1998) which, unlike most deconvolution methods, ensures that the deconvolved image, which has a well-defined Point Spread Function (PSF), conforms to the sampling theorem. The method also leads to a decomposition of the light distribution into a sum of point sources (of known shape) and a diffuse background.
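With the notation of Magain et al. (1998), this decomposition can be summarised by the following (slightly simplified) expression for the deconvolved light distribution:

$$f(x) = h(x) + \sum_{k=1}^{M} a_k \, r(x - c_k),$$

where $r$ is the final, well-sampled PSF of the deconvolved image, $a_k$ and $c_k$ are the intensity and position of point source $k$, and $h$ is the diffuse background.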
More recently, Magain et al. (2007) presented a method, derived from MCS, to determine the PSF on images consisting of (possibly blended) point sources. This method works well, even in very crowded fields, when no point source is sufficiently isolated to derive an accurate PSF from standard techniques.
The process presented here extends the method of Magain et al. (2007) to images containing a mixture of point sources and diffuse background. It is based on an iterative scheme, in which both the PSF and the diffuse background are improved step-by-step.
In Sect. 2 we describe the input data and their reduction. The method used to obtain both the PSF and the deconvolved images is described in Sect. 3. The results are presented and discussed in Sect. 4. The accuracy of our results is tested by applying the method to a synthetic image with the same basic configuration as the Cloverleaf (see Sect. 5). Finally, we conclude in Sect. 6.
The first set of HST data was obtained in December 1997 with camera 2 of NICMOS (Near Infrared Camera and Multi-Object Spectrometer) through the F160W wide-band filter, corresponding approximately to the near-IR H-band (PI: E. Falco). We used the 4 calibrated images, i.e. those treated by the HST image-reduction pipeline (CALNICA). Each of them has an exposure time of 639.9389 s and a mean pixel size of 0.07510 arcsec according to the Tiny Tim software v6.3 (Krist & Hook 2004). These images were obtained in the MULTIACCUM mode: each of them is a combination of several samples (19 in the present case). A combination of these 4 images is shown in the left panel of Fig. 1.
|Figure 1: Left: combination of the 4 calibrated images from the F160W filter data set obtained with the HST/NICMOS-2, the grey scale going from 0% (black) to 3.2% (white) of the maximum intensity; Right: combination of the 8 calibrated images from the F180M filter data set obtained with the HST/NICMOS-2, the grey scale going from 0% (black) to 4.7% (white) of the maximum intensity. The structure of the PSF is obvious. North is to the top and East to the left.|
The second set of images was obtained in July 2003 with the same instrument and the F180M medium-band filter (PI: D. A. Turnshek). As for the F160W filter, we used the calibrated images, here 8 of them: 4 are combinations of 18 samples and the other 4 combinations of 16 samples. The first 4 have an exposure time of 575.9418 s and the latter 4 an exposure time of 447.9474 s. The mean pixel size is, again according to the Tiny Tim software, 0.07568 arcsec. A combination of these calibrated images is shown in the right panel of Fig. 1.
The wavelength ranges of these two filters partly overlap. The narrower F180M filter was chosen in order to include the oxygen [O III] forbidden-line doublet (499-501 nm) at the redshift of the QSO.
The image reduction is divided into two parts: the image cleaning and the calculation of the sigma images (i.e. images containing the standard deviations of the pixel intensities). The first step of the cleaning consists in computing the intensities in counts per pixel; the second step in removing the sky background. As the NIC-2 detector is composed of 4 quadrants, a different constant value has to be subtracted from each of them. These constants were derived from the parts of the image showing no obvious light source.
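As an illustration of this per-quadrant sky subtraction, a minimal NumPy sketch is given below; the quadrant layout, the function interface and the use of a source mask with a median estimator are assumptions for the example, not the authors' actual code:

```python
import numpy as np

def subtract_quadrant_sky(image, source_mask):
    """Subtract a constant sky level from each of the 4 NIC-2 quadrants.

    image       : 2-D array in counts per pixel
    source_mask : boolean array, True where an obvious light source lies
    """
    ny, nx = image.shape
    out = image.copy()
    for ys in (slice(0, ny // 2), slice(ny // 2, ny)):
        for xs in (slice(0, nx // 2), slice(nx // 2, nx)):
            quad = image[ys, xs]
            free = ~source_mask[ys, xs]          # source-free pixels only
            out[ys, xs] = quad - np.median(quad[free])
    return out
```

For a NIC-2 frame, the source mask would flag the four quasar images and any visible extended emission, so that only empty sky enters the median.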
The second part consists in the calculation of the sigma images. We start from the sigmas calculated by the CALNICA pipeline and correct for two effects. First, we take into account the underestimation of the standard deviation for negative pixels (by replacing all negative intensities with a null value). Secondly, we make use of the HST flag files indicating bad pixels, e.g. cold or hot pixels. Using the inverted sigma images, we set the statistical weight of such bad pixels to zero, so that the information they provide has no weight in the deconvolution.
Let us mention that we do not remove cosmic-ray impacts from the images during the reduction process. Instead, we use the deconvolution residuals (see below) to spot the pixels likely to have been contaminated by a cosmic ray, and then set the inverted sigma value of such pixels to zero.
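The two corrections applied to the sigma images, together with the zero-weighting of flagged pixels, can be sketched as follows (a schematic illustration; the function name and interface are ours):

```python
import numpy as np

def build_weight_map(image, sigma, bad_pixel_flags):
    """Return the clipped image and the inverse-sigma map used to weight
    pixels in the deconvolution.

    image           : sky-subtracted image (counts)
    sigma           : standard-deviation map from the CALNICA pipeline
    bad_pixel_flags : boolean array, True for cold/hot/flagged pixels
    """
    # Negative intensities lead to underestimated sigmas: clip them to zero.
    clipped = np.maximum(image, 0.0)
    # The inverse sigma acts as a statistical weight; bad pixels get weight 0,
    # so the information they carry is ignored by the deconvolution.
    inv_sigma = np.where(bad_pixel_flags, 0.0, 1.0 / sigma)
    return clipped, inv_sigma
```

Cosmic-ray hits spotted in the residual maps would simply be added to `bad_pixel_flags` and the weight map rebuilt.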
All these manipulations are carried out with the IRAF package.
|Figure 2: PSF constructed by the Tiny Tim software for one of the frames in each set. We can easily see the spikes and the complex structure of the NIC-2 PSFs whatever the filter. Left: F160W, the grey scale goes from 0% (black) to 0.13% (white) of the peak intensity; Right: F180M, the grey scale goes from 0% (black) to 0.16% (white) of the peak intensity.|
The same technique, based on the MCS deconvolution algorithm, was applied to both sets of images in order to improve their resolution and sampling and, most importantly, to detect any significant extended structure which might be hidden by the complex PSFs. The method is based on the simultaneous deconvolution of all images from a set, as explained, e.g., in Courbin et al. (1998). This means that we attempt to find a light distribution that is compatible with all images obtained in a given instrument configuration (e.g. through a given filter). To do this, we allow a spatial translation between the individual images and, in some cases, a variation of the point source intensities. In order to improve the resolution while keeping a well-sampled light distribution, we use a sampling step 2 times smaller than the original pixel size and we choose, as the final PSF (i.e. the PSF of the deconvolved images), a Gaussian with a 2-pixel Full-Width-at-Half-Maximum (FWHM). Let us mention that, since the HST PSF varies with the position in the focal plane, and since the object is located in a different part of the detector at each exposure, each original image has its own individual PSF.
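The final Gaussian PSF of the deconvolved images can be illustrated with a short sketch (the grid size is an arbitrary choice for the example; only the 2-pixel FWHM on the finer grid and the unit normalisation come from the text):

```python
import numpy as np

def final_gaussian_psf(size=33, fwhm=2.0):
    """Target PSF of the deconvolved image: a circular Gaussian of 2-pixel
    FWHM on the 2x-finer sampling grid, normalised to unit total flux."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0   # centred coordinates
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()
```

An odd grid size keeps the Gaussian centred on a pixel; by construction its value one pixel away from the peak is exactly half the peak value.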
The originality of the present method is that the same images are used both to determine the PSF and to perform the deconvolution (basically to detect the diffuse background and to obtain the astrometry and photometry of all objects). It works only if there are several point sources in the field: this makes it possible to distinguish the structure belonging to the PSF (and thus appearing in the vicinity of each point source) from the diffuse background, assumed not to be identical around each source.
This new method is based on an iterative process. We start with a first approximation of the PSF constructed with the Tiny Tim software (see Fig. 2), with a sampling step two times smaller than the original one. That PSF is deconvolved by the final Gaussian PSF in order to obtain the deconvolution kernel, which we simply call the PSF in the following. This is a reasonable first approximation, although not accurate enough to obtain trustworthy deconvolved images. Indeed, when using that PSF for deconvolving the original images, significant structure appears around each point source, clearly showing that the Tiny Tim PSF departs from the actual one (see Fig. 3).
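In this scheme, the relation between the total PSF and the deconvolution kernel can be summarised (with simplified notation) as:

$$t(x) = (r \ast s)(x),$$

where $t$ is the total PSF modelled by Tiny Tim, $r$ the final Gaussian PSF of the deconvolved images and $s$ the deconvolution kernel; $s$ is thus obtained by deconvolving $t$ with $r$.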
Since no extra images of stars are available in the field to improve this PSF, we have to use the information in the point sources of the Cloverleaf itself. However, we know that there might be some extra structure under the 4 point sources, as well as a contribution from the lensing object. That is why we proceed iteratively, alternately improving the PSF and the diffuse background.
|Figure 3: Results of the simultaneous deconvolution for the F160W data set using a deconvolved Tiny Tim PSF. Left: deconvolved image, the grey scale going from 0% (black) to 0.45% (white) of the maximum intensity. Right: residuals (see text) of the deconvolution. The remnant structure around each point source is obvious.|
We now consider the application of this iterative process to the two sets of HST/NIC-2 images of the Cloverleaf.
For the F160W data set, 7 iterations were necessary while, for the F180M data set, convergence was reached after 3 iterations. This difference is due to the fact that the diffuse background is less intense relative to the point sources in the latter filter. Figures 4 and 5 illustrate the evolution of the PSF in the iterative scheme: they show the corrections applied at different stages. We can see that the first step of the iterative process significantly changes the PSF obtained with Tiny Tim. The subsequent steps bring smaller adjustments and finer details. In the case of the F180M filter, it is obvious that only 3 iterations are necessary, as the corrections already become negligible after the second step. The same happens after the sixth iteration for the F160W data set.
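The stopping rule described above can be sketched as a simple loop in which the iteration halts once the peak amplitude of the PSF correction becomes negligible; the tolerance value and the interface below are illustrative assumptions, not the authors' actual criterion:

```python
import numpy as np

def iterate_until_converged(psf0, correction_step, tol=1e-3, max_iter=10):
    """Iteratively improve a PSF until the applied correction becomes
    negligible (its peak amplitude falls below `tol` times the PSF peak).

    correction_step : callable returning the correction map for the current PSF
    """
    psf = psf0.copy()
    for n in range(1, max_iter + 1):
        corr = correction_step(psf)
        psf = psf + corr
        if np.max(np.abs(corr)) < tol * np.max(psf):
            break          # corrections are now negligible: converged
    return psf, n
```

With rapidly shrinking corrections, as in the F180M case, such a loop stops after only a few iterations.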
|Figure 4: Corrections applied to the PSFs at different stages of the process for one of the images of the F160W data set. The grey scale goes from -2.6% (black) to +2.6% (white) of the peak intensity of the deconvolved Tiny Tim PSF. Top left: corrections to the PSF in the first iteration (starting from the deconvolved Tiny Tim PSF). Top right: corrections at the second iteration. Bottom left: corrections at the fourth iteration. Bottom right: corrections at the last iteration.|
|Figure 5: Corrections applied to the PSFs at different stages of the process for one of the images of the F180M data set. The grey scale goes from -4.8% (black) to +4.8% (white) of the peak intensity of the deconvolved Tiny Tim PSF. Left: corrections to the PSF in the first iteration (starting from the deconvolved Tiny Tim PSF). Right: corrections at the last iteration.|
Now that we have an idea about the evolution of the PSFs, we can focus on the results of the deconvolution itself. Figures 6 and 7 show the deconvolved frames from the last iteration, respectively for the F160W and the F180M data set. The partial Einstein ring, which is the gravitationally-lensed image of the quasar host galaxy, and the lensing object can be seen for both sets on the background frame (top left) and on the background plus point source frame (top right). The lens galaxy appears less intense compared to the point sources in the F180M filter, which is expected as this is a medium-band filter including the [O III] emission lines (499-501 nm) at the redshift of the QSO and no expected emission line at the redshift of the lens. The partial Einstein ring also has a different structure: compared to the F160W filter, it appears more intense close to the point sources and less intense in between them. This suggests that the narrow-line region (NLR) is more compact than the global lens galaxy, which could have been expected.
|Figure 6: Final results of the simultaneous deconvolution for the F160W data set. North is to the top and East to the left. Top left: smooth background common to all images of the set where the lensing galaxy is encircled. Top right: deconvolved image (point sources plus smooth background); the point sources are labelled as in Magain et al. (1988). Bottom left: mean residual map of the simultaneous deconvolution. Bottom right: image reconvolved to the instrument resolution, with the point sources removed.|
The residuals $r_i$ of the deconvolution after the $n$-th iteration are defined, for each image $i$ of the set, as the difference between the observed image and the model reconvolved with the PSF of iteration $n$, expressed in units of the noise standard deviation:

$$r_i(x) = \frac{d_i(x) - (s_n \ast f_i)(x)}{\sigma_i(x)},$$

where $d_i$ is the observed image, $s_n$ the PSF at iteration $n$, $f_i$ the deconvolved light distribution of image $i$ and $\sigma_i$ its sigma map.
Another important guide through the different stages of the process is the reduced chi-squared (reduced χ²) which, theoretically, should be close to unity for a perfect deconvolution with a perfect PSF. In the last iterations it barely changes: the PSF is no longer significantly improved and the iterative process has converged. We calculate it for each set and each iteration step in the zone of interest, i.e. in a square containing the four point sources and the extended structures (ring plus lens). We obtain a reduced χ² of 3.845 after the seventh iteration for the F160W data set, and a reduced χ² of 1.125 for the F180M data set after the third iteration, which is very good. Let us mention that these values are computed taking into account all images of a given set, so that any slight incompatibility between some of the input images results in an increase of the reduced χ² that cannot be lowered by changing the model. A final reduced χ² of 1 means that the model is perfectly compatible with all the images of the set; it implies that all the images are statistically compatible with each other and that the PSF is perfectly known. Any inaccuracy in the data acquisition or reduction will increase the final reduced χ².
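The reduced χ² in the zone of interest, summed over all images of a set, can be sketched as follows (a schematic illustration; the interface is ours, and zero-weight pixels are excluded from the degrees of freedom):

```python
import numpy as np

def reduced_chi2(images, models, inv_sigmas, zone, n_params=0):
    """Reduced chi-squared over a rectangular zone of interest, summed over
    all images of the set.

    images, models, inv_sigmas : lists of 2-D arrays (model = reconvolved fit)
    zone     : (y0, y1, x0, x1) slice bounds of the zone of interest
    n_params : number of fitted parameters (reduces the degrees of freedom)
    """
    y0, y1, x0, x1 = zone
    chi2, ndata = 0.0, 0
    for d, m, w in zip(images, models, inv_sigmas):
        r = (d[y0:y1, x0:x1] - m[y0:y1, x0:x1]) * w[y0:y1, x0:x1]
        chi2 += np.sum(r**2)
        ndata += np.count_nonzero(w[y0:y1, x0:x1])  # ignore zero-weight pixels
    return chi2 / (ndata - n_params)
```

A value near 1 then means the model is statistically compatible with every image of the set, as discussed above.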
|Figure 7: Final results of the simultaneous deconvolution for the F180M data set. North is to the top and East to the left. Top left: smooth background common to all images of the set where the lensing galaxy is encircled. Top right: deconvolved image (point sources plus smooth background); the point sources are labelled as in Magain et al. (1988). Bottom left: mean residual map of the simultaneous deconvolution. Bottom right: image reconvolved to the instrument resolution, with the point sources removed.|
Table 1 gives the relative astrometry and photometry for the quasar images as well as for the lens galaxy in both filters. The coordinates are measured relative to component A (see Figs. 6 and 7). The apparent magnitudes are given in the Vega system.
Table 1: Relative astrometric and photometric measurements for the four components of the system and the lensing galaxy (G). The right ascensions and the declinations are given in arcsecond relative to component A. The photometry is given in apparent magnitudes in the Vega system. The internal error bars are also indicated (see text for an explanation on how they are derived).
As the geometric distortions depend on the position on the detector, their proper correction requires an individual deconvolution of each image. We then obtain the position of each point source (relative to source A) on each deconvolved image, corrected for distortion according to the formulae given in the NICMOS Data Handbook (Noll et al. 2004), and compute average values. For the point sources, this gives more accurate results than a simultaneous deconvolution with a mean correction on the coordinates. On the other hand, this is not true for the lensing galaxy and the Einstein ring. As these are much fainter objects, it is better to rely on the results of the simultaneous deconvolution, where the signal from the whole set of images is used to constrain the shape of these objects. A mean geometric correction can then be applied, whose internal errors are lower than the random uncertainties on these fainter components.
The error bars given in Table 1 are internal errors. They are calculated by deconvolving each image individually and comparing the coordinates and magnitudes obtained. The listed values are the standard deviation of the means.
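These internal error bars can thus be sketched as the standard deviation of the mean over the measurements from the individually deconvolved images (a one-line illustration, not the authors' code):

```python
import numpy as np

def std_of_mean(values):
    """Internal error bar: standard deviation of the mean of the
    measurements obtained from the individually deconvolved images."""
    v = np.asarray(values, dtype=float)
    # Sample standard deviation (ddof=1) divided by sqrt(N) gives the
    # standard deviation of the mean.
    return v.std(ddof=1) / np.sqrt(v.size)
```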
The astrometric precision for the point sources is about 0.5 milliarcsec in the F160W filter and 0.3 milliarcsec in the F180M filter. The higher precision in the medium band filter may be explained by the fact that the partial ring and the lens galaxy appear fainter relative to the point sources and thus have a lower contribution to the error bars.
Of course, the precision on the position of the lens galaxy is significantly lower. This is due to the facts that (1) it is a diffuse object; (2) it is much fainter than the point sources (about 4.5 mag in the F160W filter and 6.4 mag in the F180M filter) and (3) it is mixed with the PSF wings of the point sources.
Table 1 also shows that the results derived from both filters are not compatible within their internal error bars. As the geometry of the system is not expected to vary on the time scale of a few years, this disagreement suggests that the actual error bars are significantly larger than the internal errors. The causes may be diverse. As the two sets of data were acquired 6 years apart, with a different orientation of the HST and thus of the detector, and in different cycles of NICMOS (pre- and post-NCS, NICMOS Cooling System), some geometrical distortions may not have been completely taken into account. The uncertainties concerning the coefficients of the formulae used to correct for the geometrical distortions, as given in the NICMOS Data Handbook (Noll et al. 2004), account for an uncertainty of the order of 0.1 milliarcsec in each filter, which is about an order of magnitude smaller than the external errors we obtain. It is thus possible that a residual distortion of the NICMOS images remains, at the 10^-3 level (0.001 arcsec per arcsec). An imperfect separation of the partial Einstein ring from the point sources in the deconvolution process as well as some inaccuracies in the PSF recovery may also play a role.
The external errors, computed by comparing the source positions derived from the two data sets, are the following: the average difference between the point-source positions amounts to 1.4 milliarcsec. Assuming that the errors in both data sets contribute equally to this difference, we derive a value of 1.0 milliarcsec (i.e. 0.013 pixel) for the estimated accuracy in the position of the point sources.
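The arithmetic behind this estimate can be checked with a short computation, assuming equal and independent contributions from the two epochs and taking 75.4 mas as the approximate mean of the two pixel sizes quoted in Sect. 2:

```python
import math

def external_error_per_set(avg_diff_mas, pixel_mas):
    """Split the average inter-epoch position difference into the error of
    each data set (equal, independent contributions added in quadrature),
    and express it in original NIC-2 pixels."""
    per_set = avg_diff_mas / math.sqrt(2.0)   # quadrature split
    return per_set, per_set / pixel_mas

# 1.4 mas average difference, ~75.4 mas mean pixel size
mas, pix = external_error_per_set(1.4, 75.4)
```

This reproduces the ~1.0 milliarcsec (0.013 pixel) per-epoch accuracy quoted above.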
Our measurements are compared to those of Magain et al. (1988) and Turnshek et al. (1997) in Table 2. The latter were derived from images acquired with another HST instrument (the Wide Field Planetary Camera) and with a completely different image-processing technique, while the former were obtained from much lower resolution ground-based images. For both sets of results we indicate the error bars (which do not appear in the original paper of Magain et al.). The average difference between our results and those of Magain et al. (1988) amounts to 4 milliarcsec, which is comparable to the error bars on the measurements performed by these authors. The same comparison with Turnshek et al. (1997) gives an average difference of 2.6 milliarcsec, also compatible with their error bars.
Table 2: Relative astrometry of the Cloverleaf from Magain et al. (1988) and from Turnshek et al. (1997). The right ascension and the declination are given in arcsecond relative to component A. The error bars are also indicated.
The primary lens, a single galaxy, was first detected by Kneib et al. (1998), who derived its position relative to the quasar images after a PSF subtraction of the four of them.
Finally, as already mentioned, the intensity distribution along the partial Einstein ring is significantly different in the two filters: it is more regular in the wide band F160W filter than in the narrower F180M one. As the latter filter was chosen to emphasize the [O III] emission lines (499-501 nm) and thus to obtain a mapping of the narrow emission line region in the quasar host galaxy, such a difference is not unexpected. The partial ring observed in the broad-band filter is a distorted image of the full host galaxy, while the narrow emission-line region is more prominent in the F180M filter. In particular, two bright knots are seen close to the A and C images of the quasar in Fig. 7. These knots cannot correspond to deconvolution artifacts, which might be caused, e.g., by an imperfect modelling of the PSF. Indeed, such artifacts would be expected around all point sources and at the same position relative to these point sources, which is not the case. Moreover the observed positions fit well with the inverted parity expected between two neighbouring images in such a lensed system. These bright knots must therefore correspond to the emission line region in the quasar host galaxy, which is thus probably brighter on one side than on the other. A detailed modelling of the system, including an inversion of the lens equation, should allow the reconstruction of an image of the host galaxy and of the narrow line region. This would be the first time one could map the host and narrow line region of a BAL QSO at such a high redshift.
The accuracy of our results is further tested by carrying out the same procedure on a synthetic image having characteristics similar to those of the HST/NICMOS F160W Cloverleaf image: 4 point sources, a faint lensing object and a partial Einstein ring (see Fig. 8). This synthetic image was convolved with a PSF similar to the actual one, but unknown to the test performer. Random noise was then added to get a S/N comparable to that of the combined HST image (see Fig. 9).
|Figure 8: Synthetic image of a gravitationally-lensed quasar with a configuration similar to the Cloverleaf: 4 point sources, a faint lensing object, and a partial Einstein ring. The orientation is the same as the original F160W Cloverleaf images.|
|Figure 9: Synthetic image convolved with a HST-type PSF unknown to the test performer and with added random noise similar to the actual observation.|
The results obtained after three iterations are presented in Fig. 10, which displays the background alone, the point sources plus background, and finally the residual map. Some remnant structures can be seen under the point sources on the residual map. They are slightly weaker than those observed in the residual maps of the actual images, but show similar characteristics.
|Figure 10: Results of the last iteration on the synthetic image. Top: diffuse background. Middle: diffuse background plus point sources. Bottom: residual map of the deconvolution.|
On average, the flux in the background (ring + lens) is recovered to within 4%, which can be considered excellent since this diffuse background is very weak compared to the point sources. However, because of the smoothing constraint, the deconvolved ring and lens appear slightly smoother than the original ones. The largest differences are found under the brightest point source (A), where the deconvolved ring is about 43% below the original one.
Table 3: Relative astrometry of the artificial Cloverleaf. The two coordinates are given in arcsecond relative to component A.
Table 3 summarizes the astrometry carried out on this artificial Cloverleaf: the first pair of columns presents the measurements made on the final deconvolved image resulting from the iterative process; the second pair, the results obtained when using a deconvolved Tiny Tim PSF for a single deconvolution; and the last pair, the positions measured on the original synthetic image.
The differences between the positions obtained for a given point source reach a maximum of about 0.3 milliarcsec, with a mean value around 0.1 milliarcsec, which is slightly better than the internal precision estimated in Table 1. On the other hand, the lens galaxy position is not as accurate: the maximum difference amounts to 20 milliarcsec (i.e. a quarter of a pixel). Indeed, the position of such a very faint diffuse object is rather sensitive to inaccuracies in the PSF: any error in the wings of the bright point-source PSFs may affect the faint neighbouring objects.
Given these possible sources of errors and the results of the simulations, we estimate the accuracy on the lens galaxy position to amount to some 20 milliarcsec.
We have elaborated a new image-processing method, based on the MCS deconvolution algorithm, which allows us to determine the Point Spread Function and to deconvolve a set of images at the same time. It is applicable to images containing at least 2 point sources, so that the algorithm can separate the contributions of background objects from those of the PSF itself.
This technique is particularly well-suited to the analysis of multiply-imaged quasars: it allows the separation of extended structures (lensing galaxy, arcs or rings) from the point sources. It provides accurate photometry and astrometry, which is very important for modelling the lensed systems.
Our internal error bars on the source positions, which take into account the error coming from the deconvolution only, are of the order of 0.4 milliarcsec. When comparing the astrometry derived from the two different sets of images, we find external errors of the order of 1 milliarcsec. They probably originate in an incomplete correction of the geometric distortions.
Moreover, we detect the lensing galaxy and measure its position with an accuracy of the order of 20 milliarcsec, and discover a partial Einstein ring, which should allow us to constrain the deflection model and, through inversion of the lens equations, to estimate the light distribution in the quasar host galaxy and narrow line region.
The authors would like to thank Sandrine Sohy for her help and commitment in the programming part of this work. This work was supported by ESA and the Belgian Federal Science Policy Office under contract PRODEX 90195.