In addition, Fig. 5 shows that for reasonable errors on the known defocus distance, the error propagation coefficient is equal to one. Hence, the uncertainty on the known defocus distance translates directly into the uncertainty on the estimated defocus aberration.
This uncertainty on the defocus distance can be due to:
The camera pixel scale is needed to compute the oversampling factor. An error on this factor induces an error on the coefficients of all radially symmetric aberrations (defocus, spherical aberration, etc.), as shown in Fig. 6.
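As a minimal sketch of this dependence, the oversampling factor can be derived from the pixel scale by comparing it to the Shannon (Nyquist) pixel scale lambda/(2D). The pupil diameter (8 m, VLT) and the K-band wavelength (2.2 um) are assumed values, not quoted in this section; the 13.25 mas pixel scale is the reference value from the text.

```python
import numpy as np

MAS_PER_RAD = 180.0 / np.pi * 3600.0 * 1000.0  # milliarcseconds per radian

def oversampling_factor(wavelength_m, pupil_diam_m, pixel_scale_mas):
    """Ratio of the Shannon pixel scale lambda/(2D) to the actual pixel
    scale: a value > 1 means the PSF is oversampled."""
    shannon_mas = wavelength_m / (2.0 * pupil_diam_m) * MAS_PER_RAD
    return shannon_mas / pixel_scale_mas

# Assumed VLT-like values: D = 8 m, K band at 2.2 um, 13.25 mas pixels
s = oversampling_factor(2.2e-6, 8.0, 13.25)
print(f"oversampling factor in K band: {s:.2f}")
```

With these assumed values the factor comes out close to 2, consistent with the oversampling quoted later in the text for camera C50S in the K band.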
Figure 7: Influence of the pixel scale error. The reference value is set to 13.25 mas. For this value the error on the defocus coefficient estimation is assumed to be zero (the experimental conditions are the same as in Fig. 6).
An error on the pixel scale propagates essentially to the defocus aberration estimate. The difference between the maximal and minimal estimated values of the defocus coefficient is 17 nm. A slight error of 6 nm can be seen on the spherical aberration, but it remains negligible in comparison with the one on the defocus. In Fig. 7 the evolution of the estimation error of the defocus coefficient is plotted as a function of the pixel scale measurement error. The true value is assumed to be 13.25 mas, as measured during the first on-sky tests of the AO system. Since the accuracy of the pixel scale measurement is better than 0.2 mas, the wavefront error (WFE) due to this uncertainty can be estimated to be less than a few nanometers and therefore remains negligible.
Figure 8: Comparison of the estimated aberrations of various pinhole pairs. Camera C50S with the FeII narrow-band filter (1.644 μm).
The maximal detector translation along the optical axis is not sufficient to introduce significant diversity between the focused and defocused images. For the calibration of the CONICA aberrations, we therefore introduce the defocus by translating the "object" in the entrance focal plane. As mentioned before (Sect. 3.1), this implementation does not create a pure defocus but also, to first order, some spherical aberration. We have quantified the deviation from a pure defocus with the optical design software ZEMAX and shown that it can be neglected (a translation of 4 mm in the entrance focal plane induces defocus and a negligible spherical aberration a11 = 0.14 nm).
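For orientation, the order of magnitude of the defocus coefficient produced by a small longitudinal translation can be sketched with the usual small-defocus relation a4 ≈ dz / (16·sqrt(3)·N²) for a beam of f-number N. This relation assumes Noll-normalized Zernike polynomials; the exact conversion depends on the normalization convention, which is one reason the paper relies on ZEMAX rather than an analytic formula. The f-number used below is purely illustrative (the CONICA f-number is not given in this section).

```python
import math

def defocus_coefficient(dz_m, f_number):
    """Approximate RMS Zernike defocus coefficient (a4, Noll convention)
    induced by a longitudinal focal-plane translation dz_m, small-defocus
    limit. Illustrative formula, not the paper's ZEMAX computation."""
    return dz_m / (16.0 * math.sqrt(3.0) * f_number ** 2)

# Hypothetical numbers: 4 mm translation in an f/15 beam
a4 = defocus_coefficient(4e-3, 15.0)
print(f"a4 ~ {a4 * 1e9:.0f} nm RMS for a 4 mm translation at f/15")
```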
Figure 9: Evolution of the wavefront error as a function of …
The principle of the PD method is the minimization of a criterion (Eq. (10)) based on a convolution image model (Eq. (1)). The images should therefore be perfectly corrected for all instrumental features (background, dead pixels, etc.) in order to match the model. In practice, residual features are still present. In particular, in the case of CONICA images, a background fluctuation due to pick-up noise can leave residual features on the images even after a proper background calibration (see Sect. 6.2). These features are interpreted as signal by the phase-diversity algorithm and therefore bias the aberration estimation. The effect of such fluctuations is highlighted in Fig. 10 on experimental data. The difference between the PD results obtained with and without residual background features yields the WFE, which is plotted as a function of the image size. It can also be read as a function of the residual background influence, since this influence obviously depends on the image size: the smaller the images, the less important the residual background in comparison with the signal. Nevertheless, the image size should be large enough to contain the whole signal. Furthermore, the modelling of the pupil shape (see Sect. 5.3.3) must also be taken into account when choosing the image size. In Sect. 6.2 we describe a pre-processing algorithm which removes these residual background features.
Figure 10: Evolution of the wavefront error as a function of the image size. Experimental data have been used.
As presented in Sect. 2.3, the phase regularization in our algorithm is provided by a truncation of the solution space through the use of a finite (and small) number of unknowns (typically the first twenty Zernike coefficients). Figure 11 shows, on experimental data, the influence of the number of estimated Zernike coefficients on the reconstruction quality. Note that, in the case of measurements of the CONICA stand-alone aberrations, the pupil is unobscured and thus the Zernike polynomials are strictly orthogonal.
Figure 11: Evolution of the aberration estimation as a function of the number of Zernike polynomials in the PD algorithm. Experimental data have been used.
There exists a limit to the number of Zernike polynomials that can be estimated with a reasonable accuracy. Of course, this limit depends on the signal-to-noise ratio of the images. In the present case, this number is equal to 36. Note that if a more sophisticated regularization term were introduced in the PD algorithm, on both the object and the aberrations, this limitation could be overcome. Nevertheless, such a regularization is not needed here since the aberration amplitudes are negligible (less than a few nanometers) for Zi above i = 11 (i.e. the spherical aberration). The WFE between the estimated Zernike coefficients 4-15 and the estimated Zernike coefficients 4-36 is about 1.3 nm RMS. This WFE is very small and shows that the aliasing of the Zernike polynomials above 15 onto the estimated coefficients 4-15 is negligible.
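Because the Zernike polynomials are orthonormal on the unobscured pupil, the RMS WFE between two sets of estimated coefficients is simply the quadratic sum of the coefficient differences, with any mode absent from one set counted as zero. A minimal sketch (the coefficient values below are made up for illustration, not the measured CONICA coefficients):

```python
import numpy as np

def wfe_rms(coeffs_a, coeffs_b):
    """RMS wavefront error between two Zernike expansions, exploiting the
    orthonormality of the polynomials: missing high orders count as zero."""
    a, b = np.asarray(coeffs_a, float), np.asarray(coeffs_b, float)
    n = max(a.size, b.size)
    a = np.pad(a, (0, n - a.size))
    b = np.pad(b, (0, n - b.size))
    return np.sqrt(np.sum((a - b) ** 2))

est_low  = [30.0, -12.0, 5.0]        # nm RMS, hypothetical truncated estimate
est_high = [30.5, -11.2, 4.8, 0.6]   # nm RMS, hypothetical extended estimate
print(f"WFE = {wfe_rms(est_low, est_high):.2f} nm RMS")
```

This is the quantity quoted in the text (about 1.3 nm RMS between the 4-15 and 4-36 estimates), computed here on hypothetical numbers.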
The phase diversity concept proposed here is a monochromatic wavefront sensor (theoretically the concept can be applied to polychromatic images, but this requires an important modification of the algorithm to model the data (Seldin & Paxman 2000)). Nevertheless, it has been shown (Meynadier et al. 1999) that the use of broadband filters does not significantly degrade the accuracy as long as the relative spectral bandwidth remains lower than a few tens of percent.
As mentioned above, the PD algorithm cannot estimate a tip-tilt between the two images larger than … Therefore, a fine centering between the focused and defocused images must be performed before the aberration estimation (see Sect. 6.2).
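A standard recipe for this fine-centering step is to estimate the shift between the two images from the peak of their FFT-based cross-correlation and re-center before running the PD estimation. This is a generic sketch; the actual pre-processing of Sect. 6.2 may differ in its details (e.g. sub-pixel refinement).

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Integer-pixel shift (dy, dx) such that rolling img_b by (dy, dx)
    aligns it on img_a, from the peak of the FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Indices above n/2 wrap around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape))

# Toy check: a Gaussian spot shifted by (3, -2) pixels
n = 64
y, x = np.indices((n, n)) - n // 2
spot = np.exp(-(x**2 + y**2) / 8.0)
shifted = np.roll(np.roll(spot, 3, axis=0), -2, axis=1)
print(estimate_shift(shifted, spot))
```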
Since we consider here experimental data (see Sect. 6), the modelling of the pupil shape in the algorithm is critical, in particular the pixelisation effects. Indeed, in the PD algorithm the pupil definition depends on the image size and on the oversampling factor. For example, images of camera C50S in the K band are oversampled by a factor of 2, and a 32 × 32 image then leads to a pupil diameter of 8 pixels (see Fig. 12). In this case, the pixelisation of the pupil shape induces aberration estimation errors. These effects are illustrated in Fig. 13.
Therefore, large images are recommended to model the pupil well and to obtain accurate results. Nevertheless, two problems may occur when processing large images:
The evolution of the reconstruction error as a function of the pupil sampling is presented in Fig. 13. To minimize the residual background effects, all the background pixels (that is, pixels with no PSF signal) have been set to zero in the images.
Figure 13: Evolution of the wavefront reconstruction error as a function of the pupil sampling in the PD algorithm. The x axis gives the pupil diameter in pixels in the PD algorithm.
Considering the results shown in Figs. 13 and 10, along with the computation load, led us to choose an image size equal to … pixels for the K band and … pixels for the J band.
In this part, we have analyzed and quantified, on experimental and simulated data, the possible sources of error in the static aberration estimation for NAOS-CONICA. The main source of error is an imperfect knowledge of the system (that is, calibration errors). In particular, a precise knowledge of the defocus distance between the focused and defocused planes is essential.
If a very high precision is required on the estimation and correction of the static aberrations (for instance for a future very high SR AO system dedicated to exo-planet detection), PD must be taken into account in the early design of the system, in order to optimize it with respect to the constraints and error sources listed above.
Copyright ESO 2003