A&A 488, 375-381 (2008)
DOI: 10.1051/0004-6361:200809894

Speckle interferometry with adaptive optics corrected solar data

F. Wöger1 - O. von der Lühe2 - K. Reardon1,3


1 - National Solar Observatory, PO Box 62, Sunspot, NM 88349, USA
2 - Kiepenheuer-Institut für Sonnenphysik, Schöneckstr. 6, 79104 Freiburg, Germany
3 - INAF - Osservatorio Astrofisico di Arcetri, 50125 Firenze, Italy

Received 2 April 2008 / Accepted 17 May 2008

Abstract
Context. Adaptive optics systems are used on several advanced solar telescopes to enhance the spatial resolution of the recorded data. In all cases, the correction remains only partial, requiring post-facto image reconstruction techniques such as speckle interferometry to achieve consistent, near-diffraction limited resolution.
Aims. This study investigates the reconstruction properties of the Kiepenheuer-Institut Speckle Interferometry Package (KISIP) code, with focus on its phase reconstruction capabilities and photometric accuracy. In addition, we analyze its suitability for real-time reconstruction.
Methods. We evaluate the KISIP program with respect to its scalability and the convergence of the implemented algorithms with dependence on several parameters, such as atmospheric conditions. To test the photometric accuracy of the final reconstruction, comparisons are made between simultaneous observations of the Sun using the ground-based Dunn Solar Telescope and the space-based Hinode/SOT telescope.
Results. The analysis shows that near real-time image reconstruction with high photometric accuracy of ground-based solar observations is possible, even for observations in which an adaptive optics system was utilized to obtain the speckle data.

Key words: techniques: high angular resolution - techniques: image processing - techniques: interferometric - Sun: photosphere

  
1 Introduction

Adaptive optics (AO) systems have been introduced at many solar telescopes in recent years, making large-aperture facilities feasible. However, it is evident that any AO correction is only partial. Thus, to achieve diffraction-limited performance of the telescope, further post-processing of the observations becomes necessary. Several algorithms for image reconstruction have evolved as computational power has increased rapidly. On the one hand, techniques based on blind deconvolution, such as multiframe blind deconvolution or the even more general multi-object multiframe blind deconvolution, have become popular in recent years (van Noort et al. 2005). On the other hand, techniques based on speckle interferometry, which originated in the 1970s (Labeyrie 1970; Knox & Thompson 1974; Weigelt 1979; Lohmann et al. 1983), have been refined and improved since the 1980s.

The rapid development of computer technology, especially in the field of multi-core processors, makes a real-time application of reconstruction algorithms to speckle interferometric data feasible and warrants further development. The need for real-time, or at least near real-time, processing is clear when considering that speckle data is acquired at high data rates: in general, a single ``speckle burst'' consists of approximately 100 images observed at a frame rate of around 15 images per second (or higher). When observing several hours a day, this leads to a data volume of several hundred gigabytes of unreduced data per day. Even though the cost per byte is continually decreasing, the handling (transfer and distribution) of such volumes is a costly and lengthy process. Reducing the speckle data at the telescope site is therefore an important step to increase the telescope's efficiency, because the data volume is decreased by a factor of around 100. This makes real-time data reduction attractive when post-processing techniques like speckle interferometry are considered for image reconstruction. Some aspects of applying post-processing algorithms to speckle data in near real-time have already been explored by Denker et al. (2001).

In this article, we present the characteristics of the Kiepenheuer-Institut Speckle Interferometry Package (KISIP, Mikurda & von der Lühe 2006; von der Lühe 1993), which has been rewritten in the C programming language and enhanced for parallel processing. In Sect. 2, we give an overview of the implemented algorithms. Section 3 describes our study of the performance of the two implemented phase reconstruction algorithms, as well as the overall scalability of the code with an increasing number of computational nodes. In addition, the photometric accuracy of the final reconstruction is tested with both a ground- and space-based telescope co-temporally observing the same target on the Sun.

  
2 Algorithmic details

In this section, we briefly describe how KISIP is implemented internally to allow for parallel processing, as well as the algorithms employed for the reconstruction of the Fourier phases and amplitudes.

In general, the imaging process through atmosphere and telescope is best described in the Fourier domain. Using the incoherent, space-invariant imaging equation, we get for a speckle burst consisting of N images

\begin{displaymath}I_{i}(\vec{f}) = O(\vec{f})~S\!_{i}(\vec{f}), \quad 1 \leq i \leq N,
\end{displaymath} (1)

where $\vec{f}$ is the two-dimensional spatial frequency and $I_{i}(\vec{f})$ is the Fourier transform of the ith observed image of the object described by $O(\vec{f})$. The term $S\!_{i}(\vec{f})$ is the ith transfer function that incorporates aberrations due to both atmosphere and telescope, and is generally a complex function. We assume that there are no static aberrations in the telescope, so that complex contributions to the transfer function only arise from phase distortions due to Earth's turbulent atmosphere. This is justified by the fact that most solar telescopes today use AO systems. These systems are capable of correcting some of the atmospheric aberrations in real time and additionally remove most of the static aberrations efficiently.
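
For illustration, the following Python/NumPy sketch generates a synthetic burst according to Eq. (1). The toy transfer function is built from a circular pupil with a purely random (non-Kolmogorov) phase screen; the function names, grid sizes, and aberration statistics are illustrative assumptions and are not part of KISIP.

import numpy as np

def toy_otf(n, pupil_radius_frac=0.25, phase_rms=2.0, rng=None):
    # Toy instantaneous transfer function: the Fourier transform of the
    # point spread function of an aberrated circular pupil.  The random
    # phase screen is illustrative only (no Kolmogorov statistics).
    rng = np.random.default_rng() if rng is None else rng
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = (np.hypot(x, y) <= pupil_radius_frac * n).astype(float)
    phi = rng.normal(scale=phase_rms, size=(n, n))
    psf = np.abs(np.fft.ifft2(pupil * np.exp(1j * phi))) ** 2
    otf = np.fft.fft2(psf)
    return otf / otf[0, 0]                      # normalize to unity at f = 0

def simulate_burst(obj, n_frames=100, rng=None):
    # Eq. (1): each short-exposure spectrum is I_i(f) = O(f) S_i(f).
    rng = np.random.default_rng() if rng is None else rng
    O = np.fft.fft2(obj)                        # object spectrum (square array assumed)
    return np.array([np.real(np.fft.ifft2(O * toy_otf(obj.shape[0], rng=rng)))
                     for _ in range(n_frames)])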

At an early stage of the reconstruction process, each recorded short-exposure frame is split into subframes that are roughly the size of the isoplanatic patch (the area in the field of view for which the optical transfer function is considered constant) and that overlap by half of their size. This makes a parallel treatment of the subframes easy, as they are sent to separate computation nodes using the message passing interface (MPI Forum 1997). The KISIP package separates the image's Fourier phases from its amplitudes. The Fourier phases are treated with unity amplitude by both of the implemented phase reconstruction algorithms, which are described in further detail below. Fourier amplitudes are reconstructed independently. In what follows, we give a brief overview of these well-known techniques that form the basis of KISIP.
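
A minimal sketch of this subfield decomposition is given below (Python/NumPy, with hypothetical function names); the MPI distribution of the subfields to the computation nodes and the apodization and reassembly details of KISIP are not reproduced.

import numpy as np

def split_into_subfields(frame, sub=128):
    # Split a 2D frame into sub x sub subfields that overlap by half their
    # size.  The offsets are returned so that reconstructed subfields can
    # later be reassembled on the original grid; border handling is naive.
    step = sub // 2
    ny, nx = frame.shape
    subfields, offsets = [], []
    for y0 in range(0, ny - sub + 1, step):
        for x0 in range(0, nx - sub + 1, step):
            subfields.append(frame[y0:y0 + sub, x0:x0 + sub].copy())
            offsets.append((y0, x0))
    return np.array(subfields), offsets

# A 1024 x 1024 frame yields a 15 x 15 grid of half-overlapping 128-pixel subfields.
subs, offs = split_into_subfields(np.zeros((1024, 1024)))
assert len(subs) == 15 * 15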

  
2.1 Phase reconstruction

The KISIP program incorporates two different algorithms to reconstruct the object's Fourier phases.

In one case, the package uses an extension of the Knox-Thompson (KT) algorithm (Knox & Thompson 1974) which is based on the original authors' idea to use average cross-spectra for the reconstruction of the object's Fourier phases. The Knox-Thompson average cross-spectrum is defined as

 \begin{displaymath}
\begin{array}{lcl}
C(\vec{f}, \vec{\delta})
& = & \langle I_{i}(\vec{f})~I^{*}_{i}(\vec{f} - \vec{\delta}) \rangle_{i} \\
& = & O(\vec{f})~O^{*}(\vec{f} - \vec{\delta})~\langle S_{i}(\vec{f})~S^{*}_{i}(\vec{f} - \vec{\delta}) \rangle_{i}.
\end{array}\end{displaymath} (2)

Here, $\langle \cdot \rangle_{i}$ is the average over the N observed subframe images that incorporate independent realizations of atmospheric distortions. The two-dimensional, spatial frequency shift vector $\vec{\delta}$ can have a magnitude of up to the seeing limit in the Fourier domain, $r_{0} / \lambda$, where $\lambda$ denotes the observed wavelength and r0 is the Fried parameter describing the prevailing atmospheric conditions. For large N, it can then be shown that the atmospheric transfer function associated with the KT average cross-spectrum,

 \begin{displaymath}
KTTF(\vec{f}) = \langle S_{i}(\vec{f})~S^{*}_{i}(\vec{f} - \vec{\delta}) \rangle_{i},
\end{displaymath} (3)

remains finite up to the diffraction limit, $D / \lambda$, with D being the telescope pupil diameter. In addition, it is a real entity merely scaling the Fourier amplitudes (Knox & Thompson 1974; von der Lühe 1988). Thus, the extraction of the object's Fourier phases $O(\vec{f})/\vert O(\vec{f})\vert$ becomes possible from Eq. (2) by use of a recursive or iterative algorithm. The incorporated algorithm extends the original idea of Knox and Thompson by using more than two (linearly independent) vectors $\vec{\delta}$. The extension is detailed in von der Lühe (1993) and Mikurda & von der Lühe (2006).
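
As an illustration of Eq. (2) and of the recursive phase extraction, the following sketch accumulates the average cross-spectrum for a single shift vector and propagates the phases along that direction only. The extended algorithm in KISIP combines several linearly independent shift vectors in an iterative scheme; the simplified recursion below is not that implementation.

import numpy as np

def kt_cross_spectrum(frames, delta=(0, 1)):
    # Average Knox-Thompson cross-spectrum, Eq. (2), for one shift vector
    # `delta` given in frequency pixels; `frames` is an (N, ny, nx) burst.
    specs = np.fft.fft2(frames, axes=(-2, -1))
    shifted = np.roll(specs, shift=delta, axis=(-2, -1))    # I_i(f - delta)
    return np.mean(specs * np.conj(shifted), axis=0)

def recursive_kt_phases(cross, n_freq=64):
    # Simplified recursion along the shift direction only: since the KT
    # transfer function is real, arg C(f, delta) = arg O(f) - arg O(f - delta),
    # anchored at the (real) zero frequency.
    dphi = np.angle(cross)
    phases = np.zeros_like(dphi)
    for k in range(1, n_freq):
        phases[:, k] = phases[:, k - 1] + dphi[:, k]
    return phases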


Figure 1: Time required for one reconstruction versus the number of computation nodes used, with either 212 992 cross-spectrum (KT) or 221 320 bispectrum (IWLS) values used for averaging.

The other algorithm available within the package is a speckle masking (or triple correlation) algorithm. Speckle masking algorithms, the generalization of Knox and Thompson's idea using the bispectrum, have been used since Weigelt (1979) and Lohmann et al. (1983). The bispectrum is defined as

 \begin{displaymath}
\begin{array}{lcl}
B(\vec{u}, \vec{v})
& = & \langle I_{i}(\vec{u})~I_{i}(\vec{v})~I^{*}_{i}(\vec{u}+\vec{v}) \rangle_{i} \\
& = & O(\vec{u})~O(\vec{v})~O^{*}(\vec{u}+\vec{v})~\langle S_{i}(\vec{u})~S_{i}(\vec{v})~S^{*}_{i}(\vec{u}+\vec{v}) \rangle_{i}.
\end{array}\end{displaymath} (4)

In analogy to Eq. (3), it can be shown that the speckle masking transfer function

 \begin{displaymath}
SMTF(\vec{u}, \vec{v}) = \langle S_{i}(\vec{u})~S_{i}(\vec{v})~S^{*}_{i}(\vec{u}+\vec{v}) \rangle_{i}
\end{displaymath} (5)

is a real-valued entity and remains finite up to the diffraction limit (von der Lühe 1985). The rather stringent restriction on the magnitude of $\vec{\delta}$ in the extended KT approach is relaxed for $\vec{u}$ and $\vec{v}$ from the seeing limit to the diffraction limit. The algorithm implemented in KISIP is based on a technique described by Matson (1991), who proposes the reconstruction of phases using an iterative weighted least-squares (IWLS) fit to the bispectrum. Thus, it makes full use of the bispectrum, which is computed with user-specified truncation parameters to restrict it to a manageable size (e.g., Pehlemann & von der Lühe 1989).
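
The bispectrum accumulation of Eq. (4) can be sketched as follows for a small, user-supplied list of frequency pairs; the truncation strategy (Pehlemann & von der Lühe 1989) and the iterative weighted least-squares phase fit of Matson (1991) used by KISIP are not reproduced here.

import numpy as np

def average_bispectrum(frames, uv_pairs):
    # Average bispectrum, Eq. (4), evaluated at the listed (u, v) frequency
    # pairs (given as pixel indices into the FFT grid).  The phase of each
    # value obeys the closure relation arg B = phi(u) + phi(v) - phi(u + v).
    specs = np.fft.fft2(frames, axes=(-2, -1))
    bisp = np.empty(len(uv_pairs), dtype=complex)
    for k, (u, v) in enumerate(uv_pairs):
        w = (u[0] + v[0], u[1] + v[1])
        bisp[k] = np.mean(specs[:, u[0], u[1]] *
                          specs[:, v[0], v[1]] *
                          np.conj(specs[:, w[0], w[1]]))
    return bisp

# Example with a few low-frequency pairs (array indices, not physical frequencies):
pairs = [((0, 1), (0, 1)), ((0, 1), (1, 0)), ((1, 0), (1, 1))]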

As the extended KT algorithm uses cross-spectra, i.e. products of two Fourier phase values (see Eq. (2)), it is computationally less expensive than a speckle masking algorithm, which involves the product of three phase values (Eq. (4)). It has been shown that both implemented algorithms can be equivalent (Ayers et al. 1988). However, the Knox-Thompson algorithm is sensitive to alignment errors of the speckle images, whereas triple correlation algorithms do not suffer from this because of the phase closure relation inherent to the bispectrum. In bad seeing conditions, this leads to a higher reconstruction error when using the extended Knox-Thompson algorithm and a better performance of the speckle masking algorithm.


Figure 2: Convergence properties of the two implemented algorithms. Upper row: KT algorithm; lower row: IWLS algorithm. Columns from left to right: $r_{0}$ = 5, 7, 10, 20 cm. Note that a panel of the IWLS algorithm corresponds to a subpanel of a KT panel. Shown is the residual phase variance per pixel in the Fourier domain.

  
2.2 Amplitude reconstruction

The KISIP package reconstructs the object's Fourier amplitudes using the well-known method of Labeyrie (1970). With

\begin{displaymath}\langle \vert I_{i}(\vec{f})\vert^{2} \rangle_{i} = \vert O(\vec{f})\vert^2~\langle \vert S_{i}(\vec{f})\vert^{2} \rangle_{i}
\end{displaymath} (6)

the object's spatial power spectrum $\vert O(\vec{f})\vert^2$ becomes accessible if the speckle transfer function (STF)

\begin{displaymath}STF(\vec{f}) = \langle \vert S_{i}(\vec{f})\vert^{2} \rangle_{i}
\end{displaymath} (7)

is known. Because it is not possible to simultaneously observe a reference point source in the sky when observing the Sun, the Fourier amplitudes need to be calibrated with model STFs, the accuracy of which is vital to the photometry of the final reconstruction. In order to choose the correct model function, the value of Fried's parameter $r_{0}$, a measure of the strength of atmospheric turbulence, needs to be well known. When solar data is reconstructed, the most common way to estimate $r_{0}$ is to compute the spectral ratio from the observed data itself. This method was suggested by von der Lühe (1984) and is generally used by all solar speckle reconstruction algorithms. We have improved the method for the estimation of $r_{0}$ originally described in von der Lühe (1984) to achieve a higher accuracy, especially in situations where the spectral ratio is not well defined. A direct, iterative fit of model spectral ratios to the measured data, using the squared differences between model and data as a metric, increases reliability because more data points are used in the fit. The model functions are precomputed and accessible during the process via a lookup table. When an AO system was used for the observations, the models need to be adjusted for the AO's performance and atmospheric anisoplanatism (Wöger & von der Lühe 2007). To correct for anisoplanatism, amplitudes are calibrated separately within each subfield: the spectral ratio delivers the appropriate value of $r_{0}$ and, for AO-corrected data, the calibration additionally accounts for the subfield's estimated distance from the AO reference point. The details are described in Wöger & von der Lühe (2007). Using this information, the photometric properties of the observed object can be reconstructed to the highest accuracy possible.
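
The following sketch illustrates the amplitude calibration of Eqs. (6) and (7) and a simple grid search over precomputed model spectral ratios. The model transfer functions themselves, which in KISIP depend on $r_{0}$, the AO performance, and the distance from the lock-point (Wöger & von der Lühe 2007), are assumed to be given, and the grid search is only a stand-in for the iterative fit described above.

import numpy as np

def object_power_spectrum(frames, model_stf, floor=1e-12):
    # Labeyrie's method, Eqs. (6)-(7): divide the average image power
    # spectrum by a model speckle transfer function supplied by the caller.
    specs = np.fft.fft2(frames, axes=(-2, -1))
    mean_power = np.mean(np.abs(specs) ** 2, axis=0)
    return mean_power / np.maximum(model_stf, floor)

def fit_r0(measured_ratio, model_ratios, r0_grid):
    # Least-squares comparison of the measured spectral ratio with a lookup
    # table of precomputed model ratios; model_ratios has shape
    # (len(r0_grid), ...) matching measured_ratio.
    chi2 = np.sum((model_ratios - measured_ratio[None, ...]) ** 2,
                  axis=tuple(range(1, model_ratios.ndim)))
    return r0_grid[np.argmin(chi2)]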


Figure 3: Top: deconvolved image of the quiet Sun region near disk center observed with Hinode on April 18th, 2007, at 15:30:30 UT. Bottom: the same region observed with the Dunn Solar Telescope (DST); the data was post-processed using KISIP. Images are shown using the same intensity scale.

  
3 Reconstruction performance

We are interested in the possibility of using speckle interferometry in real-time applications. Future ground-based solar telescopes will have apertures of 1.5 m and more (Denker et al. 2006; Wagner et al. 2006; Volkmer et al. 2006) and will be equipped with AO systems to acquire high-resolution observations. As the correction of an AO system can only be partial and its performance depends on the prevailing atmospheric conditions, there is a need for post-processing the data to achieve the diffraction limit of the telescope as often as possible. To this end, we have analyzed several important aspects of the KISIP code.


Figure 4: Close-up of the region indicated in Fig. 3. Left: deconvolved Hinode image. Right: reconstructed DST image. Images are shown using the same intensity scale.

  
3.1 Code scalability

The KISIP code has been tested on various combinations of platforms and operating systems, demonstrating both its platform independence and its scalability with an increasing number of computational nodes. For the tests, we used a data burst of 100 images consisting of $1024 \times 1024$ pixels. This data set was reconstructed with either 212 992 cross-spectrum (KT) or 221 320 bispectrum (IWLS) values and 30 iterations per 128 $\times$ 128 pixel subfield. We recorded the time to perform the reconstruction using an increasing number of computational nodes up to the maximum.

We present in Fig. 1 the results from tests run on a SuSE Enterprise Linux 10 cluster with 23 computational nodes plus one master node for job administration. This facility is installed at the Kiepenheuer-Institut für Sonnenphysik. Each of the 24 nodes is equipped with two Intel Xeon 5160 CPUs with a 3.00 GHz clock speed and 4 GB of random access memory (RAM). The employed CPUs have two cores each, leading to a total of 92 usable processing units; the master node is usually not involved in computations. Each computer was connected to the master node via Infiniband fibers. As expected, the IWLS algorithm is slower than the KT algorithm because of the more involved computation. Additionally, Fig. 1 demonstrates that the code scales linearly with an increasing number of nodes: the computational time decreases with the inverse of the number of nodes. However, the reconstruction time saturates at around 22 s for both algorithms on this platform.

This saturation is an important issue that requires close attention when designing a platform intended to achieve (near) real-time reconstruction performance. The saturation is due to latency between the processors, be it because of restricted interconnect bandwidth between the computation nodes or because of slow processor speed. Another cause of saturation is the overhead in the code that distributes the data to the computation nodes. Thus, an ideal system would provide several multi-core processors connected by a fast system bus, which is the current trend in processor development and high-performance computing. Nevertheless, even today a system such as the one tested above would provide near real-time performance for a camera that reads out and stores a $1024 \times 1024$ pixel frame at an effective rate of 5 frames per second.
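
The measured behaviour can be summarized by a simple fixed-overhead scaling model, in which the parallel work shrinks with the number of nodes while data distribution and latency set a floor. The overhead in the sketch below is chosen to match the roughly 22 s plateau quoted above; the single-node work figure is purely illustrative.

def reconstruction_time(n_nodes, t_overhead=22.0, t_parallel=900.0):
    # Illustrative model: serial overhead plus perfectly parallel work.
    # t_parallel (the assumed single-node computation time) is made up.
    return t_overhead + t_parallel / n_nodes

for n in (1, 2, 4, 8, 16, 23):
    print(n, round(reconstruction_time(n), 1))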


Figure 5: Left: intensity histograms for the Hinode (red) and the DST image (black) as shown in Fig. 3. Right: azimuthally integrated spatial power spectra of the Hinode (red) and the DST image (black).

  
3.2 Convergence properties

In addition to the code's scalability, we analyzed its convergence properties with synthetically distorted data cubes, which were aberrated using phase screens corresponding to atmospheric conditions with $r_{0}$ = 5, 7, 10, and 20 cm. The resulting four data cubes contain 100 images of 256 $\times$ 256 pixels each and are the same sets as used in the study of Mikurda & von der Lühe (2006). We were interested in the number of cross- and bispectrum values, as well as the number of iterations, needed for a satisfactory convergence of the iterative phase reconstruction for a subfield size of $128~\times~128$ pixels. For each iteration, we compute the quantity

\begin{displaymath}\kappa(j)=\frac{1}{M}\sum_{\vec{f}} (\Phi_{j}(\vec{f})-\Phi_{j-1}(\vec{f}))^2,
\end{displaymath} (8)

where j represents the jth iteration step and M is the total number of evaluated frequency points $\vec{f}$. The quantity $\kappa(j)$ measures the mean squared difference between the phases of the current iteration step and those of the previous one, and thus serves as an indicator of convergence.
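
A direct transcription of Eq. (8) is straightforward; the optional wrapping of the phase differences to $[-\pi, \pi)$ below is a practical safeguard against $2\pi$ jumps and is not claimed to be part of KISIP.

import numpy as np

def kappa(phi_current, phi_previous, wrap=True):
    # Convergence metric of Eq. (8): mean squared change of the reconstructed
    # Fourier phases between two successive iterations, evaluated over the
    # M frequency points that enter the reconstruction.
    diff = phi_current - phi_previous
    if wrap:
        diff = np.angle(np.exp(1j * diff))     # map differences to [-pi, pi)
    return np.mean(diff ** 2)

Iterations can then be stopped once $\kappa(j)$ falls below a chosen threshold or ceases to decrease significantly.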

Figure 2 shows the results of a detailed analysis of the implemented algorithms, focusing on the convergence properties as a function of the atmospheric conditions and the number of evaluated cross- and bispectrum values. As can be seen, these parameters are important for the convergence of both the KT and the IWLS algorithms. With less severe atmospheric conditions, both algorithms converge faster, as the signal-to-noise ratio in the images increases with increasing values of $r_{0}$. Generally, the KT algorithm seems to converge more slowly than the IWLS algorithm, which is likely the result of the additional information used in the averaging process of the bispectrum computation. The penalty is longer computational time, as mentioned before. Nevertheless, more than 30 iterations (or even fewer in the case of the IWLS algorithm) in combination with approximately 250 000 evaluated cross- or bispectrum values for a $128 \times 128$ pixel subfield do not lead to a significant further change in the reconstructed phase, which allows the computational time to be minimized by optimizing the reconstruction parameters. This fact is important with respect to real-time reconstruction of speckle data.

  
3.3 Photometric accuracy

To test the accuracy of the phase reconstruction, we compare speckle-reconstructed data taken at the Dunn Solar Telescope (DST) of the National Solar Observatory with data observed simultaneously with the SOT instrument on the Hinode satellite. The data were observed at 15:30:30 UT on April 18th, 2007, with both facilities using very similar interference filters of 1 nm FWHM in the Fraunhofer G-band at 430.5 nm. A region of quiet solar granulation near Sun center was the target of the observations; the Fried parameter was estimated to be $r_{0} \approx 7$ cm, corresponding to average seeing. The data observed at the DST and at Hinode have been calibrated using standard flatfielding procedures.

The DST speckle burst was reconstructed using the IWLS algorithm with a subfield size of 128 $\times$ 128 pixels, which corresponds to approximately 7 $^{\prime\prime}$. It is important for the photometric accuracy that the subfield size be chosen to be at most the size of the isoplanatic patch. However, subfields smaller than 128 $\times$ 128 pixels are not recommended, because numerical issues could arise during the Fried parameter estimation.

The Hinode image was deconvolved using a point spread function that was computed from the measured aberrations of the SOT main mirror (Suematsu et al. 2008). In addition, a Wiener filter was applied, with a noise estimate derived from the power at frequencies higher than the theoretical diffraction cutoff frequency. The deconvolution is necessary to make the information content of the Hinode image comparable to that of the speckle interferometric reconstruction. It is successful up to 80% of the diffraction limit of the telescope. Beyond those spatial frequencies, the employed Wiener filter cuts off the signal due to a poor signal-to-noise ratio.
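
A minimal Wiener deconvolution in the spirit of the procedure described above is sketched below; the actual Hinode/SOT point spread function (Suematsu et al. 2008) is not modelled, the PSF is assumed to be provided on the same grid as the image, and the noise estimate from the power beyond the diffraction cutoff follows the text only schematically.

import numpy as np

def wiener_deconvolve(image, psf, cutoff_frac):
    # Wiener deconvolution with the noise power estimated from the mean
    # power beyond the theoretical diffraction cutoff, given here as a
    # fraction of the Nyquist frequency.  `psf` must be centered and have
    # the same shape as `image`.
    I = np.fft.fft2(image)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    power = np.abs(I) ** 2
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    f = np.hypot(fx, fy)
    noise = np.mean(power[f > cutoff_frac * 0.5])        # beyond the cutoff
    # The observed power serves as a crude proxy for the signal power spectrum.
    wiener = np.conj(H) / (np.abs(H) ** 2 + noise / np.maximum(power, noise))
    return np.real(np.fft.ifft2(I * wiener))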

After reduction and alignment of the DST data to that of Hinode, the overlap between the images is 912 $\times$ 912 pixels, corresponding to a field of view of almost 50 $^{\prime\prime}$. Figure 3 demonstrates that the speckle algorithm is capable of reconstructing the same structures seen by a telescope that is not hampered by atmospheric turbulence. The minor differences in the fine structure of the images arise mainly from the fact that the speckle burst of the DST spans approximately 20 s, as opposed to the single exposure of the Hinode satellite. Thus, the data are only approximately simultaneous, and some differences can be attributed to the evolution of the granulation. However, as the correlation time of the solar granulation is approximately 5 min, the effect is small.

The photometric differences in the images are evaluated in several ways. We calculate the contrast of an image I with

\begin{displaymath}\mathcal{C}_{I} = \sigma_{I} / \langle I \rangle,
\end{displaymath} (9)

where $\sigma_{I}$ is the standard deviation of the image intensity and $\langle I \rangle$ its mean value. In the speckle-reconstructed DST image, this value is $\mathcal{C}_{\rm DST} = 15.1\%$, which is close to the value $\mathcal{C}_{\rm Hin} = 16.3\%$ in the deconvolved Hinode image. To analyze this further, we computed histograms of the images' intensities with a bin size of 0.01 in normalized intensity. The histograms, shown in Fig. 5 (left), are very similar and indicate the similarity of the intensity distributions. The differences can be attributed to certain spatial scales using the azimuthally integrated spatial power spectra shown for both images in Fig. 5 (right). The abscissa is normalized to the theoretical spatial cutoff of Hinode's SOT. Both curves show a striking similarity, demonstrating the accuracy of the amplitude reconstruction discussed above. We attribute the deviations noticeable at normalized frequency 0.5 and above to the reduction of the signal-to-noise ratio caused by anisoplanatism. While anisoplanatism has been accounted for in the implemented model transfer functions, the signal-to-noise ratio in the fields furthest away from the structures that were used as reference for the AO correction (``lock-point'') is lower. This can lead to lower phase reconstruction performance in those parts of the observed field, and thus reduced contrast contributions from higher spatial frequencies.
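
The quantities compared above can be computed as in the following sketch; the bin width of 0.01 in normalized intensity matches the text, while the number of radial bins for the azimuthally integrated power spectrum is an arbitrary choice.

import numpy as np

def rms_contrast(image):
    # Granulation contrast of Eq. (9): standard deviation over mean intensity.
    return np.std(image) / np.mean(image)

def intensity_histogram(image, bin_width=0.01):
    # Histogram of intensities normalized to the mean image intensity.
    norm = image / np.mean(image)
    bins = np.arange(norm.min(), norm.max() + bin_width, bin_width)
    return np.histogram(norm, bins=bins)

def azimuthal_power_spectrum(image, n_bins=128):
    # Azimuthally integrated spatial power spectrum, as shown in Fig. 5.
    P = np.abs(np.fft.fftshift(np.fft.fft2(image - np.mean(image)))) ** 2
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    return np.array([P.ravel()[idx == k].sum() for k in range(n_bins)])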

Another measure of the similarity of the images is given by the ``image distance'' metrics defined in Mikurda & von der Lühe (2006), restated here for convenience:

\begin{displaymath}D^{2} = \frac{1}{A}\sum_{\vec{x}} (I_{\rm DST}(\vec{x}) - I_{\rm Hin}(\vec{x}))^{2}
\end{displaymath} (10)

and

\begin{displaymath}E^{2} = \frac{1}{A}\sum_{\vec{x}} (I_{\rm DST}(\vec{x}) - a - b~I_{\rm Hin}(\vec{x}))^{2},
\end{displaymath} (11)

where A is the area of the images. We evaluate the Euclidean image distance to be $D^{2} = 0.00368925$ and $E^{2} = 0.00316514$. The values of a and b were computed from a linear regression analysis of the scatter of reconstructed versus Hinode image intensities. Again, the values demonstrate a good agreement between the two images.
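
For completeness, the two metrics can be evaluated as in the sketch below, where the coefficients a and b of Eq. (11) are obtained from a linear least-squares fit of the reconstructed intensities against the Hinode intensities, as described above.

import numpy as np

def image_distances(i_dst, i_hin):
    # Image distance metrics of Eqs. (10) and (11); A is the image area
    # (number of pixels), and a, b come from a linear regression of the
    # DST intensities on the Hinode intensities.
    A = i_dst.size
    d2 = np.sum((i_dst - i_hin) ** 2) / A
    b, a = np.polyfit(i_hin.ravel(), i_dst.ravel(), 1)   # slope b, intercept a
    e2 = np.sum((i_dst - a - b * i_hin) ** 2) / A
    return d2, e2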

  
4 Conclusions

We have presented the KISIP software package, which is capable of reconstructing solar speckle interferometric data observed with an AO system. The program is optimized to run in multi-processor environments in order to make use of parallel computing capabilities. While the reconstruction algorithms are based on well-known principles, they had to be adapted for use with solar data, e.g., in the amplitude calibration and the estimation of the Fried parameter $r_{0}$. The program scales well with an increasing number of nodes and shows good convergence properties in every situation tested. We have shown that running this program on a high-performance computing cluster can lead to near real-time reconstruction performance.

The reconstruction accuracy has been demonstrated by comparison to data observed co-spatially and co-temporally with the Hinode satellite. We have presented evidence that not only can the fine structure in ground-based data be reconstructed well with this computer program, but also that high photometric accuracy can be achieved, even when the data were obtained with an AO system. This has been achieved by implementing new models for the calibration of the object's Fourier amplitudes. Satellite and ground-based data match very well.

One source for the deviation in contrast could be different amounts of stray light in Hinode/SOT and the DST. This is an important issue when comparing contrasts and can lead to significant biases in both intensity histograms and integrated power spectra, especially when comparing data observed with two different facilities. Due to the lack of information on the stray light characteristics, we have assumed that the effect is similar for both telescopes and have applied no correction. Stray light would lead to a constant offset in the power spectra. Accurate measurements of stray light are needed to compute an accurate contrast for comparison with hydrodynamic models.

The anisoplanatism introduced by the atmosphere and the AO system can make the phase reconstruction performance dependent on the position in the field of view and might be another source of differences between the images. This problem could be alleviated in the future by the use of multi-conjugate adaptive optics systems.

Acknowledgements
The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under cooperative agreement with the National Science Foundation. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (UK), NASA (USA), ESA, and NSC (Norway).

References

 

Copyright ESO 2008