A&A 488, 375-381 (2008)
F. Wöger1 - O. von der Lühe2 - K. Reardon1,3
1 - National Solar Observatory, PO Box 62, Sunspot, NM 88349, USA
2 - Kiepenheuer-Institut für Sonnenphysik, Schöneckstr. 6, 79104 Freiburg, Germany
3 - INAF - Osservatorio Astrofisico di Arcetri, 50125 Firenze, Italy
Received 2 April 2008 / Accepted 17 May 2008
Context. Adaptive optics systems are used on several advanced solar telescopes to enhance the spatial resolution of the recorded data. In all cases, the correction remains only partial, requiring post-facto image reconstruction techniques such as speckle interferometry to achieve consistent, near-diffraction limited resolution.
Aims. This study investigates the reconstruction properties of the Kiepenheuer-Institut Speckle Interferometry Package (KISIP) code, with focus on its phase reconstruction capabilities and photometric accuracy. In addition, we analyze its suitability for real-time reconstruction.
Methods. We evaluate the KISIP program with respect to its scalability and the convergence of the implemented algorithms, as well as their dependence on several parameters such as atmospheric conditions. To test the photometric accuracy of the final reconstruction, we compare simultaneous observations of the Sun made with the ground-based Dunn Solar Telescope and the space-based Hinode/SOT telescope.
Results. The analysis shows that near real-time image reconstruction with high photometric accuracy of ground-based solar observations is possible, even for observations in which an adaptive optics system was utilized to obtain the speckle data.
Key words: techniques: high angular resolution - techniques: image processing - techniques: interferometric - Sun: photosphere
The rapid development of computer technology, especially in the field of multi-core processors, makes a real-time application of reconstruction algorithms to speckle interferometric data feasible and warrants further development. The need for real-time - or at least near real-time - processing becomes clear when considering the high data rates at which speckle data are recorded: a single ``speckle burst'' typically consists of approximately 100 images observed at a frame rate of around 15 images per second or higher. Observing for several hours a day therefore produces several hundred gigabytes of unreduced data per day. Even though the cost per byte is continually decreasing, handling this volume (transfer and distribution) remains a costly and lengthy process. Reducing the speckle data at the telescope site is thus an important step towards increasing the telescope's efficiency, because it shrinks the data volume by a factor of around 100. This makes real-time data reduction attractive when post-processing techniques like speckle interferometry are used for image reconstruction. Some aspects of applying post-processing algorithms to speckle data in near real-time have already been explored by Denker et al. (2001).
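The data volume quoted above can be checked with a quick back-of-envelope calculation. The frame format (1024 × 1024 pixels at 16 bit) and the six-hour observing day are assumed, typical values; the burst length and frame rate are the figures from the text.

```python
# Back-of-envelope estimate of the daily speckle data volume.
# Frame format and observing time are assumptions; burst length and
# frame rate follow the figures quoted in the text.
frame_rate_hz = 15                       # images per second
bytes_per_frame = 1024 * 1024 * 2        # 16-bit detector (assumption)
observing_seconds = 6 * 3600             # six-hour observing day (assumption)
frames_per_burst = 100

raw_bytes = observing_seconds * frame_rate_hz * bytes_per_frame
raw_gb = raw_bytes / 1e9                 # several hundred GB of raw data
reduced_gb = raw_gb / frames_per_burst   # one reconstruction per burst
```

With these numbers, the raw volume comes out near 680 GB per day, shrinking to a few gigabytes once each 100-frame burst is replaced by a single reconstruction.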
In this article, we present the characteristics of the Kiepenheuer-Institut Speckle Interferometry Package (KISIP, Mikurda & von der Lühe 2006; von der Lühe 1993), which has been rewritten in the C programming language and enhanced for parallel processing. In Sect. 2, we give an overview of the implemented algorithms. Section 3 describes our study of the performance of the two implemented phase reconstruction algorithms, as well as the overall scalability of the code with an increasing number of computational nodes. In addition, the photometric accuracy of the final reconstruction is tested with both a ground- and space-based telescope co-temporally observing the same target on the Sun.
In general, the imaging process through atmosphere and telescope is best described in the Fourier domain.
Using the incoherent, space-invariant imaging equation, we get for a speckle burst consisting of N images Î_n(q) = Ô(q) · Ŝ_n(q), n = 1, …, N, where Î_n denotes the Fourier transform of the nth recorded frame, Ô that of the object, and Ŝ_n the instantaneous optical transfer function.
At an early stage of the reconstruction process, each recorded short-exposure frame is split into subframes that are roughly the size of the isoplanatic patch (the area in the field of view over which the optical transfer function can be considered constant) and that overlap by half of their size. This makes parallel treatment of the subframes straightforward, as they are sent to separate computation nodes using the Message Passing Interface (MPI Forum 1997). The KISIP package separates the image's Fourier phases from its amplitudes. The Fourier phases are treated with unity amplitude by both of the implemented phase reconstruction algorithms, which are described in further detail below. The Fourier amplitudes are reconstructed independently. In what follows, we give a brief overview of these well-known techniques that form the basis of KISIP.
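The tiling into half-overlapping subfields can be sketched as follows. KISIP itself is written in C and distributes the tiles via MPI; this Python fragment only illustrates the geometry, with the 512-pixel frame and 128-pixel subfield size chosen as examples.

```python
import numpy as np

def split_into_subfields(frame, size):
    """Split a 2-D frame into square subfields of the given size that
    overlap by half their width, before distribution to computation
    nodes.  Illustrative sketch only; KISIP implements this in C."""
    step = size // 2
    ny, nx = frame.shape
    return [frame[y:y + size, x:x + size]
            for y in range(0, ny - size + 1, step)
            for x in range(0, nx - size + 1, step)]

frame = np.zeros((512, 512))
tiles = split_into_subfields(frame, 128)   # 7 x 7 = 49 overlapping tiles
```

The half-width overlap ensures that every point of the field of view is covered by the interior of at least one subfield, so the reconstructed tiles can later be blended without seams.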
In one case, the package uses an extension of the Knox-Thompson (KT) algorithm (Knox & Thompson 1974) which is based on the original authors' idea to use average cross-spectra for the reconstruction of the object's Fourier phases.
The Knox-Thompson average cross-spectrum is defined as C(q, Δq) = ⟨Î_n(q) · Î_n*(q + Δq)⟩_n, where the average runs over the N frames of the burst and Δq is a small, fixed frequency offset.
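The averaging at the heart of the Knox-Thompson method can be sketched in a few lines. This Python/NumPy fragment is only illustrative (KISIP implements the algorithm in C), and the default offset dq = (0, 1) is an arbitrary example; the phase of the averaged cross-spectrum encodes the difference of the object's Fourier phases at q and q + Δq.

```python
import numpy as np

def kt_cross_spectrum(frames, dq=(0, 1)):
    """Average Knox-Thompson cross-spectrum of a speckle burst,
    C(q, dq) = < I_n(q) * conj(I_n(q + dq)) >_n,
    for one fixed, small frequency offset dq."""
    acc = 0.0
    for f in frames:
        F = np.fft.fft2(f)
        # negative roll so that F_shift[q] = F[q + dq]
        F_shift = np.roll(F, shift=(-dq[0], -dq[1]), axis=(0, 1))
        acc = acc + F * np.conj(F_shift)
    return acc / len(frames)
```

In a full reconstruction the phase differences obtained this way are integrated over the frequency plane, for two offset directions, to recover the object's Fourier phases.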
|Figure 1: Time used for one reconstruction versus the number of computation nodes used. Either 212 992 cross-spectrum (KT) or 221 320 bispectrum (IWLS) values were used for averaging.|
As the extended KT algorithm uses cross-spectra, i.e. products of two Fourier phase values (see Eq. (2)), it is computationally less expensive than a speckle masking algorithm, which involves the product of three phase values (Eq. (4)). It has been shown that the two implemented algorithms can be equivalent (Ayers et al. 1988). However, the Knox-Thompson algorithm is sensitive to alignment errors of the speckle images, whereas triple correlation algorithms do not suffer from this because of the phase closure relation inherent to the bispectrum. In bad seeing conditions, this leads to a higher reconstruction error for the extended Knox-Thompson algorithm and a better performance of the speckle masking algorithm.
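The shift insensitivity of the bispectrum can be verified numerically: displacing an image multiplies Î(q) by a linear phase factor that cancels in the triple product Î(q1) Î(q2) Î*(q1 + q2). In the following NumPy sketch the image, the shift, and the frequency pairs are arbitrary illustrative choices.

```python
import numpy as np

def bispectrum_value(F, q1, q2):
    """One bispectrum element B(q1, q2) = F(q1) F(q2) conj(F(q1 + q2))
    of a 2-D Fourier transform F; frequency indices wrap around."""
    q3 = ((q1[0] + q2[0]) % F.shape[0], (q1[1] + q2[1]) % F.shape[1])
    return F[q1] * F[q2] * np.conj(F[q3])

rng = np.random.default_rng(0)
img = rng.random((64, 64))
F = np.fft.fft2(img)
# the same frame, misaligned by a few pixels
F_shifted = np.fft.fft2(np.roll(img, (3, 5), axis=(0, 1)))

b1 = bispectrum_value(F, (1, 2), (4, 3))
b2 = bispectrum_value(F_shifted, (1, 2), (4, 3))
# b1 and b2 agree: the bispectrum is invariant under image shifts
```

The cross-spectrum of Eq. (2), by contrast, retains a residual linear phase under such a shift, which is why alignment errors degrade the Knox-Thompson reconstruction.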
|Figure 2: Convergence properties of the two implemented algorithms. Upper row: KT algorithm, lower row: IWLS algorithm. Columns from left to right: r0=5, 7, 10, 20 cm. Note that a panel of the IWLS algorithm corresponds to a subpanel of a KT panel. Shown is the residual phase variance per pixel in the Fourier domain.|
|Figure 3: Top: deconvolved image of the quiet Sun region near disk center observed with Hinode on April 18th, 2007, at 15:30:30 UT. Bottom: the same region observed with the Dunn Solar Telescope (DST); the data was post-processed using KISIP. Images are shown using the same intensity scale.|
|Figure 4: Close-up region of the region indicated in Fig. 3. Left: deconvolved Hinode image. Right: reconstructed DST image. Images are shown using the same intensity scale.|
In Fig. 1, we present results from tests run on a SuSE Enterprise Linux 10 cluster with 23 computation nodes plus one master node for job administration. This facility is installed at the Kiepenheuer-Institut für Sonnenphysik. Each of the 24 nodes is equipped with two Intel Xeon 5160 CPUs with 3.00 GHz clock speed and 4 GB of random access memory (RAM). Each CPU has two cores, leading to a total of 92 usable processing units, as the master node is usually not involved in computations. Each computer is connected to the master node via InfiniBand. As expected, the IWLS algorithm is slower than the KT algorithm because of its more involved computation. Additionally, Fig. 1 demonstrates that the code scales linearly with an increasing number of nodes: the computational time decreases with the inverse of the number of nodes. However, the reconstruction time saturates at around 22 s for both algorithms on this platform.
This saturation is an important issue that needs close attention when designing a platform intended to achieve (near) real-time reconstruction performance. It is caused by latency between the processors, whether due to restricted interconnect bandwidth between the computation nodes or to slow processor speed. Another cause is the overhead in the code that distributes the data to the computation nodes. An ideal system would therefore provide several multi-core processors connected by a fast system bus, which is the current trend in processor development and high-performance computing. Nevertheless, already today a system such as the one tested above would provide near real-time performance for a camera that reads out and stores frames at an effective rate of 5 frames per second.
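The observed scaling and saturation behavior is well described by a simple Amdahl-type model: a perfectly parallel part that shrinks as 1/N plus a fixed serial floor. The 22 s floor matches the value reported above; the parallel constant below is an illustrative assumption, not a measured value.

```python
def reconstruction_time(n_nodes, t_parallel=480.0, t_serial=22.0):
    """Wall-clock time for one reconstruction in a simple Amdahl-type
    model: a parallel fraction that scales as 1/N plus a constant
    serial overhead (data distribution, interconnect latency).
    t_parallel is an assumed constant; t_serial is the ~22 s
    saturation floor reported in the text."""
    return t_serial + t_parallel / n_nodes
```

Adding nodes thus pays off only until t_parallel / N drops below the serial floor; beyond that point, faster interconnects or reduced distribution overhead are the only routes to shorter reconstruction times.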
|Figure 5: Left: intensity histograms for the Hinode (red) and the DST image (black) as shown in Fig. 3. Right: azimuthally-integrated, spatial-power spectra of the Hinode (red) and the DST image (black).|
Figure 2 shows the results of a detailed analysis of the implemented algorithms, focusing on their convergence properties as a function of atmospheric conditions and of the number of evaluated cross- and bispectrum values. As can be seen, these parameters are important for the convergence of both the KT and the IWLS algorithm. In less severe atmospheric conditions, both algorithms converge faster, as the signal-to-noise ratio in the images increases with increasing values of r0. Generally, the KT algorithm seems to converge more slowly than the IWLS algorithm, which is likely a result of the additional information used in the averaging process of the bispectrum computation; the penalty is a longer computational time, as mentioned before. Nevertheless, more than 30 iterations (or even fewer in the case of the IWLS algorithm), combined with approximately 250 000 evaluated cross- or bispectrum values per subfield, do not lead to a significant further change in the reconstructed phase, which allows the computational time to be minimized by optimizing the reconstruction parameters. This fact is important with respect to real-time reconstruction of speckle data.
The DST speckle burst was reconstructed using the IWLS algorithm with a subfield size of 128 × 128 pixels, which corresponds to approximately 7″. It is important for the photometric accuracy that the subfield size be chosen to match the size of the isoplanatic patch, or smaller. However, subfields smaller than 128 × 128 pixels are not recommended, because numerical issues could arise during the estimation of the Fried parameter.
The Hinode image was deconvolved using a point spread function computed from the measured aberrations of the SOT main mirror (Suematsu et al. 2008). In addition, a Wiener filter was applied, with a noise estimate derived from the power at frequencies higher than the theoretical diffraction cutoff frequency. The deconvolution is necessary to make the information content of the Hinode image comparable to that of the speckle interferometric reconstruction. It is successful up to 80% of the diffraction limit of the telescope. Beyond those spatial frequencies, the employed Wiener filter cuts off the signal because of the poor signal-to-noise ratio.
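A Wiener-filtered deconvolution of this kind can be sketched as follows. This is a generic illustration, not the actual Hinode/SOT pipeline: the PSF is assumed to be sampled on the same grid as the image, and the constant noise-power term stands in for the estimate derived from the power above the diffraction cutoff.

```python
import numpy as np

def wiener_deconvolve(img, psf, noise_power):
    """Deconvolve an image with a known PSF via a Wiener filter,
    W = conj(H) / (|H|^2 + noise_power), where H is the optical
    transfer function and noise_power a constant noise-to-signal
    estimate.  The PSF array must be centered and have the same
    shape as the image (illustrative sketch)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))      # center PSF at the origin
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))
```

The constant in the denominator regularizes the inversion: where |H|² falls below the noise power, the filter rolls the amplification off instead of dividing by a vanishing transfer function, which is what suppresses the signal beyond roughly 80% of the diffraction limit.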
After reduction and alignment of the DST data to that of Hinode, the overlap between the images is 912 × 912 pixels, corresponding to a field of view of almost 50″. Figure 3 demonstrates that the speckle algorithm is capable of reconstructing the same structures seen by a telescope that is not hampered by atmospheric turbulence. The minor differences in the fine structure of the images arise mainly from the fact that the speckle burst of the DST spans approximately 20 s, as opposed to the single exposure of the Hinode satellite. The data are thus only approximately simultaneous, and some differences can be attributed to the evolution of the granulation. However, as the spatial correlation time of the solar granulation is approximately 5 min, this effect is small.
The photometric differences in the images are evaluated in several ways.
We calculate the contrast of an image I as the standard deviation of its intensity divided by its mean intensity.
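The usual rms granulation contrast, expressed in percent, reads in code:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast in percent: the standard deviation of the image
    intensity divided by its mean.  This is the standard definition
    of granulation contrast; it assumes a flat-fielded intensity map."""
    img = np.asarray(img, dtype=float)
    return 100.0 * img.std() / img.mean()
```

Because both images in Fig. 3 are displayed and compared on the same intensity scale, this single number gives a compact measure of how much granular structure each telescope recovers.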
Further measures of the similarity of two images are the ``image distance'' metrics defined in Mikurda & von der Lühe (2006), restated here for convenience.
The reconstruction accuracy has been demonstrated by comparison with data observed co-spatially and co-temporally by the Hinode satellite. We have presented evidence not only that the fine structure in ground-based data can be reconstructed well with this computer program, but also that high photometric accuracy can be achieved, even when the data were obtained with an AO system. This has been accomplished by implementing new models for the calibration of the object's Fourier amplitudes. Satellite and ground-based data match very well.
One source of the deviation in contrast could be differing amounts of stray light in Hinode/SOT and the DST. This is an important issue when comparing contrasts and can lead to significant biases in both intensity histograms and integrated power spectra, especially when comparing data observed at two different facilities. Because of the lack of information on the stray light characteristics, we have assumed that the effect is similar for both telescopes and have applied no correction. Stray light would lead to a constant offset in the power spectra. Accurate measurements of the stray light are needed to compute an accurate contrast for comparison with hydrodynamic models.
The anisoplanatism introduced by the atmosphere and the AO system can make the phase reconstruction performance dependent on the position in the field of view, and might be another source of differences between the images. This problem could be alleviated in the future by the use of multi-conjugate adaptive optics systems.
The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under cooperative agreement with the National Science Foundation. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (UK), NASA (USA), ESA, and NSC (Norway).