Issue | A&A, Volume 693, January 2025
---|---
Article Number | A6
Number of page(s) | 10
Section | The Sun and the Heliosphere
DOI | https://doi.org/10.1051/0004-6361/202451850
Published online | 23 December 2024
Deep learning image burst stacking to reconstruct high-resolution ground-based solar observations
1 Institute of Physics, University of Graz, Universitätsplatz 5, 8010 Graz, Austria
2 High Altitude Observatory, National Center for Atmospheric Research, 3080 Center Green Dr, Boulder, USA
3 Kanzelhöhe Observatory for Solar and Environmental Research, University of Graz, Treffen am Ossiacher See, Graz, Austria
4 Instituto de Astrofísica de Canarias (IAC), Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain
5 Departamento de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Tenerife, Spain
6 Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, 37077 Göttingen, Germany
⋆ Corresponding author; christoph.schirninger@uni-graz.at
Received: 9 August 2024
Accepted: 10 November 2024
Context. Large-aperture ground-based solar telescopes allow the solar atmosphere to be resolved in unprecedented detail. However, ground-based observations are inherently limited by Earth's turbulent atmosphere, requiring image correction techniques.
Aims. Recent post-facto image reconstruction techniques use information from bursts of short-exposure images. Such approaches suffer from limited success under stronger atmospheric seeing conditions and from high computational demand. Real-time image reconstruction is of high importance for enabling automatic processing pipelines and accelerating scientific research. To overcome these limitations, we provide a deep learning approach that reconstructs an image burst into a single high-resolution, high-quality image in real time.
Methods. We present a novel deep learning tool for image burst reconstruction based on image stacking methods. An image burst of 100 short-exposure observations is reconstructed to obtain a single high-resolution image. Our approach builds on unpaired image-to-image translation. We trained our neural network on seeing-degraded image bursts and used speckle-reconstructed observations as a reference. With the unpaired image translation, we aim to achieve better generalization and increased robustness in cases of stronger image degradation.
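To make the setup concrete, the following is a minimal sketch of an unpaired burst-to-image translation step in PyTorch. It is not the authors' implementation: the network sizes, the patch size, the channel-wise stacking of the 100 frames, and the least-squares adversarial objective are illustrative assumptions, and any consistency or content losses used in practice are omitted.

```python
import torch
import torch.nn as nn

BURST_SIZE = 100   # short-exposure frames per burst (as stated in the abstract)
PATCH = 160        # illustrative patch size, not taken from the paper


def conv_block(c_in, c_out, stride=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class Generator(nn.Module):
    """Maps a burst (stacked as 100 channels) to a single reconstructed image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(BURST_SIZE, 64),
            conv_block(64, 128, stride=2),
            conv_block(128, 128),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(128, 64),
            nn.Conv2d(64, 1, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, burst):
        return self.net(burst)


class Discriminator(nn.Module):
    """PatchGAN-style critic separating reconstructions from speckle references."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 64, stride=2),
            conv_block(64, 128, stride=2),
            nn.Conv2d(128, 1, 3, padding=1),
        )

    def forward(self, img):
        return self.net(img)


G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss = nn.MSELoss()  # least-squares GAN objective (illustrative choice)


def train_step(burst, speckle_ref):
    """One unpaired step: burst and speckle_ref need not show the same scene."""
    # Discriminator update: real speckle references vs. generated reconstructions.
    fake = G(burst).detach()
    real_pred, fake_pred = D(speckle_ref), D(fake)
    d_loss = adv_loss(real_pred, torch.ones_like(real_pred)) + \
             adv_loss(fake_pred, torch.zeros_like(fake_pred))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce reconstructions the discriminator accepts as real.
    fake_pred = D(G(burst))
    g_loss = adv_loss(fake_pred, torch.ones_like(fake_pred))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()


# Example call with random tensors standing in for real observations:
burst = torch.randn(2, BURST_SIZE, PATCH, PATCH)
ref = torch.randn(2, 1, PATCH, PATCH)
train_step(burst, ref)
```

In a real unpaired setup, additional terms (e.g., cycle-consistency or content-preserving losses) would keep the reconstruction faithful to the input burst rather than merely resembling the distribution of speckle references.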
Results. We demonstrate that our deep learning model can effectively reconstruct an image burst in real time, with an average processing time of 0.5 s, while providing results similar to those of standard reconstruction methods. We evaluated the results on an independent test set consisting of high- and low-quality speckle reconstructions. Our method shows improved robustness in terms of perceptual quality, especially when speckle reconstruction methods produce artifacts. An evaluation with a varying number of images per burst demonstrates that our method makes efficient use of the combined image information and achieves the best reconstructions when provided with the full image burst.
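As an illustration of such a burst-size evaluation, the sketch below times reconstructions while feeding only a subset of frames to the generator from the previous snippet. The frame-repetition padding and the random stand-in tensors are assumptions for illustration only, not the evaluation protocol of the paper.

```python
import time
import torch


def evaluate_burst_size(model, full_burst, n_frames):
    """Reconstruct using only the first n_frames of a 100-frame burst."""
    subset = full_burst[:, :n_frames]
    # Repeat the selected frames to fill the fixed 100-channel input (assumption).
    reps = -(-full_burst.shape[1] // n_frames)  # ceiling division
    padded = subset.repeat(1, reps, 1, 1)[:, :full_burst.shape[1]]
    t0 = time.perf_counter()
    with torch.no_grad():
        recon = model(padded)
    return recon, time.perf_counter() - t0


full_burst = torch.randn(1, 100, 160, 160)  # random stand-in for a test burst
for n in (10, 25, 50, 100):
    recon, dt = evaluate_burst_size(G, full_burst, n)  # G from the sketch above
    print(f"{n:3d} frames -> {dt * 1000:.1f} ms per reconstruction")
```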
Key words: atmospheric effects / techniques: image processing / telescopes / Sun: atmosphere / Sun: photosphere
© The Authors 2024
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model. Subscribe to A&A to support open access publication.