A&A, Volume 697, May 2025
Article Number: A136
Number of pages: 12
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/202452308
Published online: 16 May 2025
Joint multiband deconvolution for Euclid and Vera C. Rubin images
1 Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland
2 GEPI, Observatoire de Paris, Université PSL, CNRS, 5 Place Jules Janssen, 92190 Meudon, France
3 ICC-UB Institut de Ciències del Cosmos, Universitat de Barcelona, Martí Franquès 1, 08028 Barcelona, Spain
4 ICREA, Pg. Lluís Companys 23, 08010 Barcelona, Spain
5 Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191 Gif-sur-Yvette, France
6 Institutes of Computer Science and Astrophysics, Foundation for Research and Technology Hellas (FORTH), Greece
★ Corresponding author; utsav.akhaury@epfl.ch
Received: 19 September 2024
Accepted: 24 March 2025
With the advent of surveys like Euclid and Vera C. Rubin, astrophysicists will have access to both deep, high-resolution images and multiband images. However, these two types are not simultaneously available in any single dataset. It is therefore vital to devise image deconvolution algorithms that exploit the best of both worlds and that can jointly analyze datasets spanning a range of resolutions and wavelengths. In this work we introduce a novel multiband deconvolution technique aimed at improving the resolution of ground-based astronomical images by leveraging higher-resolution space-based observations. The method capitalizes on the fortunate fact that the Rubin r, i, and z bands lie within the Euclid VIS band. The algorithm jointly deconvolves all the data to convert the r-, i-, and z-band Rubin images to the resolution of Euclid by leveraging the correlations between the different bands. We also investigate the performance of deep-learning-based denoising with DRUNet to further improve the results. We illustrate the effectiveness of our method in terms of resolution and morphology recovery, flux preservation, and generalization to different noise levels. This approach extends beyond the specific Euclid-Rubin combination, offering a versatile solution to improving the resolution of ground-based images in multiple photometric bands by jointly using any space-based images with overlapping filters.
Key words: methods: data analysis / methods: numerical / methods: statistical / techniques: image processing
© The Authors 2025
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
High spatial resolution, a high signal-to-noise ratio (S/N), and broad wavelength coverage are all essential for most observations in astrophysics. However, it is difficult, or even impossible, to have all three simultaneously. Space telescopes, although free of atmospheric turbulence, are limited in size. Ground-based telescopes can deliver high-S/N data but are affected by atmospheric turbulence and have a higher sky background. In addition, blurring by the instrumental or atmospheric point spread function (PSF) differs for each type of data and varies from band to band. To capitalize on the strengths of all types of telescopes and data, it is crucial to develop robust deconvolution techniques that can remove blurring by the PSF and optimize the S/N of the final reconstruction, by combining all observations and accounting for the different bands and PSFs.
Due to the presence of noise, image deconvolution is a challenging ill-posed inverse problem that requires regularization for a well-defined solution. Early approaches in the field acknowledged this issue, proposing solutions such as minimizing the Tikhonov function (Tikhonov & Arsenin 1977) or maximizing the entropy of the solution (Skilling & Bryan 1984). Bayesian methods also emerged, including the Richardson-Lucy algorithm applied to early Hubble Space Telescope (HST) data (Richardson 1972; Lucy 1974). A novel approach proposed by Magain et al. (1998) separated point sources from extended ones and used a narrow PSF for deconvolution to achieve an improved resolution suitable for the chosen pixel sampling. This approach was improved with wavelet regularization for the extended channel (Cantale et al. 2016) and further refined by Michalewicz et al. (2023; STARRED), who employed Starlets, an isotropic wavelet basis (Starck et al. 2015), to regularize the solution. There have also been efforts to jointly deconvolve multiple astronomical observations of the same sky region (Donath et al. 2023). Furthermore, Ingaramo et al. (2014) explored the combination of multiple sources by demonstrating the application of Richardson–Lucy deconvolution to merge high-resolution, high-noise images with low-resolution, low-noise images.
A notable advancement in astronomical deconvolution was the use of deep learning. Once trained, deep-learning-based methods offer significant computational efficiency compared to traditional approaches. U-Nets (Ronneberger et al. 2015) have gained popularity for their nonlinear processing capabilities and multi-scale architecture. Expanding on U-Nets, Sureau et al. (2020) introduced the Tikhonet method for deconvolving optical galaxy images, demonstrating its superior performance over sparse regularization methods in terms of the mean squared error and a shape criterion that assesses galaxy ellipticity. Nammour et al. (2022) improved Tikhonet by incorporating a shape constraint into the loss function. Another powerful architecture, Learnlet (Ramzi et al. 2023), combines the strengths of wavelets and U-Nets while offering a fully interpretable neural network with minimal hallucination. In our previous work (Akhaury et al. 2022, 2024), we proposed a two-step deconvolution framework and investigated the performance of convolutional neural network (CNN) and transformer-based denoisers. We concluded that a Swin-transformer-based U-Net (SUNet; Fan et al. 2022) outperforms a CNN-based U-Net (Ronneberger et al. 2015) in terms of normalized mean squared error (NMSE) and structural similarity index measure.
While deconvolution is primarily used to reconstruct galaxy images at high spatial resolution in each photometric band independently, there are scenarios, particularly at low S/Ns, where joint multiband deconvolution can enhance the detection and characterization of systems. One such potential application is the joint multiband deconvolution of Euclid and Vera C. Rubin images. The Rubin Observatory is set to deliver a dataset of 500 petabytes across multiple optical bands, while Euclid will observe images spanning the optical and infrared spectrum. Interestingly, the Euclid VIS band (central wavelength ≈ 715 nm) overlaps with three of the Rubin filters: r, i, and z. As a space-based satellite, Euclid will produce images with sharper details due to its narrower PSF compared to Rubin. A related study by Joseph et al. (2021) jointly models simulated observations from both Euclid and Rubin.
In this work, we present a novel multiband deconvolution technique designed to enhance the resolution of ground-based astronomical images by leveraging higher-resolution space-based observations. Our approach, which focuses on the joint deconvolution of Rubin and Euclid images, effectively exploits the overlapping spectral coverage of the Rubin r, i, and z bands with the Euclid VIS band. By using the Euclid VIS-band image as an additional constraining term, our technique ensures that the deconvolved Rubin images retain high spatial resolution and accurate photometric measurements. The integration of deep-learning-based denoising further enhances the quality of the deconvolved outputs, reducing background noise without altering the main structures of the galaxies. We generated realistic Euclid and Rubin simulations from HST cutouts of varying magnitudes extracted from the GOODS-N and GOODS-S surveys (Retzlaff et al. 2010). The simulated Euclid-like VIS-band PSF was obtained from Liaudat et al. (2022), and the simulated Rubin-like r-, i-, and z-band PSFs from Abolfathi et al. (2021). Our method is effective in terms of resolution recovery, flux preservation, and generalization across different noise levels. Through our joint deconvolution approach, we achieve resolution recovery in simulated Rubin images close to that of HST, a feat nearly impossible with independent deconvolutions of each photometric band. The potential applications of our method go beyond the Euclid-Rubin pair, providing a flexible solution to enhancing the resolution of ground-based images in multiple photometric bands with any overlapping space-based filter band. This versatility is especially important as large-scale astronomical surveys gather increasing amounts of data, creating a need for effective and reliable deconvolution techniques.
In Sect. 2, we describe the deconvolution problem and introduce our proposed solution. The methodology for generating our dataset is detailed in Sect. 3. We then present the outcomes of our deconvolution algorithm in Sect. 4. Finally, Sect. 5 presents our conclusions. To support reproducible research, the codes utilized in this article are publicly accessible.
2 The deconvolution problem
2.1 The forward model
For the three Rubin filters, let y_r, y_i, and y_z ∈ ℝ^{n×n} be the corresponding observed images and h_r, h_i, and h_z ∈ ℝ^{n×n} be the PSFs. If x_r, x_i, and x_z ∈ ℝ^{n×n} denote the corresponding target images, ∗ denotes the convolution operation, and η_r, η_i, and η_z ∈ ℝ^{n×n} denote additive noise, the observed Rubin images can then be modeled as

y_r = h_r ∗ x_r + η_r, (1)

y_i = h_i ∗ x_i + η_i, (2)

y_z = h_z ∗ x_z + η_z. (3)

As for Euclid, let y_euc ∈ ℝ^{n×n} be the observed image, x_euc ∈ ℝ^{n×n} be the target image, and h_euc ∈ ℝ^{n×n} be the PSF. If η_euc ∈ ℝ^{n×n} denotes additive noise and α_r, α_i, and α_z denote the corresponding fractional flux contributions from each Rubin filter, the target and the observed images can be modeled as

x_euc = α_r x_r + α_i x_i + α_z x_z, (4)

y_euc = h_euc ∗ x_euc + η_euc. (5)
The motivation behind taking the weighted sum of the Rubin images to model the Euclid image can be seen in Fig. 1, which shows the overlap between the Rubin and Euclid filters.
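To make the forward model concrete, the sketch below builds noisy Rubin- and Euclid-like observations from three hypothetical true images following Eqs. (1)–(5). The image size, PSFs, and noise levels are placeholders rather than the values used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
n = 128                                                  # image side length (placeholder)

# Placeholder true images x_r, x_i, x_z and PSFs h_r, h_i, h_z, h_euc.
x = {b: rng.random((n, n)) for b in "riz"}
psf = {b: np.full((7, 7), 1 / 49) for b in ["r", "i", "z", "euc"]}

# Fractional flux contributions of the r, i, and z bands to the Euclid VIS band.
alpha = {"r": 0.3785, "i": 0.3468, "z": 0.2746}

def observe(image, h, sigma):
    """Blur with the PSF and add white Gaussian noise (Eqs. (1)-(3) and (5))."""
    return fftconvolve(image, h, mode="same") + sigma * rng.standard_normal(image.shape)

# Rubin observations, one per band (Eqs. (1)-(3)).
y = {b: observe(x[b], psf[b], sigma=0.01) for b in "riz"}

# Euclid VIS observation: alpha-weighted sum of the true bands (Eq. (4)), then Eq. (5).
x_euc = alpha["r"] * x["r"] + alpha["i"] * x["i"] + alpha["z"] * x["z"]
y_euc = observe(x_euc, psf["euc"], sigma=0.005)
```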
Fig. 1 Filter curves for Euclid and Rubin. The relative filter transmission is shown as a function of the wavelength. The Euclid VIS band overlaps with three of the Rubin filters: r, i, and z.
2.2 The proposed solution
We formulate the following loss functions and minimize them using gradient descent to recover the optimal solutions:

L_r(x_r) = ‖(y_r − h_r ∗ x_r) / σ_r‖₂² + λ_r ‖(y_euc − h_euc ∗ (α_r x_r + α_i x_i + α_z x_z)) / σ_euc‖₂², (6)

L_i(x_i) = ‖(y_i − h_i ∗ x_i) / σ_i‖₂² + λ_i ‖(y_euc − h_euc ∗ (α_r x_r + α_i x_i + α_z x_z)) / σ_euc‖₂², (7)

L_z(x_z) = ‖(y_z − h_z ∗ x_z) / σ_z‖₂² + λ_z ‖(y_euc − h_euc ∗ (α_r x_r + α_i x_i + α_z x_z)) / σ_euc‖₂², (8)

(9) (definition of the normalization constant K used for the hyperparameter tuning in Sect. 2.3)
The first terms in Eqs. (6)–(8) represent the data fidelity terms for each respective band, with σr, σi, and σz being the corresponding noise maps. The second terms are the constraining terms that enforce the condition that the sum of the Rubin r-, i-, and z-band images equals the Euclid VIS-band image, as expected from Fig. 1 after flux calibration. Within these constraining terms, the individual images xr, xi, and xz are multiplied by their respective fractional flux contributions αr, αi, and αz, which represent the fractional area overlaps between their corresponding filter curves. The values of αr, αi, and αz are obtained by integrating the area under the curves in Fig. 1 and normalizing them to sum up to one. The resulting values are αr = 0.3785, αi = 0.3468, and αz = 0.2746. The denominator, σeuc, denotes the Euclid image noise map. The choice of the multiplicative hyperparameters λr, λi, and λz is described in Sect. 2.3.
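As an illustration of the structure just described, the sketch below evaluates the r-band loss as a noise-weighted data-fidelity term plus the λ-weighted Euclid constraining term. It assumes the quadratic form discussed above, with the noise maps entering as pixel-wise divisions; the exact normalization used in the paper may differ.

```python
import numpy as np
from scipy.signal import fftconvolve

ALPHA = (0.3785, 0.3468, 0.2746)        # fractional flux contributions (r, i, z)

def conv(img, psf):
    return fftconvolve(img, psf, mode="same")

def loss_r(x_r, x_i, x_z, y_r, y_euc, h_r, h_euc, sigma_r, sigma_euc, lam_r):
    """r-band loss: data fidelity + lam_r * Euclid constraining term (assumed form)."""
    a_r, a_i, a_z = ALPHA
    fidelity = np.sum(((y_r - conv(x_r, h_r)) / sigma_r) ** 2)
    x_euc = a_r * x_r + a_i * x_i + a_z * x_z
    constraint = np.sum(((y_euc - conv(x_euc, h_euc)) / sigma_euc) ** 2)
    return fidelity + lam_r * constraint
```

The i- and z-band losses are identical up to swapping the band-specific quantities and hyperparameters.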
2.3 Hyperparameter tuning
The hyperparameters λr, λi, and λz are determined by varying the ratios between the second and first terms in Eqs. (6)–(8). These ratios indicate the contribution coming from the constraining term. We varied the ratios from 0 (no contribution from the constraining term) to 1 (equal contribution from the constraining term) in steps of 0.01. In this experiment, a ratio of 0.3 yields the lowest mean squared error for all three photometric bands. Subsequently, the values of λr, λi, and λz are computed from these ratios by dividing them by K (Eq. (9)).
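The sweep itself is straightforward to script. The helper below maps a target constraint-to-fidelity ratio to a λ value by normalizing with the two unweighted terms evaluated at the initial guess; this normalization is a stand-in for the constant K of Eq. (9), so treat it as an assumption rather than the paper's exact rule.

```python
import numpy as np

ratios = np.arange(0.0, 1.0 + 1e-9, 0.01)      # target ratios from 0 to 1 in steps of 0.01

def lambda_from_ratio(target_ratio, fidelity_at_init, constraint_at_init):
    """Pick lambda so that lambda * constraint / fidelity equals the target ratio
    at the initial guess (assumed normalization, standing in for Eq. (9))."""
    K = constraint_at_init / fidelity_at_init
    return target_ratio / K

# Example: if the unweighted constraining term is 4x the fidelity term at the
# first guess, the optimal ratio of 0.3 translates to lambda = 0.075.
lam = lambda_from_ratio(0.3, fidelity_at_init=1.0, constraint_at_init=4.0)
```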
2.4 Optimization
The aim is to find the optimal solutions that minimize the individual loss functions,

x̂_{r,i,z} = arg min_{x_{r,i,z}} L_{r,i,z}(x_{r,i,z}),

which is done in an alternating and iterative manner using gradient descent:

x_{r,i,z}^[k+1] = x_{r,i,z}^[k] − β_{r,i,z} ∇L_{r,i,z}(x_{r,i,z}^[k]), (10)

where x^[k] denotes the variable x at the kth iteration, and βr, βi, and βz are the step sizes, chosen such that convergence is guaranteed (described in more detail in Sect. 2.5). While computing Eq. (10) for one band, it is assumed that the other two bands are known and remain constant. The gradients of the loss functions (6)–(8) with respect to xr, xi, and xz are given by Eqs. (11)–(14).
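A minimal sketch of the alternating update of Eq. (10) for the r band, assuming the quadratic loss form of Sect. 2.2; the gradient then amounts to correlating the noise-weighted residuals with the PSFs (the adjoint of the convolution). Function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def conv(img, psf):
    return fftconvolve(img, psf, mode="same")

def adj(img, psf):
    """Adjoint of the blur: convolution with the flipped PSF (i.e., correlation)."""
    return fftconvolve(img, psf[::-1, ::-1], mode="same")

def grad_r(x, y, psf, y_euc, h_euc, sigma, sigma_euc, alpha, lam_r):
    """Gradient of the (assumed) r-band loss of Eq. (6) with respect to x['r']."""
    g_fid = -2.0 * adj((y["r"] - conv(x["r"], psf["r"])) / sigma["r"] ** 2, psf["r"])
    x_euc = sum(alpha[b] * x[b] for b in "riz")
    g_con = -2.0 * alpha["r"] * adj((y_euc - conv(x_euc, h_euc)) / sigma_euc ** 2, h_euc)
    return g_fid + lam_r * g_con

def update_r(x, beta_r, **kwargs):
    """One gradient-descent step (Eq. (10)) for the r band, with i and z held fixed."""
    return x["r"] - beta_r * grad_r(x, **kwargs)
```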
2.5 Gradient descent step size
Suppose a function f : ℝ^{n×n} → ℝ that is convex and differentiable. Its gradient is Lipschitz continuous if there exists some constant C such that

‖∇f(x_1) − ∇f(x_2)‖_2 ≤ C ‖x_1 − x_2‖_2 for all x_1, x_2 ∈ ℝ^{n×n}.

Since the loss functions (6)–(8) are convex and differentiable, one can find Lipschitz constants C_{r,i,z} such that

‖∇L_{r,i,z}(x_1) − ∇L_{r,i,z}(x_2)‖_2 ≤ C_{r,i,z} ‖x_1 − x_2‖_2. (15)

Once these constants are found, the constraints on the individual step sizes, β_{r,i,z}, that ensure convergence are

0 < β_{r,i,z} ≤ 1 / C_{r,i,z}. (16)
From Eqs. (15) and (16), it is important to note that the step size also depends on the Rubin and Euclid PSFs (see details in Sects. 3.2 and 3.3) and the values of λ{r,i,z} (described in Sect. 2.3) and α{r,i,z} (shown in Sect. 2.2).
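For a quadratic loss of the assumed form, a usable Lipschitz bound can be assembled from the squared spectral norms of the noise-weighted blur operators, which are read off from the PSF transfer functions. The sketch below follows that standard recipe (constant noise levels assumed) and is not necessarily the paper's exact expression for Eqs. (15) and (16).

```python
import numpy as np

def psf_spec_norm_sq(psf, shape):
    """Largest squared magnitude of the PSF transfer function on the image grid,
    an upper bound on the squared spectral norm of the (circular) blur operator."""
    return np.max(np.abs(np.fft.fft2(psf, s=shape)) ** 2)

def step_size_r(h_r, h_euc, sigma_r, sigma_euc, alpha_r, lam_r, shape=(128, 128)):
    """beta_r <= 1 / C_r, with C_r bounding the Lipschitz constant of the assumed
    r-band gradient (data-fidelity part + constraining part)."""
    C_fid = 2.0 * psf_spec_norm_sq(h_r, shape) / sigma_r ** 2
    C_con = 2.0 * lam_r * alpha_r ** 2 * psf_spec_norm_sq(h_euc, shape) / sigma_euc ** 2
    return 1.0 / (C_fid + C_con)
```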
3 Dataset generation
3.1 Ground truth images
We extracted HST cutout windows of dimensions 128 × 128 pixels from GOODS-N and GOODS-S (Retzlaff et al. 2010) in the F606W, F775W, and F850LP bands by centering them at the centroid of the object. These HST bands were selected because their central wavelengths align with those of the Rubin r, i, and z bands, and these HST images were subsequently used to simulate the Rubin images, as explained in Sect. 3.2. The mosaicked HST ACS (Advanced Camera for Surveys) images along with the catalog can be found at this link1. These HST cutouts are at a pixel scale of 0.05″. We aimed to perform the experiments on galaxies with sizes large enough to effectively assess the impact of deconvolution on our ability to resolve their structural and morphological features, such as arms, bars, and clumps. To ensure the selection of large galaxies and exclude point-sized objects, we applied the following filtering criteria to the F775W band catalog:
18 < MAG_AUTO < 23 (AB magnitude in SExtractor “AUTO” aperture)
Flux_Radius80 > 10 (80% enclosed flux radius in pixels)
FWHM > 10 (full width at half maximum in pixels)
We then visually inspected and selected 92 objects that exhibited extended and complex galaxy structures. The histogram of the HST F775W band magnitude for all galaxies in our dataset is shown in Fig. 2. It is important to highlight that, according to our simulations, the HST F775W band corresponds to the Rubin i band, and we refer to it as the Rubin i band throughout the text.
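The selection reduces to simple catalog cuts before the visual inspection. A sketch with astropy, assuming a SExtractor-style catalog file and column names (the actual file and column labels in the GOODS catalog may differ):

```python
from astropy.table import Table

cat = Table.read("goods_f775w_catalog.fits")          # placeholder file name

mask = (
    (cat["MAG_AUTO"] > 18) & (cat["MAG_AUTO"] < 23)   # AB magnitude range
    & (cat["FLUX_RADIUS_80"] > 10)                    # 80% enclosed-flux radius [pix] (assumed column name)
    & (cat["FWHM_IMAGE"] > 10)                        # FWHM [pix] (assumed column name)
)
candidates = cat[mask]    # extended objects kept for visual inspection (92 selected in the paper)
```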
3.2 Vera C. Rubin images
The simulated Rubin-like PSFs were obtained from the second data challenge (DC2) of the Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). The atmospheric and optical effects, as well as sensor-induced electrostatic effects, are simulated using physically motivated models, with a final adjustment to the PSF sizes to match the expected data from the Rubin Observatory, as described in detail in Abolfathi et al. (2021).
To generate the Rubin simulated images, we first convolved the HST images in the F606W, F775W, and F850LP bands with the Rubin r-, i-, and z-band PSFs, respectively, such that the simulated images are at the expected Rubin resolution, with a pixel scale of 0.2″. Subsequently, we added white Gaussian noise such that our Rubin-simulated images have an S/N ranging between 12 and 28, with a median around 20. Based on the survey parameters outlined in Ivezić et al. (2019), this S/N range corresponds to the initial few visits of the telescope. This indicates that our method could be effectively applied as soon as the first images start arriving.
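The Rubin-like simulation of a single band can be sketched as follows: blur the HST cutout with the band PSF, rebin from the 0.05″ HST grid to the 0.2″ Rubin grid (a factor of 4), and add white Gaussian noise scaled to a target S/N. The block-average rebinning and the peak-based S/N definition are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def rebin(img, factor):
    """Block-average rebinning, e.g. 0.05 arcsec HST pixels -> 0.2 arcsec Rubin pixels."""
    n = img.shape[0] // factor * factor
    return img[:n, :n].reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

def simulate_rubin_band(hst_img, psf_band, target_snr, rng=np.random.default_rng(0)):
    blurred = fftconvolve(hst_img, psf_band, mode="same")   # HST cutout x Rubin band PSF
    low_res = rebin(blurred, factor=4)                      # 0.05" -> 0.2" pixel scale
    sigma = low_res.max() / target_snr                      # illustrative S/N definition
    noisy = low_res + sigma * rng.standard_normal(low_res.shape)
    return noisy, sigma
```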
Fig. 2 Histogram of the HST F775W-band magnitude for all galaxies in our dataset after filtering. Note that the HST F775W band matches with the Rubin i band.
3.3 Euclid images
The simulated Euclid-like VIS-band PSF was obtained using the WaveDiff model proposed by Liaudat et al. (2022), which changes the data-driven PSF modeling space from the pixels to the wavefront by adding a differentiable optical forward model in the modeling framework. WaveDiff outputs an approximation of the true Euclid PSF, which was derived before the actual launch of the satellite.
Next, we calculated the fractional flux contributions αr, αi, and αz by integrating the area under the curves in Fig. 1 and normalizing them to sum to 1. The resulting values, as also mentioned in Sect. 2.2, are αr = 0.3785, αi = 0.3468, and αz = 0.2746. To generate the Euclid simulated images, we multiplied αr, αi, and αz by the HST images in the F606W, F775W, and F850LP bands, respectively. The result is then convolved with the simulated Euclid PSF such that the simulated images are at the expected Euclid resolution with a pixel scale of 0.1″. Finally, white Gaussian noise is added such that the Euclid-simulated images have an S/N ranging between 20 and 45, with a median around 35. Based on the calculations presented by Euclid Collaboration: Scaramella et al. (2022), who assessed the S/N statistics for Euclid, our simulations are conservative, implying that our method would perform well when applied to real images with higher S/Ns.
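The Euclid-like simulation follows the same pattern, with the three HST bands first combined using the α weights and the result brought to the 0.1″ Euclid pixel scale (a factor of 2 from the HST grid). The sketch below uses the same illustrative conventions as the Rubin sketch above.

```python
import numpy as np
from scipy.signal import fftconvolve

ALPHA = {"F606W": 0.3785, "F775W": 0.3468, "F850LP": 0.2746}   # r-, i-, z-like weights

def rebin(img, factor):
    n = img.shape[0] // factor * factor
    return img[:n, :n].reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

def simulate_euclid(hst_bands, psf_euclid, target_snr, rng=np.random.default_rng(1)):
    """hst_bands: dict of HST cutouts keyed by filter name (F606W, F775W, F850LP)."""
    combined = sum(ALPHA[band] * img for band, img in hst_bands.items())
    blurred = fftconvolve(combined, psf_euclid, mode="same")
    low_res = rebin(blurred, factor=2)                  # 0.05" -> 0.1" pixel scale
    sigma = low_res.max() / target_snr                  # illustrative S/N definition
    return low_res + sigma * rng.standard_normal(low_res.shape), sigma
```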
4 Results
The algorithm simultaneously processed the noisy simulations from the three Rubin bands and the Euclid VIS band, along with their respective PSFs. These noisy images served as initializations or first guesses for the algorithm. Subsequently, the algorithm iteratively minimized the loss functions (6)–(8), as detailed in Sect. 2.4. The algorithm was run for 200 iterations, with convergence typically observed within 50–100 iterations for all images in our dataset. Figure 3 shows the convergence plot of the loss function for the deconvolved output in Fig. 7a.
Fig. 3 Loss function for the galaxy shown in Fig. 7a. Convergence is reached at around 100 iterations, when the relative change in loss value is <10−3 and the curve is flat.
Fig. 4 Unit test to verify that there is no leakage of flux from one channel to another. The recovered Gaussians remain at their original centers.
4.1 Flux leakage test
As a validation to ensure no flux leakage between channels during the joint deconvolution of Rubin and Euclid images, we conducted a unit test. We assumed three distinct Gaussians placed separately in the Rubin channels, with the Euclid simulation being a weighted sum of these three images, resulting in three disjoint Gaussians. Post-deconvolution, the Gaussians remained intact without any structure extending beyond their boundaries. This confirms that the structure present in the deconvolved image within each Rubin band is independent of the structures in other bands and each image accurately retains only the information relevant to its specific band. Figure 4 provides a visual demonstration of these findings.
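The test inputs are easy to reproduce: one isolated Gaussian per Rubin band at a distinct position, with the Euclid image formed as their α-weighted sum. After running the joint deconvolution on blurred, noisy versions of these images, each recovered band should peak only at its own center. Positions and widths below are arbitrary.

```python
import numpy as np

def gaussian(n, cx, cy, sigma=3.0):
    """Isolated 2D Gaussian blob centered at (cx, cy) on an n x n grid."""
    yy, xx = np.mgrid[:n, :n]
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))

n = 128
alpha = {"r": 0.3785, "i": 0.3468, "z": 0.2746}
centers = {"r": (32, 32), "i": (64, 96), "z": (96, 48)}    # disjoint positions

x_true = {b: gaussian(n, *centers[b]) for b in "riz"}      # one blob per Rubin band
x_euc = sum(alpha[b] * x_true[b] for b in "riz")           # Euclid: three separate blobs
```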
Moreover, to further validate that our method is effective for objects with non-flat spectral energy distributions (SEDs), we analyzed the transfer of information across different bands for such objects. The results are presented in Appendix A.
4.2 Deconvolved outputs
We present two examples of deconvolved images in Fig. 7, illustrating the algorithm’s capability to recover features that were lost in the original Rubin simulations. Visually, the deconvolved outputs seem to capture the variations between different bands. Qualitatively, these outputs exhibit high quality with minimal background noise and result in clean residuals. However, in Sect. 4.3 we demonstrate that employing a deep-learning-based denoiser further enhances these results. Table 1 presents the NMSE computed with respect to the ground-truth HST image for the pre-denoised and post-denoised images.
Fig. 5 DRUNet architecture, which incorporates an additional noise level map as input and integrates U-Net (Ronneberger et al. 2015) with ResNet (He et al. 2016). “SConv” stands for strided convolution, and “TConv” stands for transposed convolution. Image credits: Zhang et al. (2022).
Fig. 6 Fractional error in the output flux as a function of the i-band magnitude (which is chosen in order to have the same scale on the x-axis). The dots correspond to the individual galaxies, and the gray line is the best-fit line after binning the magnitude values.
Table 1. NMSE with respect to HST images.
4.3 Deep-learning-based denoising
After obtaining the deconvolved outputs, we feed them to DRUNet, a neural network proposed by Zhang et al. (2022) that combines U-Net (Ronneberger et al. 2015) and ResNet (He et al. 2016). U-Net is renowned for its efficiency in image-to-image translation, while ResNet excels in increasing modeling capacity through stacked residual blocks. Inspired by FFDNet (Zhang et al. 2018a), which incorporates a noise level map as input, DRUNet enhances U-Net by integrating residual blocks to improve prior denoising modeling. Similar approaches that combine U-Net and ResNet can be found in other studies (Zhang et al. 2018b; Venkatesh et al. 2018). The backbone of DRUNet is a U-Net architecture with four scales, and the schematic proposed by Zhang et al. (2022) is depicted in Fig. 5.
We chose DRUNet because it is a non-blind denoiser, meaning it takes the noise map as input. This ensures that any unknown noise can be estimated and given as input for denoising. We estimate the noise level map in our deconvolved images using the scikit-image package (van der Walt et al. 2014) by calculating the average noise level within four 16 × 16 pixel squares placed at each corner of the image. This approach ensures that only background noise is measured, avoiding any contribution from the signal. This map is then fed to the pre-trained DRUNet, along with the deconvolved image. The denoised outputs are illustrated in Fig. 8. It is observed that the denoiser exclusively eliminates noise from the background without affecting the main structure of the galaxies. Although the enhancement in image quality is marginal (since the original deconvolved image is already of high quality), the NMSE computed with respect to the ground-truth HST image decreases, as shown in Table 1. The most notable improvement is observed in the z-band images. Even though DRUNet was trained on a combination of images from the BSD (Chen & Pock 2017), Waterloo Exploration Database (Ma et al. 2017), DIV2K (Agustsson & Timofte 2017), and Flickr2K (Lim et al. 2017) datasets, it is remarkable that it works well on astronomical data, showing great generalization. Finally, the fractional error in the output flux as a function of the i-band magnitude is shown in Fig. 6, which indicates that the mean flux error is less than 5% for the entire magnitude range for all the bands.
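The corner-based noise estimate takes only a few lines. The sketch below uses skimage.restoration.estimate_sigma on four 16 × 16 corner patches and averages the results into a constant noise-level map; whether the paper uses this exact scikit-image routine is an assumption, and the pretrained DRUNet is abstracted as a placeholder callable since loading it depends on the DPIR repository setup.

```python
import numpy as np
from skimage.restoration import estimate_sigma

def corner_noise_level(img, patch=16):
    """Average noise level over four patch x patch corner regions (background only)."""
    corners = [img[:patch, :patch], img[:patch, -patch:],
               img[-patch:, :patch], img[-patch:, -patch:]]
    return float(np.mean([estimate_sigma(c) for c in corners]))

def denoise_deconvolved(deconvolved, drunet_fn):
    """drunet_fn is a placeholder for the pretrained DRUNet (DPIR repository);
    it is assumed to take the image together with a constant noise-level map."""
    sigma = corner_noise_level(deconvolved)
    noise_map = np.full_like(deconvolved, sigma)
    return drunet_fn(deconvolved, noise_map)
```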
Fig. 7 Two deconvolved outputs that illustrate the successful recovery of features completely lost in the Rubin simulations. Additionally, the outputs seem to capture the variations when transitioning from one band to another. First column: Euclid VIS image. Second: Rubin simulations in the r, i, and z bands. Third: deconvolved outputs for the three bands. Fourth: corresponding ground-truth HST images. Fifth: residuals, which are defined as follows: residual = noisy Rubin image − PSF ∗ deconvolved image.
Fig. 8 Two denoised outputs from DRUNet that demonstrate its ability to effectively remove noise from the background while preserving the structure of the central galaxy. First column: Rubin simulations in the r, i, and z bands. Second: corresponding pre-denoised outputs. Third: corresponding post-denoised outputs. Fourth and fifth: residuals.
Fig. 9 Galaxy shown in Fig. 7b deconvolved using two different approaches, illustrating that joint deconvolution outperforms independent deconvolutions of individual photometric bands. The joint method allows us to leverage the correlation between the different bands and the space-based image, thus improving the final output. First column: Euclid VIS image. Second: Rubin simulations in the r, i, and z bands. Third: independently deconvolved SUNet outputs for the three bands. Fourth: corresponding joint deconvolution outputs followed by denoising with DRUNet. Fifth: ground-truth HST images.
4.4 Improvement compared to an independent deconvolution of each band
To demonstrate the advantages of jointly deconvolving multiple photometric bands, we performed independent deconvolution of each band using the deconvolution framework introduced in Akhaury et al. (2024). The method proceeds in two steps: it first deconvolves the input using Tikhonov regularization (Tikhonov & Arsenin 1977) and then denoises the result using SUNet (Fan et al. 2022), a state-of-the-art Swin-transformer-based architecture. The results, presented in Fig. 9, show that the joint deconvolution method outperforms the independent deconvolution of individual bands. The joint method enables us to leverage the correlation between the different bands and the space-based image, thus improving the final output.
5 Conclusion
We have presented a novel image deconvolution technique designed to improve the resolution of multiband ground-based data by leveraging higher-resolution space-based observations. Our approach, which focuses on the joint deconvolution of Rubin and Euclid images, effectively exploits the overlapping spectral coverage of the Rubin r, i, and z bands with the Euclid VIS band. Through rigorous testing, we have demonstrated that our iterative algorithm successfully recovers fine details while preserving the flux for each band. Different noise levels were tested, and the resolution recovered from the ground-based data is close to that of HST. Our results indicate that a joint deconvolution of all data outperforms independent deconvolutions of individual photometric bands using existing state-of-the-art methods. By using the Euclid VIS-band image as an additional constraining term, our technique ensures that the deconvolved Rubin images retain high spatial resolution and accurate photometric measurements. The integration of deep-learning-based denoising using DRUNet enhances the quality of the deconvolved output, further reducing background noise without altering the main structures of the galaxies.
The potential applications of our method extend beyond the Euclid-Rubin pair, offering a versatile solution to improving the resolution of ground-based images in multiple photometric bands as long as there exists a space-based image of the same field of view in a band that encompasses all ground-based filters. In the future, we intend to test our deconvolution method on images of the Perseus cluster by using ground-based observations from the Canada–France–Hawaii Telescope (CFHT) and space-based observations from the Euclid Early Release Observations (ERO) public release.
Data availability
For the sake of reproducible research, the codes used for this article are publicly available online.
The ready-to-use version of our joint deconvolution method is available at: https://github.com/utsav-akhaury/Multiband-Deconvolution/tree/main
The DRUNet repository from Zhang et al. (2022) is available at: https://github.com/cszn/DPIR
Acknowledgements
This work was funded by the Swiss National Science Foundation (SNSF) under the Sinergia grant number CRSII5_198674. This work was supported by the TITAN ERA Chair project (contract no. 101086741) within the Horizon Europe Framework Program of the European Commission, and the Agence Nationale de la Recherche (ANR-22-CE31-0014-01 TOSCA).
Appendix A Generalization to objects with non-flat SEDs
To assess the impact of information transfer across different bands for objects with a non-flat SED, we conducted an experiment where we successively replaced the Rubin bands with pure noise, ensuring that no galaxy signal was present. We then analyzed how features from the high-resolution Euclid image propagated into the deconvolved Rubin bands. The outputs are shown in Fig. A.1.
Our results confirm that reconstructed features appear only in bands where the original galaxy signal is present. This demonstrates that the algorithm does not artificially imprint Euclid information onto Rubin bands lacking real data. In cases where a structure is visible in the Euclid image but absent from one or more Rubin bands, the problem becomes degenerate: the feature could either be entirely attributed to a single band or distributed across multiple bands. However, our findings indicate that the outputs are directly influenced by the input data in each band rather than being dictated solely by the high-resolution Euclid image. Our results demonstrate that the joint deconvolution method effectively utilizes the available signal in each band while respecting the constraints imposed by the data. This also confirms that the method would work for objects with nonflat SEDs, where signal may be present in only one band but absent in others, ensuring that features are accurately transferred according to their actual distribution across the bands.
As mentioned in Sect. 4, we find that the algorithm converges within 200 iterations when the signal is available in all three bands. When one band is replaced with a noise map, as shown in Fig. A.1a, convergence requires approximately 1000 iterations. Replacing a second band with a noise map, as seen in Fig. A.1b, further increases the iteration count to around 5000. This demonstrates that incorporating more data across different bands significantly accelerates the convergence of the loss functions as the algorithm can benefit by capturing the correlations between these bands.
Appendix B Plug-and-play ADMM
The plug-and-play alternating direction method of multipliers (PnP ADMM) has emerged as a powerful framework for solving inverse problems by combining iterative optimization techniques with deep-learning-based priors. Originally developed for convex optimization problems with linear equality constraints (Boyd et al. 2010), ADMM decomposes the minimization process into sequential sub-problems, typically involving a data fidelity term and a regularization term, followed by an update of the dual variable. Previous works (Venkatakrishnan et al. 2013; Sreehari et al. 2016; Chan et al. 2017) have interpreted these sub-steps as an inversion step followed by a denoising step, coupled via the augmented Lagrangian term and the dual variable. The PnP ADMM approach extends this idea by replacing the proximal operator related to the prior with a deep neural network (DNN) trained as a denoiser (Meinhardt et al. 2017; Bigdeli et al. 2017; Gupta et al. 2018; Sureau et al. 2020), allowing for greater flexibility in handling complex image priors. Compared to direct deep-learning-based inverse models, PnP ADMM offers several advantages: (1) it decouples the inversion step from the DNN, enabling the inclusion of additional convex constraints that can be efficiently handled via optimization; (2) it reduces the cost of learning by focusing solely on training a denoiser rather than multiple networks, as seen in unfolding approaches; and (3) by iterating between denoising and inversion, it ensures that the network output remains consistent with the observed data. In this work, we integrate the PnP ADMM framework from Sureau et al. (2020) with the DRUNet denoiser from Zhang et al. (2022). Specifically, we employ DRUNet in the proximal update step of the PnP framework to enhance the denoising performance.
Fig. A.1 Galaxy from Fig. 7b, with each band successively replaced by a random noise map, as shown when progressing from Figs. A.1a to A.1c. Reconstructed features appear only in bands where the original galaxy signal is present.
B.1 The proposed solution
Considering the forward model described in Sect. 2.1, we defined the following loss functions, akin to Eqs. (6)–(8) but incorporating an additional augmented Lagrangian term with the dual variable. These functions were then minimized using the algorithm outlined in Appendix B.2. For the r band,

L_r(x_r) = ‖(y_r − h_r ∗ x_r) / σ_r‖₂² + (ρ/2) ‖x_r − z_r + μ_r‖₂² + λ_r ‖(y_euc − h_euc ∗ (α_r x_r + α_i x_i + α_z x_z)) / σ_euc‖₂², (B.1)

and Eqs. (B.2) and (B.3) are the analogous expressions for the i and z bands.
1: Initialize: ρ0 = 1, ρmax = 10, η = 0.5, γ = 1.4, Δ0 = 0, x(0) = y, z(0) = x(0), μ(0) = 0, ε
2: for k = 0 to Niterations do {Main loop}
3:   Deconvolution sub-problem: x(k+1) = FISTA(y, x(k), z(k), μ(k), ρk) (Beck & Teboulle 2009)
4:   Denoising sub-problem: z(k+1) = Nθ(x(k+1) + μ(k)) (Nθ = DRUNet denoiser)
5:   Lagrange multiplier update: μ(k+1) = μ(k) + (x(k+1) − z(k+1))
6:   Δk+1 = combined relative change in x, z, and μ between iterations k and k+1
7:   if Δk+1 ≥ ηΔk and ρk+1 ≤ ρmax then
8:     ρk+1 = γρk
9:   else
10:    ρk+1 = ρk
11:  end if
12: end for
13: return {x(k+1)}
As before, the first terms in Eqs. (B.1)–(B.3) represent the data fidelity terms for each respective band, with σr, σi, and σz being the corresponding noise maps. The second terms represent the augmented Lagrangian, incorporating the auxiliary variable z to split the problem into two sub-problems: an inversion/deconvolution step followed by a denoising step. The third terms, as explained in Sect. 2.2, are the constraining terms that enforce the condition that the sum of the Rubin r-, i-, and z-band images equals the Euclid VIS-band image. For the ADMM update, the parameter ρ was manually tuned based on the approach from Sureau et al. (2020) to strike a balance between quickly stabilizing the algorithm (with a higher ρ) and prioritizing the minimization of the data fidelity term in the early iterations (with a lower ρ). The values of all other hyperparameters have been previously described in Sects. 2.2 and 2.3.
B.2 The algorithm
The PnP ADMM algorithm inspired by Sureau et al. (2020) is summarized in the listing above. The first step consists of minimizing the loss functions (B.1)–(B.3) alternately at iteration k using the accelerated iterative convex algorithm FISTA (Beck & Teboulle 2009). In the second step, the DRUNet denoiser functions as a projector in the proximal update step, as described earlier. The final step regulates the augmented Lagrangian parameter, ensuring its increase when the optimization parameters exhibit insufficient change.
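A compact sketch of the loop described above, with the FISTA deconvolution sub-problem abstracted into a user-supplied solve_x callable and DRUNet into denoise_fn. The change metric Δ and the ρ cap are written in one plausible form; the listing leaves their exact definitions implicit, so treat them as assumptions.

```python
import numpy as np

def pnp_admm(y, solve_x, denoise_fn, n_iter=200,
             rho0=1.0, rho_max=10.0, eta=0.5, gamma=1.4):
    """Plug-and-play ADMM: alternate a deconvolution step (solve_x, e.g. FISTA),
    a denoising step (denoise_fn, e.g. DRUNet), and a dual update, adapting rho."""
    x = y.copy()                      # first guess: the noisy observation
    z = x.copy()
    mu = np.zeros_like(x)
    rho, delta = rho0, 0.0
    for _ in range(n_iter):
        x = solve_x(y, x, z, mu, rho)              # deconvolution sub-problem
        z = denoise_fn(x + mu)                     # denoising sub-problem
        mu = mu + (x - z)                          # Lagrange multiplier update
        delta_new = np.linalg.norm(x - z) / max(np.linalg.norm(x), 1e-12)  # assumed metric
        if delta_new >= eta * delta and gamma * rho <= rho_max:
            rho = gamma * rho                      # increase rho when progress stalls
        delta = delta_new
    return x
```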
B.3 Results
The algorithm simultaneously processed the noisy simulations from the three Rubin bands and the Euclid VIS band, along with their respective PSFs. These noisy images served as initializations or first guesses. The algorithm was run for 200 iterations, with convergence typically observed within 150-200 iterations for all images in our dataset. Figure B.1 shows the convergence plot of the loss function for the deconvolved output in Fig. B.2a.
Fig. B.1 Loss function for the galaxy shown in Fig. B.2a. Convergence is achieved at around 200 iterations when the relative change in loss value is <10−3 and the curve is flat.
Figure B.2 presents the same two examples of deconvolved images as shown in Fig. 7. Qualitatively, the outputs for the two methods closely resemble each other. Compared to the original algorithm, there is a slight reduction in NMSE by approximately 0.26%. However, due to the additional computational steps involved in the iterative process and the proximal denoising step, PnP ADMM takes approximately 50 times longer to run than the original algorithm. This is consistent with the findings of Sureau et al. (2020). Hence, this significant increase in computational time makes it less practical for use on large datasets.
Fig. B.2 Outputs of the PnP ADMM algorithm for the two galaxies shown in Fig. 7, closely matching the results presented in the same figure. First column: Euclid VIS image. Second: Rubin simulations in the r, i, and z bands. Third: deconvolved outputs for the three bands. Fourth: corresponding ground-truth HST images. Fifth: residuals.
References
- Abolfathi, B., Alonso, D., Armstrong, R., et al. 2021, ApJS, 253, 31
- Agustsson, E., & Timofte, R. 2017, in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1122
- Akhaury, U., Starck, J.-L., Jablonka, P., Courbin, F., & Michalewicz, K. 2022, Front. Astron. Space Sci., 9, 1001043
- Akhaury, U., Jablonka, P., Starck, J.-L., & Courbin, F. 2024, A&A, 688, A6
- Beck, A., & Teboulle, M. 2009, SIAM J. Imag. Sci., 2, 183
- Bigdeli, S. A., Jin, M., Favaro, P., & Zwicker, M. 2017, in Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17 (USA: Curran Associates Inc.), 763
- Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. 2010, Mach. Learn., 3, 1
- Cantale, N., Courbin, F., Tewes, M., Jablonka, P., & Meylan, G. 2016, A&A, 589, A81
- Chan, S. H., Wang, X., & Elgendy, O. A. 2017, IEEE Trans. Computat. Imag., 3, 84
- Chen, Y., & Pock, T. 2017, IEEE Trans. Pattern Anal. Mach. Intell., 39, 1256
- Donath, A., Siemiginowska, A., Kashyap, V., Dyk, D. V., & Burke, D. 2023, Bull. AAS, 55, 4
- Euclid Collaboration (Scaramella, R., et al.) 2022, A&A, 662, A112
- Fan, C.-M., Liu, T.-J., & Liu, K.-H. 2022, in 2022 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE)
- Gupta, H., Jin, K. H., Nguyen, H. Q., McCann, M. T., & Unser, M. 2018, IEEE Trans. Med. Imag., 37, 1440
- He, K., Zhang, X., Ren, S., & Sun, J. 2016, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770
- Ingaramo, M., York, A. G., Hoogendoorn, E., et al. 2014, ChemPhysChem, 15, 794
- Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, ApJ, 873, 111
- Joseph, R., Melchior, P., & Moolekamp, F. 2021, arXiv e-prints [arXiv:2107.06984]
- Liaudat, T., Starck, J.-L., Kilbinger, M., & Frugier, P.-A. 2022, Inverse Problems, 39, 035008
- Lim, B., Son, S., Kim, H., Nah, S., & Lee, K. M. 2017, in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1132
- Lucy, L. B. 1974, AJ, 79, 745
- Ma, K., Duanmu, Z., Wu, Q., et al. 2017, IEEE Trans. Image Process., 26, 1004
- Magain, P., Courbin, F., & Sohy, S. 1998, ApJ, 494, 472
- Meinhardt, T., Moller, M., Hazirbas, C., & Cremers, D. 2017, in Proceedings of the IEEE International Conference on Computer Vision, 1781
- Michalewicz, K., Millon, M., Dux, F., & Courbin, F. 2023, J. Open Source Softw., 8, 5340
- Nammour, F., Akhaury, U., Girard, J. N., et al. 2022, A&A, 663, A69
- Ramzi, Z., Michalewicz, K., Starck, J.-L., Moreau, T., & Ciuciu, P. 2023, J. Math. Imag. Vis., 65, 240
- Retzlaff, J., Rosati, P., Dickinson, M., et al. 2010, A&A, 511, A50
- Richardson, W. H. 1972, J. Opt. Soc. Am., 62, 55
- Ronneberger, O., Fischer, P., & Brox, T. 2015, arXiv e-prints [arXiv:1505.04597]
- Skilling, J., & Bryan, R. K. 1984, MNRAS, 211, 111
- Sreehari, S., Venkatakrishnan, S., Wohlberg, B., et al. 2016, IEEE Trans. Computat. Imag., 2, 408
- Starck, J.-L., Murtagh, F., & Bertero, M. 2015, Starlet Transform in Astronomical Data Processing, ed. O. Scherzer (New York, NY: Springer New York), 2053
- Sureau, F., Lechat, A., & Starck, J.-L. 2020, A&A, 641, A67
- Tikhonov, A. N., & Arsenin, V. Y. 1977, Solutions of Ill-posed Problems, Scripta Series in Mathematics (Washington, D.C.: V. H. Winston & Sons; New York: John Wiley & Sons), translated from the Russian
- van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., et al. 2014, arXiv e-prints [arXiv:1407.6245]
- Venkatakrishnan, S. V., Bouman, C. A., & Wohlberg, B. 2013, in 2013 IEEE Global Conference on Signal and Information Processing, 945
- Venkatesh, G. M., Naresh, Y. G., Little, S., & O’Connor, N. E. 2018, in OR 2.0/CARE/CLIP/ISIC@MICCAI
- Zhang, K., Zuo, W., & Zhang, L. 2018a, IEEE Trans. Image Process., 27, 4608
- Zhang, Z., Liu, Q., & Wang, Y. 2018b, IEEE Geosci. Remote Sens. Lett., 15, 749
- Zhang, K., Li, Y., Zuo, W., et al. 2022, IEEE Trans. Pattern Anal. Mach. Intell., 44, 6360